#import <Foundation/Foundation.h>
// Assumes a Fraction class (defined elsewhere in the program) that implements
// the class methods +allocF and +count.

int main(int argc, const char * argv[]) {
    @autoreleasepool {
        Fraction *a, *b, *c;

        NSLog(@"Fractions allocated: %i", [Fraction count]);

        a = [[Fraction allocF] init];
        b = [[Fraction allocF] init];
        c = [[Fraction allocF] init];

        NSLog(@"Fractions allocated: %i", [Fraction count]);
    }
    return 0;
}
{ "redpajama_set_name": "RedPajamaGithub" }
3,172
Surrey truly does have it all! The restaurants in Surrey, British Columbia will amaze you, and keep you coming back for more. There are so many wonderful eating establishments to choose from, it may become hard to make a decision. So to help you make your dining decision, take a look at the following best restaurants according to Yelp.
{ "redpajama_set_name": "RedPajamaC4" }
2,522
Stephen Douglas Adams (born May 27, 1951) is a former state treasurer of Tennessee, who served in that position for sixteen years, from 1987 to 2003.

Early life and education

Adams was born in Cornersville in Marshall County, Tennessee. He attended Austin Peay State University.

Career

Adams joined the Tennessee Department of Conservation in 1973 and left for the Treasury Department in 1975. He was elected Tennessee State Treasurer by the state's General Assembly in 1987, following the appointment of Harlan Mathews to Governor Ned McWherter's cabinet, and was reelected eight more times. He was President of the National Association of State Treasurers from 1997 to 1998.

Adams resigned as Treasurer on October 24, 2003, to become Chief Administrative Officer of the Tennessee Lottery. He was dismissed in 2006 after allegations of workplace harassment. Adams denied the claims and took legal action against the state lottery to release his employment records.

Categories: 1951 births; People from Marshall County, Tennessee; Austin Peay State University alumni; Tennessee Democrats; State treasurers of Tennessee; Living people
{ "redpajama_set_name": "RedPajamaWikipedia" }
1,482
Q: jQuery looping animation jumps past animations

First, I tried to create a looping animation like this fiddle:

    function rotator() {
        $('#foo').stop(true).delay(500).fadeIn('slow').delay(2500).fadeOut('slow', rotator);
    }
    $('#foo').hide();
    rotator();

This appears to skip the delay after the first iteration. So I tried this fiddle:

    function rotator() {
        $('#foo').stop(true).delay(500).fadeIn('slow').delay(2500).fadeOut('slow', rotator);
    }
    $('#foo').hide();
    rotator();

In this case, the fadeIn appears to jump straight to show after the first iteration. I'm stumped. All these operations should be part of the fx queue. And even if they weren't, I can't explain why a fadeIn would change to a show. Any ideas?

A: Is this more of the effect you're looking for:

    function rotator() {
        $('#foo').delay(500).fadeIn('slow', function () {
            $(this).delay(2500).fadeOut('slow', rotator);
        });
    }
    $('#foo').hide();
    rotator();

Update: I figure I should explain why your code was having problems. In jQuery, animations are asynchronous (i.e. non-blocking). So your code was queuing the animations at the same time, just not to run until some time in the future. You have to run the following animation in the callback, to ensure that it doesn't fire until the previous animation has completed.

Another update: I just tried the following code and it seemed to work. Give it a shot:

    function rotator() {
        $('#foo').delay(500).fadeIn('slow').delay(2500).fadeOut('slow', rotator);
    }
    $('#foo').hide();
    rotator();

Give it a go and let me know if it works.

A: My first thought is that the fadeIn/fadeOuts are asynchronous, so you'd go immediately to the next chained command. For example, in your first set of code, if you did this:

    function rotator() {
        $('#foo').stop(true).delay(500).fadeIn('slow', function () {
            $('#foo').delay(2500).fadeOut('slow', rotator);
        });
    }
    $('#foo').hide();
    rotator();

you would see your 2500ms delay.
{ "redpajama_set_name": "RedPajamaStackExchange" }
6,270
Marstall in Hannover may refer to:
Marstall at the Welfenschloss in Hannover-Nordstadt, today a building of the Hannover University library
Hofmarställe am Hohen Ufer, buildings in the area of the Hohes Ufer in Hannover
Am Marstall (Hannover), a square in the old town of Hannover
{ "redpajama_set_name": "RedPajamaWikipedia" }
1,606
The Mocha Manual to a Fabulous Pregnancy is a straight-talking handbook to pregnancy, with contributions from doctors and personal stories from black women and celebrity moms. Kimberly Seals-Allers offers candid advice on health concerns that particularly affect black women, such as high blood pressure, sickle cell disease, diabetes, and low birth weight, as well as guidance on getting your finances in order, coping with embarrassing pigmentation and hair texture changes, single parenting, maternity fashion, demanding jobs, and hormone-induced meltdowns. Hip, funny, and refreshingly frank, this book is a must-have for all mothers-to-be.
{ "redpajama_set_name": "RedPajamaC4" }
513
The Bradley GT II Electric car was built by the Bradley Automotive company, based in Minneapolis, Minnesota. The company created kit cars that bolted onto the chassis of existing Volkswagen Beetles. Bradley Automotive made only 50 of these in 1980.
{ "redpajama_set_name": "RedPajamaC4" }
6,516
Q: Python iteration of list of objects "not iterable"

New to Python, but I have been researching this for a couple of hours. Forgive me if I missed something obvious.

I have a class called LineItem, which has an attribute _lineItems, a list of LineItems that belong to the given LineItem. A sub-list, basically. I want to print out a LineItem and all of its sub-items (and the sub-items' own sub-items), but I'm having trouble with the iteration.

    from decimal import *

    class LineItem(object):
        """
        Instance attributes:
        amount: Decimal
        _lineItems: list of child internal lineitems (possibly an empty list)
        isInternal: bool
        """
        def __init__(self, **kw):
            self.amount = Decimal(0)
            self._lineItems = []
            self.isInternal = False
            for k, v in kw.items():
                setattr(self, k, v)

An example LineItem that's giving me trouble is defined below as ext2, with three children.

    # External line item with one level of children
    int1 = LineItem(amount=Decimal('1886.75'), description='State Dues', isInternal=True)
    int2 = LineItem(amount=Decimal('232.50'), description='National Dues', isInternal=True)
    int3 = LineItem(amount=Decimal('50'), description='Processing Fee', isInternal=True)
    ext2 = LineItem(amount=Decimal('2169.25'), description='Dues', _lineItems=[int1, int2, int3])

I have this recursive function for iterating all the sub-items (and printing them numbered, like 1, 2, 2.1 as the first sub-item of the second item, etc.):

    def print_line_item(LineItems):
        count = 1
        for a in LineItems:
            print count, ' ', a.description, ' (', a.amount, ')'
            if a._lineItems != []:
                for b in a._lineItems:
                    print count, '.', print_line_item(b),
            count += 1

but when I try to use it

    def main():
        print_line_item([ext1, ext2, ext3])  # ext1 has no children, prints fine

    if __name__ == "__main__":
        main()

I get

    line 56, in print_line_item
        print count, '.', print_line_item(b),
    line 51, in print_line_item
        for a in LineItems:
    TypeError: 'LineItem' object is not iterable

Okay, so somehow I'm screwing up lists. If I add a couple of print statements:

    def print_line_item(LineItems):
        count = 1
        for a in LineItems:
            print count, ' ', a.description, ' (', a.amount, ')'
            if a._lineItems != []:
                print a._lineItems
                for b in a._lineItems:
                    print b
                    print count, '.', print_line_item(b),
            count += 1

I get proof that a._lineItems is indeed a list, printed as follows:

    [<__main__.LineItem object at 0x0227C430>, <__main__.LineItem object at 0x0227C5F0>, <__main__.LineItem object at 0x0227C670>]

and that the b I'm trying to pass to the recursive call is a single LineItem:

    <__main__.LineItem object at 0x0227C430>

So how am I actually supposed to do this? I tried a couple of things with .iter or __iter__, but no luck. Also, the if a._lineItems != [] check doesn't seem to be working either (nor variations on it); I get printed lines of "None".

A: This might be a correct version (not tested):

    def print_line_item(LineItems, precedingNumber='1'):
        count = 1
        for a in LineItems:
            print precedingNumber, '.', count, ' ', a.description, ' (', a.amount, ')'
            print_line_item(a._lineItems, precedingNumber + '.' + str(count))
            count += 1

A: It makes sense you're getting a not-iterable message: you're essentially recursing into print_line_item for each item in a list, and sooner or later you'll hit something in a list that isn't iterable itself, and you just go on and call print_line_item() on it, which will try to iterate over it. If you want to ask "is this item a list?" you could use isinstance(some_object, list). Or, if you want to allow for other iterable-but-not-list things, you can use isinstance(some_object, collections.Iterable) (you'll have to import collections).
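The fix the answers hint at can be made concrete: the recursive call must always receive a *list* of items, never a single LineItem. Below is a self-contained sketch in modern Python 3 (a simplified LineItem and a hypothetical `format_line_items` helper, not the asker's exact code):

```python
from decimal import Decimal

class LineItem(object):
    def __init__(self, amount=Decimal(0), description='', line_items=None):
        self.amount = amount
        self.description = description
        self._lineItems = line_items or []

def format_line_items(items, prefix=''):
    """Return numbered lines like '1', '2', '2.1' for nested LineItems."""
    lines = []
    for count, item in enumerate(items, start=1):
        number = prefix + str(count)
        lines.append('%s %s (%s)' % (number, item.description, item.amount))
        # Recurse with the child *list*, not a single LineItem.
        lines.extend(format_line_items(item._lineItems, number + '.'))
    return lines

int1 = LineItem(Decimal('1886.75'), 'State Dues')
ext2 = LineItem(Decimal('2169.25'), 'Dues', [int1])
for line in format_line_items([ext2]):
    print(line)
```

Building and returning a list of strings instead of printing inside the recursion also removes the stray "None" output the asker saw, which came from printing the return value of the recursive call.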
{ "redpajama_set_name": "RedPajamaStackExchange" }
2,337
Want to Achieve Your Goals? Start By Wiping the Slate Clean By Michelle M. Smith August 17, 2015 At one time or another, most of us have struggled to complete a task we know we should. We often fall short of our goals, not because they're unattainable, but because we failed to exert the effort required. Katherine Milkman is determined to help us do better next time. The Associate Professor at the University of Pennsylvania's Wharton School researched what she calls the "fresh-start effect" – the energy and determination we feel when we're able to wipe the slate clean. The same momentum that drives us to join the gym in January can be harnessed to help us focus on the pursuit of our goals at other times throughout the year. In these moments when we can begin anew, we have a natural motivation to work harder. An ordinary Monday takes on a new identity when it's framed as an opportunity to correct the shortfalls of the previous week. 
Powerful leadership implications 
Understanding what drags employees down, and what can lift them back up with renewed vigor, has the potential to create a more committed and engaged workforce. Workers often feel conflicted when choosing between "wants" and "shoulds," and these conflicts are ever present – catching up with colleagues around the water cooler versus focusing our efforts on meeting deadlines. When leaders wonder why employees aren't succeeding and accomplishing more, one option is that they lack ability. But another very real possibility is that they may just struggle with self-control. 
The impact of a fresh start 
People tend to make resolutions and pursue their goals with enhanced vigor at the start of every new year. But the same result can be achieved at the start of any cycle — the beginning of a new week, the start of a new month, or after a holiday. In these fresh-start moments, employees feel more distant from their past failures. 
The fresh-start effect hinges on the idea that we don't feel as perfect about our past as we'd like. We're always striving to be better. And when we can wipe out those failures and look at a clean slate, it makes us feel more capable, drives us forward, and we redouble our efforts to achieve our goals. For leaders, the best time to encourage your staff to take new steps toward their goals—and to provide the tools they need to achieve them — will be these fresh-start moments, because that's when employees have a natural inclination to put in the extra effort. However, you're not done after finding one fresh-start moment to motivate your troops. You need to understand that motivation tapers off, and you must keep motivating people and keep looking for opportunities to give them the sense of empowerment they need to succeed. Don't manipulate – help employees stay the course One of the nice things about motivation and goals is that, most of the time, what you're trying to encourage employees to do is aligned with what they already want to do, so it doesn't engender a sense of coercion. Remember: You're on target as long as you're using these strategies to encourage behaviors people want to engage in, but are struggling to follow through on. That's when fresh starts work — when people really want to accomplish things but they need that extra motivation to keep going. Michelle M. Smith Named as one of the Ten Best and Brightest Women, one of the 25 Most Influential People in the incentive industry, and selected for the Employee Engagement Power 100 list, Michelle was inducted into the Incentive Marketing Association's Hall of Fame and received their President's and Karen Renk Fellowship Awards. She's a highly accomplished international speaker, author, and strategist on leadership, company culture, workplace trends and employee engagement. 
Michelle was the Founder and Chair of the Editorial Board of Return on Performance Magazine, and has been featured on Fox Television, the BBC, in Fortune, Business Week, Inc. and other global publications, and contributed to the books Bull Market by Seth Godin, Contented Cows Still Give Better Milk, and Social Media Isn't Social. Connect with her via LinkedIn or Twitter.
{ "redpajama_set_name": "RedPajamaCommonCrawl" }
5,487
/* * event_condition.hpp * * Created on: Feb 26, 2014 * Author: wangqiying */ #ifndef EVENT_CONDITION_HPP_ #define EVENT_CONDITION_HPP_ #include <unistd.h> #include <stdint.h> namespace ardb { class EventCondition { private: int m_read_fd; int m_write_fd; volatile uint32_t m_waiting_num; public: EventCondition(); int Wait(); int Notify(); int NotifyAll(); ~EventCondition(); }; } #endif /* EVENT_CONDITION_HPP_ */
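The header above only declares the interface, but the member names (`m_read_fd`, `m_write_fd`, `m_waiting_num`) suggest a classic pipe-based wait/notify pattern: `Notify()` writes a byte, `Wait()` blocks on a read. The following Python sketch illustrates that pattern; it is an assumption about the implementation, not taken from the source:

```python
import os
import threading

class EventCondition(object):
    """Pipe-based wait/notify, sketching the pattern the C++ header suggests."""

    def __init__(self):
        # One byte in the pipe == one pending wake-up.
        self._read_fd, self._write_fd = os.pipe()
        self._waiting = 0
        self._lock = threading.Lock()

    def wait(self):
        with self._lock:
            self._waiting += 1
        os.read(self._read_fd, 1)  # blocks until someone notifies
        with self._lock:
            self._waiting -= 1

    def notify(self):
        os.write(self._write_fd, b'x')  # wakes exactly one waiter

    def notify_all(self):
        with self._lock:
            n = self._waiting
        for _ in range(n):  # one byte per currently blocked waiter
            os.write(self._write_fd, b'x')

    def close(self):
        os.close(self._read_fd)
        os.close(self._write_fd)

ec = EventCondition()
ec.notify()  # queue one wake-up
ec.wait()    # returns immediately: the byte is already buffered in the pipe
ec.close()
```

An advantage of this design over a plain condition variable is that the read end is a file descriptor, so it can also be registered with an event loop (epoll/kqueue), which fits the networking context of the surrounding project.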
{ "redpajama_set_name": "RedPajamaGithub" }
3,045
The ACMA reminds licensees to lodge their community radio licence renewal form (PDF or Word) one year before their licence is due to expire. When a renewal application is received, the ACMA conducts an assessment and investigation of the licence. 
You can do much the same thing with the General Mobile Radio Service, Family Radio Service, and Multi-Use Radio Service. These services are free (except for GMRS, but the license is affordable), and offer much better communications. 
The purpose of this web service is to give information to radio amateurs about how to get a license when abroad. All the major DXCC countries have been included in this service. 
For an overseas licence holder to obtain a flight crew licence with an aircraft category rating, CASA must be satisfied that the overseas licence is at least equivalent. In addition, you must be able to demonstrate aviation English language proficiency and hold an authorisation to operate a radio. 
In the UK there are three types of licence, giving different levels of privilege. The Foundation Licence is the entry level, designed to get you involved in amateur radio as quickly as possible.
{ "redpajama_set_name": "RedPajamaC4" }
2,458
You've definitely come to the right place. I love getting clients started on their WordPress journey with the very best setup that's on the market. Of course, we start with the all-powerful WordPress.org platform. There are two different kinds of WordPress out there. The first is WordPress.com, which is free like the Blogger platform; it looks similar to self-hosted WordPress.org, but without all the cool plugins and control. Below is my step-by-step process for bringing what's in your mind alive on your screen. We'll work side by side to get you where you want to be. Let's hop on a FREE 30-minute discovery call and talk about your project and what I can do for you! I look forward to working with you! See the "Happy Clients" page for even more happy clients. Jennifer Green, President of Kids' Club of Tarrytown and Sleepy Hollow, Inc. In 2016 I started a new website/blog and decided to stick with Blogger as I had used it successfully for nine years. After designing the header and setting up the blog, I decided to move it to WordPress (which I knew nothing about). I realized I needed help to migrate from Blogger to WordPress and I needed someone who knew WordPress like the back of their hand, someone who could make sure it was running as it should. Enter Rena. I've worked with Rena on all kinds of techy-type projects and I can say with full confidence that she is responsible, timely, affordable, knowledgeable, willing to take initiative, able to learn on the fly, and has a knack for all those server maintenance and WordPress issues that flouncy designers don't want to deal with. I would recommend her highly! We selected a website template that was fairly restrictive and provided lots of challenges around making changes in design. Rena tackled whatever needed to be handled so that I'd be a satisfied client in terms of the design and the internal mechanisms required to make the site work for my readers. 
She's always available and reacts quickly to requests and issues. She's knowledgeable about how to help the site increase visibility and how, from a maintenance standpoint, your site can run efficiently over time. Working with Rena is such a pleasure. She is helpful and always responds quickly - nothing is too much trouble and no question is too silly. It's a joy to work with someone who obviously loves what they do and is really good at it. Whilst Rena is definitely a WordPress expert, she's also willing to learn new things and provide creative solutions to issues. Rena has been helping me prepare my first online course for release, and her expertise and support - not to mention endless patience - have been instrumental in getting me to the finish line. I've worked with other people on blog, website and course development in the past and was often made to feel like my project wasn't worth their time - you'll never feel that way with Rena - no matter how big or small the request, she will make you feel like a valued customer. She cleaned up my website (a total disaster with terrible stuff and mistakes) and helped me in every way. I can email her the smallest question and she is back to me in a flash. I cannot recommend Rena more highly... she's a savior. She is reasonable, she is so smart, and she gets all of the ins and outs of WordPress. I've been blogging and posting once a week on my WordPress blog, http://boomerhighway.org, since 2009. A friend in Iowa set things up for me and became my tech person, my problem solver. But when she could no longer be that person, I started to panic. Enter Rena McDaniel and THE BLOGGING 911. I knew Rena from her posts on THE WOMEN OF MIDLIFE. 
She had supported the publication of my book of short stories, A MOTHER'S TIME CAPSULE, in 2015, and I had read her empathic posts from THE DIARY OF AN ALZHEIMER CAREGIVER, always feeling a connection as my mother had dementia. Now Rena and THE BLOGGING 911 have me back up and running, all aspects of my blog alive, updated and functioning. It's great. Rena will talk to you about the services she can provide, and not only is she totally competent, but her fees are also exceedingly reasonable. I also know that if I want some changes made, she will be there for me. I could not recommend her more highly than I do. Losing readers was a major concern as I made the switch. Rena helped me stay calm, and once again my readers are finding me, reading me, commenting! She answers every email in a VERY timely manner. I find HER a JOY. There is ALWAYS an email shot BACK AT ME saying, "job COMPLETED"!
{ "redpajama_set_name": "RedPajamaC4" }
8,892
var ModuleProvider = (function () {
    function ModuleProvider($controllerProvider, $provide, $compileProvider,
                            $filterProvider, $injector, $routeProvider) {
        this._providers = {};
        this._modulesLoaded = [];
        this._providers.$controllerProvider = $controllerProvider;
        this._providers.$provide = $provide;
        this._providers.$compileProvider = $compileProvider;
        this._providers.$filterProvider = $filterProvider;
        this._providers.$injector = $injector;
        this._providers.$routeProvider = $routeProvider;
        this.$get.$inject = ['$q', '$rootScope'];
    }

    ModuleProvider.prototype.$get = function ($q, $rootScope) {
        var _this = this;
        return {
            loadModule: function (config) {
                _this.register(_this._providers, [config.name]);
            }
        };
    };

    ModuleProvider.prototype.register = function (providers, moduleNames) {
        var i, ii, k, invokeQueue, moduleName, moduleFn, invokeArgs, provider;
        if (moduleNames) {
            var runBlocks = [];
            for (k = moduleNames.length - 1; k >= 0; k--) {
                moduleName = moduleNames[k];
                moduleFn = angular.module(moduleName);
                if (this._modulesLoaded.indexOf(moduleName) >= 0) {
                    continue;
                }
                this._modulesLoaded.push(moduleName);
                runBlocks = runBlocks.concat(moduleFn._runBlocks);
                try {
                    for (invokeQueue = moduleFn._invokeQueue, i = 0, ii = invokeQueue.length; i < ii; i++) {
                        invokeArgs = invokeQueue[i];
                        if (providers.hasOwnProperty(invokeArgs[0])) {
                            provider = providers[invokeArgs[0]];
                        } else {
                            return 'Unknown provider!';
                        }
                        provider[invokeArgs[1]].apply(provider, invokeArgs[2]);
                    }
                    for (var configIndex = 0; configIndex < moduleFn._configBlocks.length; configIndex++) {
                        var config = moduleFn._configBlocks[configIndex];
                        providers[config[0]][config[1]](config[2][0][1]);
                    }
                } catch (e) {
                    if (e.message) {
                        e.message += ' from ' + moduleName;
                    }
                    return e.message; // note: an unreachable `throw e;` followed this in the original
                }
                moduleNames.pop();
            }
            angular.forEach(runBlocks, function (fn) {
                providers.$injector.invoke(fn);
            });
        }
        return null;
    };

    return ModuleProvider;
})();
ModuleProvider.$inject = ['$controllerProvider', '$provide', '$compileProvider',
    '$filterProvider', '$injector', '$routeProvider'];
module.exports = ModuleProvider;
//# sourceMappingURL=app.moduleProvider.js.map
{ "redpajama_set_name": "RedPajamaGithub" }
9,546
Fresh questions for Amazon over pittance it pays in tax [UK] 
UK: MPs ready to "haul Amazon back to parliament" for questions after investigation suggests it is "pushing the tax rulebook to its limits" - includes company comments. Author: Ian Griffiths & Simon Bowers, Guardian [UK]. Published on: 16 May 2013. 
MPs are ready to haul Amazon back to parliament to answer new questions about its tax status in Britain after a Guardian investigation's findings suggest the online retailer is pushing the tax rulebook to its limits to minimise its tax bill. Company filings showed Amazon's main UK company paid just £3.2m in corporation tax on sales of £320m last year. However, the Seattle-based group has told investors its 2012 UK sales were £4.2bn. The Guardian investigation has found…tax authorities unable, or unwilling, to prevent the imposition of aggressive tax avoidance structures…An Amazon spokesman said: "Amazon pays all applicable taxes in every jurisdiction that it operates within. Amazon EU serves tens of millions of customers and sellers throughout Europe from multiple consumer websites in a number of languages dispatching products to all 27 countries in the EU. We have a single European headquarters in Luxembourg with hundreds of employees to manage this complex operation." [The article also refers to Google and Ernst & Young.] 
Related companies: Amazon.com; EY (Ernst & Young); Google (part of Alphabet)
{ "redpajama_set_name": "RedPajamaCommonCrawl" }
9,601
{"url":"https:\/\/sherpa-team.gitlab.io\/sherpa\/v3.0.0alpha1\/manual\/parameters\/models.html","text":"# 5.4. Models\u00b6\n\nThe main switch MODEL sets the model that Sherpa uses throughout the simulation run. The default is SM, the built-in Standard Model implementation of Sherpa. For BSM simulations, Sherpa offers an option to use the Universal FeynRules Output Format (UFO) [DDF+12].\n\nPlease note: AMEGIC can only be used for the built-in models (SM and HEFT). For anything else, please use Comix.\n\n## 5.4.1. Built-in Models\u00b6\n\n### 5.4.1.1. Standard Model\u00b6\n\nThe SM inputs for the electroweak sector can be given in nine different schemes, that correspond to different choices of which SM physics parameters are considered fixed and which are derived from the given quantities. The electro-weak coupling is by default fixed, unless its running has been enabled (cf. COUPLINGS). The input schemes are selected through the EW_SCHEME parameter, whose default is alphamZ. The following options are provided:\n\nUserDefined\n\nAll EW parameters are explicitly given: Here the W, Z and Higgs masses and widths are taken as inputs, and the parameters 1\/ALPHAQED(0), ALPHAQED_DEFAULT_SCALE, SIN2THETAW (weak mixing angle), VEV (Higgs field vacuum expectation value) and LAMBDA (Higgs quartic coupling) have to be specified.\n\nBy default, ALPHAQED_DEFAULT_SCALE: 8315.18 ($$=m_Z^2$$), which means that the MEs are evaluated with a value of $$\\alpha=\\frac{1}{128.802}$$.\n\nNote that this mode allows to violate the tree-level relations between some of the parameters and might thus lead to gauge violations in some regions of phase space.\n\nalpha0\n\nAll EW parameters are calculated from the W, Z and Higgs masses and widths and the fine structure constant (taken from 1\/ALPHAQED(0) + ALPHAQED_DEFAULT_SCALE, cf. 
below) using tree-level relations.\n\nBy default, ALPHAQED_DEFAULT_SCALE: 0.0, which means that the MEs are evaluated with a value of $$\\alpha=\\frac{1}{137.03599976}$$.\n\nalphamZ\n\nAll EW parameters are calculated from the W, Z and Higgs masses and widths and the fine structure constant (taken from 1\/ALPHAQED(MZ), default 128.802) using tree-level relations.\n\nGmu\n\nThis choice corresponds to the G_mu-scheme. The EW parameters are calculated out of the weak gauge boson masses M_W, M_Z, the Higgs boson mass M_H, their respective widths, and the Fermi constant GF using tree-level relations.\n\nalphamZsW\n\nAll EW parameters are calculated from the Z and Higgs masses and widths, the fine structure constant (taken from 1\/ALPHAQED(MZ), default 128.802), and the weak mixing angle (SIN2THETAW) using tree-level relations. In particular, the W boson mass (and in the complex mass scheme also its width) is a derived quantity.\n\nalphamWsW\n\nAll EW parameters are calculated from the W and Higgs masses and widths, the fine structure constant (taken from 1\/ALPHAQED(MW), default 132.17), and the weak mixing angle (SIN2THETAW) using tree-level relations. In particular, the Z boson mass (and in the complex mass scheme also its width) is a derived quantity.\n\nGmumZsW\n\nAll EW parameters are calculated from the Z and Higgs masses and widths, the Fermi constant (GF), and the weak mixing angle (SIN2THETAW) using tree-level relations. In particular, the W boson mass (and in the complex mass scheme also its width) is a derived quantity.\n\nGmumWsW\n\nAll EW parameters are calculated from the W and Higgs masses and widths, the Fermi constant (GF), and the weak mixing angle (SIN2THETAW) using tree-level relations. In particular, the Z boson mass (and in the complex mass scheme also its width) is a derived quantity.\n\nFeynRules\n\nThis choice corresponds to the scheme employed in the FeynRules\/UFO setup. 
The EW parameters are calculated out of the Z boson mass M_Z, the Higgs boson mass M_H, the Fermi constant GF and the fine structure constant (taken from 1\/ALPHAQED(0) + ALPHAQED_DEFAULT_SCALE, cf. below) using tree-level relations. Note, the W boson mass is not an input parameter in this scheme.\n\nAll Gmu-derived schemes, where the EW coupling is a derived quantity, possess an ambiguity on how to construct a real EW coupling in the complex mass scheme. Several conventions are implemented and can be accessed through GMU_CMS_AQED_CONVENTION.\n\nTo account for quark mixing the CKM matrix elements have to be assigned. For this purpose the Wolfenstein parametrization [Wol83] is employed. The order of expansion in the lambda parameter is defined through\n\nCKM:\nOrder: <order>\n# other CKM settings ...\n\n\nThe default for Order is 0, corresponding to a unit matrix. The parameter convention for higher expansion terms reads:\n\n\u2022 Order: 1, the Cabibbo subsetting has to be set, it parametrizes lambda and has the default value 0.22537.\n\n\u2022 Order: 2, in addition the value of CKM_A has to be set, its default is 0.814.\n\n\u2022 Order: 3, the order lambda^3 expansion, Eta and Rho have to be specified. Their default values are 0.353 and 0.117, respectively.\n\nThe CKM matrix elements V_ij can also be read in using\n\nCKM:\nMatrix_Elements:\ni,j: <V_ij>\n# other CKM matrix elements ...\n# other CKM settings ...\n\n\nComplex values can be given by providing two values: <V_ij> -> [Re, Im]. Values not explicitly given are taken from the afore computed Wolfenstein parametrisation. Setting CKM: {Output: true} enables an output of the CKM matrix.\n\nThe remaining parameter to fully specify the Standard Model is the strong coupling constant at the Z-pole, given through ALPHAS(MZ). Its default value is 0.118. 
If the setup at hand involves hadron collisions and thus PDFs, the value of the strong coupling constant is automatically set consistent with the PDF fit and can not be changed by the user. If Sherpa is compiled with LHAPDF support, it is also possible to use the alphaS evolution provided in LHAPDF by specifying ALPHAS: {USE_PDF: 1}. The perturbative order of the running of the strong coupling can be set via ORDER_ALPHAS, where the default 0 corresponds to one-loop running and 1, 2, 3 to 2,3,4-loops, respectively. If the setup at hand involves PDFs, this parameter is set consistent with the information provided by the PDF set.\n\nIf unstable particles (e.g. W\/Z bosons) appear as intermediate propagators in the process, Sherpa uses the complex mass scheme to construct MEs in a gauge-invariant way. For full consistency with this scheme, by default the dependent EW parameters are also calculated from the complex masses (WIDTH_SCHEME: CMS), yielding complex values e.g. for the weak mixing angle. To keep the parameters real one can set WIDTH_SCHEME: Fixed. This may spoil gauge invariance though.\n\nWith the following switches it is possible to change the properties of all fundamental particles:\n\nPARTICLE_DATA:\n<id>:\n<Property>: <value>\n# other properties for this particle ...\n# data for other particles\n\n\nHere, <id> is the PDG ID of the particle for which one more properties are to be modified. <Property> can be one of the following:\n\nMass\n\nSets the mass (in GeV) of the particle.\n\nMasses of particles and corresponding anti-particles are always set simultaneously.\n\nFor particles with Yukawa couplings, those are enabled\/disabled consistent with the mass (taking into account the Massive parameter) by default, but that can be modified using the Yukawa parameter. Note that by default the Yukawa couplings are treated as running, cf. 
YUKAWA_MASSES.

Massive

Specifies whether the finite mass of the particle is to be considered in matrix-element calculations. Can be true or false.

Width

Sets the width (in GeV) of the particle.

Active

Enables/disables the particle with PDG id <id>. Can be true or false.

Stable

Sets the particle either stable or unstable according to the following options:

0: particle and anti-particle are unstable
1: particle and anti-particle are stable
2: particle is stable, anti-particle is unstable
3: particle is unstable, anti-particle is stable

This option applies to decays of hadrons (cf. Hadron decays) as well as particles produced in the hard scattering (cf. Hard decays). For the latter, the decays can alternatively be specified explicitly in the process setup (see Processes) to avoid the narrow-width approximation.

Priority

Allows one to overwrite the default automatic flavour sorting in a process by specifying a priority for the given flavour. This way one can identify certain particles which are part of a container (e.g. massless b-quarks), such that their position can be used reliably in selectors and scale setters.

Note

PARTICLE_DATA can also be used to change the properties of hadrons; you can use the same switches (except for Massive), see Hadronization.

### 5.4.1.2. Effective Higgs Couplings

The HEFT describes the effective coupling of gluons and photons to Higgs bosons via a top-quark loop, and a W-boson loop in the case of photons. This supplement to the Standard Model can be invoked by configuring MODEL: HEFT.

The effective coupling of gluons to the Higgs boson, g_ggH, can be calculated either for a finite top-quark mass or in the limit of an infinitely heavy top, using the switch FINITE_TOP_MASS: true or FINITE_TOP_MASS: false, respectively.
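For orientation, the size of the finite-top-mass effect on the leading-order effective ggH coupling can be estimated from the standard spin-1/2 loop function. The sketch below is standalone Python (not Sherpa code) and uses illustrative mass values m_H = 125 GeV and m_t = 173 GeV, which are assumptions for this example rather than values taken from the text:

```python
import math

def A_half(tau):
    """Spin-1/2 loop form factor entering the LO ggH coupling, for
    tau = m_H^2 / (4 m_t^2) <= 1 (i.e. below the top-pair threshold).
    It approaches 4/3 in the infinite-top-mass limit."""
    f = math.asin(math.sqrt(tau)) ** 2
    return 2.0 * (tau + (tau - 1.0) * f) / tau ** 2

# Illustrative mass values in GeV (an assumption of this sketch).
m_H, m_t = 125.0, 173.0
tau = m_H ** 2 / (4.0 * m_t ** 2)

# Ratio of the LO result with full top-mass dependence to its
# infinite-top-mass limit; a few-percent enhancement is expected.
ratio = (A_half(tau) / (4.0 / 3.0)) ** 2
print(f"LO sigma(finite m_t) / sigma(m_t -> infinity) = {ratio:.3f}")
```

This is why the choice between FINITE_TOP_MASS: true and false matters at the level of the coupling normalisation; as noted in the text, differential finite-mass effects require the dedicated example setup.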
Similarly, the photon-photon-Higgs coupling, g_ppH, can be calculated either for finite top and/or W masses or in the infinite-mass limit, using the switches FINITE_TOP_MASS and FINITE_W_MASS. The default choice for both is the infinite-mass limit. Note that these switches affect only the calculation of the value of the effective coupling constants. Please refer to the example setup "H+jets production in gluon fusion with finite top mass effects" for information on how to include finite top-quark mass effects at the differential level.

Either one of these couplings can be switched off using the DEACTIVATE_GGH: true and DEACTIVATE_PPH: true switches. Both default to false.

## 5.4.2. UFO Model Interface

To use a model generated by the FeynRules package [CD09], [Christensen2009jx], the model must be made available to Sherpa by running

$ <prefix>/bin/Sherpa-generate-model <path-to-ufo-model>

where <path-to-ufo-model> specifies the location of the directory where the UFO model can be found. UFO support must be enabled using the --enable-ufo option of the configure script, as described in Installation. This requires Python version 2.6 or later and an installation of SCons. The above command generates source code for the UFO model, compiles it, and installs the corresponding library, making it available for event generation. Python, SCons, and the UFO model directory are not required for event generation once the above command has finished. Note that the installation directory for the created library and the paths to Sherpa libraries and headers are determined automatically during the installation of Sherpa. If the Sherpa installation is moved afterwards, or if the user does not have the necessary permissions to install the new library in the predetermined location, these paths can be set manually.
Please run

$ <prefix>/bin/Sherpa-generate-model --help

for information on the relevant command line arguments.

An example configuration file will be written to the working directory while the model is generated with Sherpa-generate-model. This config file shows the syntax for the respective model parameters and can be used as a template. It is also possible to use an external parameter file by specifying the path to the file with the switch UFO_PARAM_CARD in the configuration file or on the command line. Relative and absolute file paths are allowed. This option makes it possible to use native UFO parameter cards, as used for example by MadGraph.

Note that the use of the SM PARTICLE_DATA switches Mass, Massive, Width, and Stable is discouraged when using UFO models, as the UFO model completely defines all particle properties and their relation to the independent model parameters. These model parameters should be set using the standard UFO parameter syntax, as shown in the example run card generated by the Sherpa-generate-model command.

For parts of the simulation other than the hard process (hadronization, underlying event, running of the SM couplings), Sherpa uses internal default values for the Standard Model fermion masses if they are massless in the UFO model. This is necessary for a meaningful simulation. In the hard process, however, the UFO model masses are always respected.

For an example UFO setup, see Event generation in the MSSM using UFO. For more details on the Sherpa interface to FeynRules please consult [CdAD+11], [Hoeche2014kca].

Please note that AMEGIC can only be used for the built-in models (SM and HEFT).
The use of UFO models is only supported by Comix.
Q: UNIX script to check for multiple files in the same location created today

Can you help me build a UNIX script that checks whether multiple files in the same location were created today? I tried the code below, but it checks multiple locations for a single file.

```shell
function WRITE_LOG(){
    echo "$(date) : $@" >> ${LOG_FILE}
}

function CHECK_FILE(){
    cd ${1}
    WRITE_LOG "Checking files in ${1}"
    ls -l | grep -q "$(date "+%Y-%m-%d").*RIG*"
    if [ "$?" -eq "0" ]
    then
        WRITE_LOG "File created for today"
    else
        WRITE_LOG "File not created. please check"
    fi
}

WRITE_LOG "Look for abc files" > $LOG_FILE
CHECK_FILE "/abc/zyx"
CHECK_FILE "/abc/QLD1"
CHECK_FILE "/abc/SAa"

export MAILTO="abc@xyz.com"
export CONTENT="/home/abc/LOG/HC.log"
export SUBJECT="check for files Generated Today"
(
    echo "Subject: $SUBJECT"
    echo "MIME-Version: 1.0"
    echo "Content-Type: text/html"
    echo "Content-Disposition: inline"
    echo '<HTML><BODY><PRE>'
    cat $CONTENT
    echo '</PRE></BODY></HTML>'
) | /usr/sbin/sendmail $MAILTO
```

A: Collect the matching files with find and check each one in a loop. Note that plain `ls -l` does not print dates in `%Y-%m-%d` format, so the original grep can never match; with GNU coreutils you can force that format via `--time-style`:

```shell
function CHECK_FILE(){
    cd "${1}" || return
    WRITE_LOG "Checking files in ${1}"
    FILES=$(find . -maxdepth 1 -type f -name "*RIG*")
    for OUTPUT in $FILES
    do
        # --time-style is a GNU coreutils option; it makes ls print the
        # modification date as YYYY-MM-DD so it can be compared with today.
        if ls -l --time-style=+%Y-%m-%d "$OUTPUT" | grep -q "$(date +%Y-%m-%d)"
        then
            WRITE_LOG "File $OUTPUT created today"
        else
            WRITE_LOG "File $OUTPUT not created today. Please check"
        fi
    done
}
```
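The check above relies on file timestamps, so the same logic can also be sketched in Python, which may be easier to maintain. Note two assumptions in this sketch: classic Unix filesystems record no creation time, so the modification time (what `ls -l` shows) is used as a proxy, and the `*RIG*` pattern is taken from the question:

```python
import datetime
from pathlib import Path

def files_modified_today(directory, pattern="*RIG*"):
    """Return the files in `directory` matching `pattern` whose
    modification date is today's date. Unix filesystems generally do
    not store a creation time, so mtime is the usual proxy."""
    today = datetime.date.today()
    return [
        p
        for p in Path(directory).glob(pattern)
        if p.is_file()
        and datetime.date.fromtimestamp(p.stat().st_mtime) == today
    ]
```

Calling files_modified_today("/abc/zyx") on one of the directories from the question would return the list of today's files, which can then be logged or mailed as before.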
ACCEPTED

According to The Catalogue of Life, 3rd January 2011

Published in Ency. Méth. (Zooph.), 600.
A SUMMER IN SWITZERLAND

Published on February 3, 2014 by prospectjournalucsd

By Natasha Azevedo

This is the first article in our 2014 Week of Photo Journals: Changing Perspectives. Check back each day this week to see more beautiful photography and travel accounts from UC San Diego students.

A Sunny Day on Lac Léman

The Swiss Alps frame Lac Léman (commonly known as Lake Geneva) on a bright July afternoon. Trains in the southwest of Switzerland run along the lake, making stops at small lakeside cities such as Montreux in the Canton of Vaud. Known for its charming jazz festivals and scenic walkways, dozens of docks line the French-speaking city, serving as ports for local fishermen and sailors. I captured this photo on a three-hour walk along the lake with friends from Moldova and Ukraine.

Sunrise at Rochers de Naye

During a summer internship in Switzerland, I hiked a well-known mountain called Rochers de Naye in the Swiss Alps. While tourists normally ride a train to the top of the mountain, my fellow interns and I decided instead to hike for six hours to reach the top before sunrise. With an overly-eager Romanian friend leading us and a single iPhone flashlight to light the way, my four friends from Egypt, Germany, China, and Thailand joined me as we hiked through forests and rocky cliffs in the pitch black. After a few bloody knees and breaks to watch the stars, I captured this photograph at the summit just in time for the sunrise.

Fribourg, Switzerland

After taking a train to Switzerland's capital, I decided to make a pit stop at a small city called Fribourg. This quiet city has dozens of picturesque scenes such as the one captured here. Houses are built into the mountains and charming wooden roofs sit atop shops lining the Sarine river. The city is small enough to walk through on foot and provided a nice contrast to the bustling streets of Bern, as the town seemed less frequented by tourists.
Rather, it exuded authenticity and quietness; a hidden gem along Switzerland's borders.

Caux Palace at Sunset

This photograph captures the Caux Palace, often referred to as the 'Mountain House.' It is situated near the top of the Alps overlooking Lake Geneva and is said to have inspired the castle in Disney's "Snow White and the Seven Dwarfs." Every summer it hosts a conference for Caux: Initiatives of Change, where global leaders, diplomats, NGO coordinators, and international economists gather to discuss global conflict and cooperation. A subsidiary of the United Nations, the European Union-affiliated organization hosts a seasonal internship where 36 interns from across the world participate in global leadership programs. I had an opportunity to live and work as the youngest intern and only American at the Mountain House for the summer and spent evenings trying to capture the beautiful sunsets near the castle.

Big Ben at 6:18

This image captures one of London's most iconic landmarks. During one summer in the UK, I captured the clock tower surrounding Big Ben in Westminster. I took this photograph from a boat on the Thames at dusk. The clock tower is one of England's largest tourist destinations.

Bern at Bird's Eye

After climbing 300 steps up a small tower in Bern's largest cathedral, I was able to capture Switzerland's capital in a new light. A UNESCO World Heritage site, the city was once the workplace of Albert Einstein and is the home of the attractive yet controversial "Bear Parks." Bern is a primarily German-speaking city and has been ranked as one of the world's top cities for a positive quality of life.
All images by Natasha Azevedo, Prospect Contributing Writer

SWITZERLAND: A COUNTRY OF MULTILINGUALISM

Published on July 8, 2013 by prospectjournalucsd

By Taylor Morrison

The small and neutral Switzerland is a historically multilingual nation. Established in 1291 as a "defensive alliance among three cantons," the country now consists of 26 of these small administrative states, all of which have their own distinct identities and cultures. Because the cantons differ from one another, it comes as no surprise that present-day Switzerland is very diverse, comprising an array of ethnic groups speaking various languages. Despite this great diversity, Switzerland has been able to define and maintain a national identity through its official languages: German, French, Italian and Romansch. These four languages directly reflect the four largest ethnic groups in the country. However, Switzerland's multilingualism extends much further; nearly 20 percent of Switzerland's resident population is foreign and speaks many languages other than the four officially designated by the Swiss government. Given such extensive diversity and the steady rise of English worldwide, how has Switzerland maintained its multilingualism, and what is the role of English within it? I argue that Swiss multilingualism is maintained by the cultural interactions among the population and the devolved system of governance in which the regions are able to control policies such as those governing language and education. This allows for the preservation of distinct cultures and identities, but English threatens Switzerland's existing structure of multilingualism as it grows in popularity worldwide.

Switzerland is a federation made up of different localities known as cantons.
As a result of federalism, certain powers are assigned to the federal government while others are assigned to the localities. For the most part, policymaking power is devolved to the cantons. Such is the case with language and education policies. Because these powers have been delegated to local officials, practices can vary from region to region so long as they are in compliance with federal laws.[1] What stems from this devolution is the principle of "territoriality." According to Article 70 of the Swiss constitution: "The Cantons shall designate their official languages. In order to preserve harmony between linguistic communities, they shall respect the traditional territorial distribution of languages, and take into account indigenous linguistic minorities."[2] Additionally, it states that "The Confederation and the Cantons shall encourage understanding and exchange between the linguistic communities."[3] Thus, the devolution of policymaking power to the local level is used not only to preserve the cultural diversity of the nation, but also to govern and facilitate interactions among the people of Switzerland. What makes Swiss multilingualism unique is that it lacks a common trait found in most multilingual societies: language mixing or code switching. Jesse Levitt points out that this is because each language has "defined boundaries." Each official language has its own role. For example, "All federal laws are published in German, French and Italian," the Federal Assembly uses French and the "Romansch normally use German" in formal situations.[4] This principle of territoriality allows for only one official language per canton, often the local language as most cantons' official languages fall along language boundaries, although there are a few bilingual and even trilingual cantons.[5] Education policy is similarly guided by the principle of linguistic territoriality in that the language of instruction is determined by the cantons. 
Despite a canton's decision to use one language in its schools, it maintains multilingualism in the region by offering courses on Switzerland's other official languages.[6] Daniel Stotz makes it clear that in Switzerland, "public education is entrusted with the objective of good multilingual citizenship."[7] As added support for maintaining multilingualism, Switzerland places a strong emphasis on school instruction in its citizens' native languages. This gradual immersion makes it easier for them to process information in the language to which they are accustomed and provides a strong foundation for learning other languages.[8] The Swiss approach of viewing knowledge of languages as an asset and not a deficit has maintained multilingualism in spite of "defined borders" and linguistic territoriality. Cross-cultural interactions also maintain multilingualism in Switzerland. Not all cantons are officially monolingual, and even those that are, are exposed to other languages through migration. People are free to move and are not limited by language. This means that in some cases, a person moves to a canton where he or she may not necessarily speak the official language. Jesse Levitt describes this, citing a trend of Swiss citizens moving from German-speaking areas to French-speaking areas, making it necessary for the German migrants to learn French as a second or third language.[9] Multilingualism is not only maintained by canton to canton movement but also by migration from other countries. 
Foreigners make up about 20 percent of Switzerland's resident population and statistics show that of these residents, nine percent use a language other than one of the four official Swiss languages.[10] While this means that some students are not being educated in their mother tongue (especially if the mother tongue is not one of the four official languages), some praise Switzerland for providing immigrants with the same access to the dominant languages in school, in the belief that a common education promotes social cohesion.[11] English poses a new threat to Switzerland's long-standing multilingualism. It lacks a historical hold on the nation, but is slowly becoming the "lingua franca" for universal communication.[12] It is estimated that "there are three times as many non-native English users as native" in the world today. Despite the large number of non-native English speakers, a study by the British Council has found that "by 2015, 2 billion people in the world will be studying English."[13] In Switzerland, "English is widely used in academia, administration and the big corporations" and there is growing support for the country to adopt English as the fifth official language.[14] The Swiss National Science Foundation is a major proponent of this idea, stating that "knowing English would help public administration communicate with all citizens," those living abroad and at home.[15] As more Swiss citizens move among cantons and more foreigners move into Switzerland, it is clear that communication in one of the four official languages may be difficult and English could be a unifying force for the country. However, it does not have a long history of use nor does it represent Switzerland's traditions. It threatens to shift the balance in favor of monolingualism as opposed to upholding the multilingualism that has existed in the nation for hundreds of years. The growing presence of English is evident within education. 
As previously stated, the cantons choose schools' language of instruction and establish the other national languages as secondary language subjects. English is not one of the four official languages of Switzerland, yet like German and French (the two most commonly used official languages) it is offered as an option for study.[16] In fact, English is more popular. A student survey in Zurich showed that "out of 3,966 German-speaking seventh- and eighth-graders who had learned French as a second foreign language and English as a third foreign language, a majority are more interested in learning English than French; they…would prefer English over French, if given a choice between the two."[17] As of now, the influence of English is contained mostly in schools, as major broadcast media outlets such as the Swiss Broadcasting Corporation continue to broadcast only in German, French and Italian. However, public opinion also seems to indicate a shift toward English. Grin and Korth found that Swiss public opinion, "while overwhelmingly in favor of developing access to English for all children in the education system, is torn over the position that national languages should have in the curriculum: should it be given more or less importance than English, or should they be on par?"[18] These findings show a society leaning towards English, foreshadowing a future where the use of English could surpass that of Switzerland's official languages. Switzerland is a historically multilingual nation with a form of government that for the most part, maintains a great degree of cultural diversity. However, globalization has led to the rise of English, which is now influencing the country's language and education policies. The challenge that lies ahead for Switzerland is integrating English in a manner that does not undermine its long-standing multilingualism but instead, enhances it. Image by Eric Andresen 1. Grin, Francois and Britta Korth. 
"On the Reciprocal Influence of Language Politics and Language Education: The Case of English in Switzerland." Language Policy 4, no. 1 (2005): 69.
2. Stotz, Daniel. "Breaching the Peace: Struggles Around Multilingualism in Switzerland." Language Policy 5, no. 3 (2006): 251.
4. Levitt, Jesse. "Multilingualism in Switzerland, Belgium and Luxembourg." Geolinguistics 30 (2004): 86.
6. Grin, Francois and Irene Schwob. "Bilingual Education and Linguistic Governance: The Swiss Experience." Intercultural Education 13, no. 4 (2002): 413.
8. Rose, Sharon. "Mother Tongue Education." International Studies 101. University of California, San Diego. La Jolla. 16 Nov. 2010. Lecture.
10. Grin, Francois and Britta Korth. "On the Reciprocal Influence of Language Politics and Language Education: The Case of English in Switzerland." Language Policy 4, no. 1 (2005): 70.
11. Rose, Sharon. "Mother Tongue Education." International Studies 101. University of California, San Diego. La Jolla. 16 Nov. 2010. Lecture.
12. Rose, Sharon. "Globalization – is English Really Winning?" International Studies 101. University of California, San Diego. La Jolla. 21 Oct. 2010. Lecture.
14. Davidson, Keith. "Language and Identity in Switzerland: A Proposal for Federal Status for English as a Swiss Language." English Today 26, no. 1 (2010): 15.
15. Ibid., 16.
16. Grin, Francois and Irene Schwob. "Bilingual Education and Linguistic Governance: The Swiss Experience." Intercultural Education 13, no. 4 (2002): 413.
Monthly Notices of the Royal Astronomical Society (MNRAS) is a peer-reviewed scientific journal devoted to research in astronomy and astrophysics. It has been published continuously since 1827 and carries letters and papers reporting original research. Despite its name, the journal is no longer monthly, nor does it still contain the notices of the Royal Astronomical Society. In 2020, with the onset of the COVID-19 pandemic, publication of the print edition ceased, and the journal has since appeared only electronically. As of 2022 it ranks 14th in the SCImago Journal Rank for astronomy and astrophysics, placing it among the most authoritative journals in its field. In 2022 its impact factor was 5.287.
We Spoke to Nick Mathews of MainVest on How to Rebuild in the Post COVID Economy As part of my series about the "How Business Leaders Plan To Rebuild In The Post COVID Economy," I had the pleasure of interviewing Nick Mathews, CEO of MainVest. MainVest is on a mission to bring investment capital directly to Main Street, so as to keep our local businesses alive and growing during and after this pandemic crisis. His business has helped small brick and mortar companies since 2018, but their mission to save main street is crucial now, more than ever, to us rebuilding post-Covid-19. He believes that small businesses are the heartbeat of local life. They supply 99% of our jobs, and give us culture, community, and the products and services we value. Through Covid-19, MainVest has offered zero interest rate immediate loans to qualifying small businesses and their platform gives locals direct investment opportunities to support their local businesses; everything from funeral homes to juice bars. Happy to! I've pretty much been a career entrepreneur since college, when a few friends and I started an ill-fated event aggregator app. After graduating, I was focused on growing our userbase and substitute teaching at the Lynn Public School systems. It was around this time that I was introduced to Uber, back when it was a ~25-person company that had just started expanding out of San Francisco. I saw that they were launching Boston and immediately reached out. About a week later, I was brought on to focus on the demand side of the marketplace. The next seven years were spent growing with Uber from 30 to 20,000 employees, expanding to the rest of New England, then moving down to DC, which was the hub for Uber's US and Canada operations to run East Coast business development. It was around that time that the Jobs Act, Title III, the regulatory framework upon which MainVest is powered, was being passed through Congress. 
While the regulations were mainly designed to facilitate equity investments into early start-ups (The Next Uber, The Next AirBnB, etc.), I saw a path to retrofit them to work for small businesses, fueling local economic growth through community investment. Fast forward two years of winding down at Uber, convincing an old friend to leave a very secure position in finance to cofound an investment marketplace, and a LOT of reading and edification around financial markets, economic development, and securities regulation, and we built and launched MainVest in 2018. Learning first — know your customer — I grew up in Massachusetts, so there's really no excuse for this one, but I will never forget the blowback we got in a marketing email to our Boston rider base for Saint Patrick's Day at Uber. We erroneously used the colloquial "Saint Patty's Day" (the correct, culturally appropriate spelling is "Saint Paddy's Day") and to this day it was the highest engagement email I think I've ever had the embarrassing ownership of sending, and not for good reasons. While there was no malintent, it really reinforced the importance of understanding cultural nuances and how your voice can be interpreted, which led us to become much more cognizant of our responsibility when communicating with our customers. The Third Wave (Steve Case) — Some of the biggest changes and improvements that technology can bring to our livelihood come with unique challenges that weren't necessarily present for the Web 2.0 era. Working in an environment of complex regulatory frameworks while changing behavior to unlock the next phase of economic development and community growth isn't something that happens overnight. For us specifically, we're uniting a previously hyper-fragmented asset class (brick and mortar small businesses) while opening up that access to the 90%+ of the U.S. population for the first time. 
The long-term benefit for this paradigm shift is the democratization of community development, creating new opportunities for everyday Americans to generate and build wealth while bringing communities closer together with aligned incentives and goals. The Third Wave does a great job of shifting paradigms when thinking about how to drive this change. Begin to cross the chasm, let technology do what it's supposed to do, and create a more equitable future that raises the standard of living across the board. Extensive research suggests that "purpose driven businesses" are more successful in many areas. When you started your company what was your vision, your purpose? Happily, I can say that the purpose and vision for MainVest has only been strengthened since the day we started thinking about it. Our mission has been to change the way people think about economic development and wealth generation by empowering communities to invest in themselves. Silicon Valley doesn't have a trademark on entrepreneurship and, in fact, the vast majority of builders across the states are the chefs, brewers, yoga teachers, escape room designers, bike shops, etc. that represent passionate people bringing value and identity to Main Street USA. These local businesses create jobs and fuel a local supply chain of agriculture and manufacturing that is integral to the health and wellbeing of our local economies. You don't realize that when you stare at the rise or dip of the Dow Jones, but, for the vast majority of Americans, local economies have a far greater impact on our lives than Apple's stock price. Our purpose is to give communities the resources and tools to take action and control of how they develop and grow. Perseverance. Something that becomes hyper-important during times of unprecedented uncertainty, which I would say has been an understatement month by month as 2020 has progressed. 
When you have news cycles and public sentiment changing constantly, you have to remember to step back and look objectively at what is going on. It's always the hard questions that need to be asked. Does what we're doing still make sense? Are we still providing value? What has changed and what stays the same? One challenge was around balancing work needs and being supportive at home. My partner had just left her job and accepted a new role in February, which, in light of everything going on, was then put on indefinite hold. Trying to be supportive at home, while also working from home during the stay-at-home order creates an even more intense merger of work-life balance challenges that definitely spawned some intense conversations. Being able to have open conversations and build better communication over that 3-month period ended up being an incredible strengthener of our relationship, but didn't happen overnight. On a lighter note, the true challenge has been seeing the looks on our dogs' faces now that I'm back in the office after 3 months of working from home. They definitely expected us being home 24–7 to be part of the "new normal" and heading out every morning, they react like we're going off to war. Can you share a few of the biggest work-related challenges you are facing during this pandemic? Can you share what you've done to address those challenges? We're a very close-knit team, so transitioning to working from home was challenging. We didn't realize just how many conversations we were having every day that brought value to what we're doing and built up our culture. Not seeing each other for weeks at a time put a bit of a strain on our culture and collaboration, so to combat this, we tried to implement a daily coffee/breakfast time before our morning all-hands, and weekly or bi-weekly happy hours to make sure we shared time to not talk about work, or to brainstorm ideas outside of our day-to-day work. 
Since Massachusetts has had a successful and safe re-opening (in my opinion), getting to see more of the team in both an office and social setting has been hugely beneficial in rebuilding our culture and increasing collaboration. Another challenge was bandwidth, though this falls into the category of a problem that's good to have. At the start of COVID, we rolled out the Main Street Initiative- our loan program for businesses affected by COVID-19 closures- as well as some new campaigns like Still Open, a platform to connect communities with restaurants still doing takeout and delivery, and Rebuild Main Street, a marketing campaign focused on the experiences of businesses and investors. Our inbound funnel of businesses interested in raising capital increased exponentially while our team stayed the same, so we all had to work together to make sure that all businesses and inquiries were taken care of. Uncertainty breeds anxiety and fear, and the best counter to uncertainty (in lieu of certainty) is optimism — One of the biggest failures I've felt was the seeming need to politicize a public health crisis. The most productive conversations I've had with friends, family, and peers all start with a flipping of the paradigm. We are certain that we are in the middle of one of the largest global events many of us have ever faced. We know that there are going to be challenges and opportunities ahead and that there are things we can't control and things we can. We're living in the most polarized America, in my memory at least, and the moment the pandemic became a political talking point, objectivism and rationality started to wane. Of my group of friends, family, and loved ones, I'm very lucky to have a large diversity of thought and opinion across the board. No one has a crystal ball, but taking a proactive approach to identifying the levers that we can pull to accelerate the rebuilding process leads to a lot of optimistic conversations. 
One of the biggest silver linings that we've gotten from the pandemic is the renewed appreciation for local businesses and the important role they play both culturally and economically in our local economies. Moving out of post-Covid and into rebuilding from a recession, there's going to be a unique opportunity both for investors and entrepreneurs in the small business space. Institutional lending, much like post-2008, will have a much larger risk profile, making it more challenging for capable entrepreneurs to get the capital needed to start and grow ventures. Inversely, with the Fed dropping interest rates to record lows, investors are going to be looking for new and innovative ways to find yield. This gives individuals a way to diversify their investments directly into their local communities so that both can act as a way to drive increased wealth generation as well as accelerate the rebuilding of our Main Street economies, bringing back jobs and strengthening local community ecosystems. I'm definitely not an expert so I won't speak to the areas of public health that may change forever, but my top prediction is a return to localization. We've been taught to think about things on a macro level- for example, a rise in GDP being the end all of economic prosperity, or the public markets determining wealth. I think that the pandemic has taught us about what really matters, and that's community. On a personal level, there's this sense of "you don't know what you've got 'til it's gone"- we don't realize just how much we rely on local tap rooms and coffee shops that make up the fabric of our communities until they're in jeopardy. I think that people will support these local businesses much more intentionally after experiencing what it's like to not have them. On an economic level, it's become clear that GDP and equities markets actually have very little impact on most people's everyday lives. 
Instead, it's the strength of local economies- local job creation, the success of locally-owned businesses, opportunities for upward mobility- that really matter for most people. I predict that much more attention will be paid to how we operate at a local level, and that everyday people will be much more critical of these macro level "trends" and the effect they really have on us. Based on the core function and value of what our business was built to do, our mission hasn't changed, just reaffirmed. Our growth and success as an organization directly impacts the speed and acceleration of the rebuilding of our domestic economy. Personally, we plan to become a direct lever for that accelerated economic return and growth beyond the baseline. We do that by engaging with fantastic entrepreneurs and helping them get the resources they need to build, and by continuing to build our team with talented, mission-driven people aligned with the mission. Without standing on a soapbox and waving a flag, I would encourage the realization that people can have way more of a direct impact on rebuilding their local economies than they realize. In a world where so much feels out of our control, it's important to remember that so much of what impacts our daily wellbeing take place on a local level, from the restaurants, salons, gyms, and retail that create jobs and provide revenues that support education and public infrastructure, to the social interactions with our friends and neighbors. "All the pieces matter." ~ Lester Friedman, The Wire The amount of positive impact we can have is often directly correlated to the complexity of the problems and challenges we're looking to solve. When tackling complex issues, whether social, personal, or business related, it's easy to get tunnel-visioned at times and lose sight of the overall picture. 
I've always credited the Wire for really helping edify my thoughts and understanding of all of the different variables and inputs that work together to create both challenges and opportunities, against the backdrop of a great American city. The best way to follow MainVest is by signing up so readers can see new investment opportunities around the country and learn more about these entrepreneurs, but we're also pretty active on social media: Instagram (@MainVestInc), Facebook (@mainvest), and Twitter (@themainvest).
{ "redpajama_set_name": "RedPajamaCommonCrawl" }
4,802
Koroška () ist eine statistische Region in Slowenien auf NUTS-3-Ebene. Die Region, die im Mai 2005 für statistische Zwecke eingerichtet wurde und der keine administrative Aufgaben zukommen, umfasst 1.041 km² mit 70.835 Einwohnern (2020). Die Einwohnerzahl ist rückläufig (2008: 72.837; 2012: 72.267; 2016: 71.010). Diese statistische Region deckt sich nur zum Teil mit slowenischen Teilen des ehemaligen Herzogtums Kärnten (Slovenska Koroška) und umfasst auch Gebiete, die einst zur Untersteiermark gerechnet wurden, wohingegen das Jezersko der Statistikregion Gorenjska angegliedert wurde. Die folgenden sieben Gemeinden der historischen slowenischen Region Štajerska (Untersteiermark) wurden der Statistikregion Koroška angegliedert: Slovenj Gradec, Radlje ob Dravi, Muta, Mislinja, Vuzenica, Podvelka, Ribnica na Pohorju und einige Teile von Dravograd. Aus der historischen Region Koroška verblieben in der Statistikregion Koroška: Ravne na Koroškem als Hauptort der vergrößerten Region, Prevalje, Mežica und Črna na Koroškem und einige Teile von Dravograd. Nachbarregionen sind Savinjska und Podravska, im Übrigen grenzt die Region an die österreichischen Bundesländer Steiermark und Kärnten. Diese Region ist Teil der Euregio Steiermark–Slowenien als Interessensvertretung der Grenzregion, die die gemeinsamen abbauen, aufbauen, und durch das historische Konzepte mit den Anforderungen eines modernen Europa der Regionen zu verbinden versucht. Einzelnachweise Weblinks physische Karte der Statistischen Region Koroška, koroska.si politische Karte der Statistischen Region Koroška, Flags of the World Sloveniaholidays: Stat.Region Koroška mit Ortsbeschreibungen (deutsch, Maschinenübersetzung) I feel Slovenia: Koroška, slovenia.info, Slovenian Tourist Board (Landschaft und Städte, engl., slow.) Portrait of the Regions - Slovenia Koroska, Eurostat (Statistikregion, verschiedene Aspekte, englisch) Koroška, Invest Slovenia (Statistikregion Wirtschaftsdaten, engl.) 
Statistische Region in Slowenien NUTS-3-Region
{ "redpajama_set_name": "RedPajamaWikipedia" }
568
Q: "had a fall" vs "fell" I wrote, today, that I "had a fall". A friend asked "at what age does falling over become 'had a fall'?" I have to admit that it does seem to be an age thing, but I would like to know if there is some concensus or definition around this? What would you see as the difference in meaning and appropriate use for 'Yesterday I had a fall, and... '. 'Yesterday I fell over, and ...'.
{ "redpajama_set_name": "RedPajamaStackExchange" }
8,942
Django Dash 2013 (OpenSourceSag) ============ This application allows you to create and manage your project in a SCRUM way. Installation ------------ 1. To set up the environment, you need to run the command: python manage.py install your_username your_email With these two parameters ('your_username' and 'your_email') a manager account will be created (you'll be ask for a password during the process). 2. You can launch the server with this command: python manage.py runserver 127.0.0.1 --setting agile_board.settings.local 3. You can now login to our interface by going to the address: 127.0.0.1 How to use it ? ---------------- 1. Log in to our interface 2. You are on the Project list page. You can add a new project with the '+' 3. You are now on your new project page. The first field is the project name and the second one is it description. You can add stories easily with the field '___________' bellow 'Stories'. To add or update a story or a task, press Enter or Tab You can add tasks to a story by selecting a story first. The task will be associated with the selected story. To add a sprint, just click on the '+' next to it. To add task to a sprint, just drag and drop a task by clicking on the '*' next to it. Just click on a sprint to show the associated tasks. 4. You can now return to the main page with the link 'Back to project list' and you can see your project. The whiteboard button will redirect you to your project page. If you created a sprint, you can click on the sprint button, you'll be redirected to the sprint page. Here you can see all the sprint's task and easily change their status by drag and drop them. Team members ------------ - Alexandre Lessard - Franck Coiffier - Frédérique Boulay
{ "redpajama_set_name": "RedPajamaGithub" }
4,698
Allison has recently joined the Wardle Co Team with several years of Banking experience behind her. Alli is a Port Pirie local and is excited to start her Property Management journey in a town she loves. 3 Bedroom Home - Good Location! ** TO REQUEST AN INSPECTION FOR THIS PROPERTY CLICK ON THE INSPECTION BUTTON OR THE EMAIL AGENT BUTTON ** This 3 bedroom home features a good sized lounge room, dining and rumpus or study area. The kitchen has electric stove and a walk in pantry. 2 bedrooms with BIR. Good sized yard offering ample space to play and enjoy the sun. Undercover outdoor area, undercover parking, ducted evap air con.Close to schools, deli & post office, take away and golf course. Available from 26/11/18.
{ "redpajama_set_name": "RedPajamaC4" }
7,781
Q: Add Dropdown in JQgrid dynamically I want to add drop down in JQGrid dynamically. For example:- I have below type of grid. Now when I click on a button a new row should be added in the grid. And for new row the first column data will be dropdown, second Hyperlink, third dropdown and forth checkbox. i.e. It should be same as the first row. And for every button click new row should be added similar to first row. A: For attribute of type formatter='select' and type='select', jQgrid internally maintains a list of key-value pairs. So while inserting the new row, you need to provide "ID" as the value of drop down box. For Example : For inserting a new row : $("#listData").jqGrid('addRowData',index,{kpiParameter:1,product:'XYZ',metric:'1',perkSharing:'XYZ'}); Here, '1' is the ID of KpiParameter. For this solution to work you need to load whole list of key-value pair of the drop down while defining the jQgrid. You can write jqGrid as below : jQuery('#kpisetup').jqGrid({ autowidth: true, autoheight: true, url : '', mtype : 'POST', colNames : [ 'KPI ID','KPI Parameter', 'Product','Metric','Perk Sharing'], colModel : [ {name : 'kpi_id',index : 'kpi_id',autowidth: true,hidden:true,align:'center'}, {name : 'kpi_parameter',index : 'kpi_parameter',width:200, sortable:true, align:'center', editable:true, cellEdit:true, edittype: 'select', formatter: 'select', editrules: { required: true}, editoptions:{value: getKPIParameters()//LOAD ALL THE KPI PARAMETER KEY-VALUE PAIR} }, {name : 'product',index : 'product',autowidth: true,formatter:'showlink',formatoptions:{baseLinkUrl:'#'},align:'center'}, {name : 'metric',index : 'metric',width:75, editable:true, edittype: "select", align:'center', formatter: 'select', editrules: { required: true}, editoptions: {value: '1:select' //LOAD ALL THE METRIC VALUEs} }, {name : 'perksharing',align:'left',index : 'perksharing',autowidth: true,editable:true,edittype: "checkbox",align:'center'} ], rowNum : 10, sortname : 'kpi_parameter', viewrecords : 
true, gridview:true, pager : '#kpisetup_pager', sortorder : 'desc', caption : 'KPI Setup', datatype : 'json' }); Hope this will work for you. Thanks, Gunjan.
{ "redpajama_set_name": "RedPajamaStackExchange" }
7,974
\section{Introduction} The $K\to\pi\nu\bar{\nu}$ decays are flavor-changing neutral current (FCNC) processes that probe the $s\to d\nu\bar{\nu}$ transition via the $Z$-penguin and box diagrams shown in Figure~\ref{fig:fcnc}. They are highly GIM suppressed and their Standard Model (SM) rates are very small. \begin{figure}[ht] \centering \includegraphics[width=80mm]{dpf13_proc_fcnc.eps} \caption{Diagrams contributing to the process $K\to\pi\nu\bar{\nu}$.} \label{fig:fcnc} \end{figure} For several reasons, the SM calculation for their branching ratios (BRs) is particularly clean (see \cite{Cirigliano:2011ny} for a recent review): \begin{itemize} \item The loop amplitudes are dominated by the top-quark contributions. The neutral decay violates $CP$; its amplitude involves the top-quark contribution only. Small corrections to the amplitudes from the lighter quarks come into play for the charged channel. \item The hadronic matrix element for these decays can be obtained from the precise experimental measurement of the $K_{e3}$ rate. \item There are no long-distance contributions from processes with intermediate photons. \end{itemize} In the SM, ${\rm BR}(K_L\to\pi^0\nu\bar{\nu}) = 2.43(0.39)(0.06)\times10^{-11}$ and ${\rm BR}(K^+\to\pi^+\nu\bar{\nu}) = 7.81(0.75)(0.29)\times10^{-11}$ \cite{Brod:2010hi}. The uncertainties listed first derive from the input parameters. The smaller uncertainties listed second demonstrate the size of the intrinsic theoretical uncertainties. Because of the corrections from lighter-quark contributions, these are slightly larger for the charged channel. Because the SM rates are small and predicted very precisely, the BRs for these decays are sensitive probes for new physics. In evaluating the rates for the different FCNC kaon decays, the different terms of the operator product expansion are differently sensitive to modifications from a given new-physics scenario. 
If ${\rm BR}(K_L\to\pi^0\nu\bar{\nu})$ and ${\rm BR}(K^+\to\pi^+\nu\bar{\nu})$ are ultimately both measured, and one or both BRs is found to differ from its SM value, it may be possible to characterize the physical mechanism responsible \cite{Straub:2010ih}, e.g., a mechanism with minimal flavor violation \cite{Hurth:2008jc}, manifestations of supersymmetry \cite{Isidori:2006qy}, a fourth generation of fermions \cite{Buras:2010cp}, Higgs compositeness as in the littlest Higgs model \cite{Blanke:2009am}, or an extra-dimensional mechanism such as in the Randall-Sundrum model \cite{Blanke:2008yr}. The decay ${\rm BR}(K_L\to\pi^0\nu\bar{\nu})$ has never been measured (the KOTO experiment at J-PARC \cite{Yamanaka:2012yma} has a good chance of observing it). ${\rm BR}(K^+\to\pi^+\nu\bar{\nu})$ was measured by Brookhaven experiment E787 and its successor, E949. The combined result from the two generations of the experiment, obtained with seven candidate events, is ${\rm BR}(K^+\to\pi^+\nu\bar{\nu}) = 1.73^{+1.15}_{-1.05}\times10^{-10}$ \cite{Artamonov:2009sz}. The purpose of the NA62 experiment at the CERN SPS is to measure ${\rm BR}(K^+\to\pi^+\nu\bar{\nu})$ with a precision of about 10\% in two-years' worth of data taking. Observation of $\sim$100 signal events will require a sample of $10^{13}$ $K^+$ decays within the geometrical acceptance of the experiment, for which the signal detection efficiency must be at least 10\%. Then, for a measurement with 10\% precision, the background level must be kept down to no more than about 10\% of signal. This implies an overall background rejection factor of $10^{12}$. The residual background level must also be determined to within about 10\%. \section{The NA62 experiment} The experimental signature is a $K^+$ coming into the experiment and decaying to a $\pi^+$, with no other particles present. 
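As a rough consistency check of the statistics requirements stated above (assuming, for illustration, the SM branching ratio quoted in the Introduction and the nominal 10\% signal efficiency), the expected yield over the full data set is
\[
N_{\rm sig} \approx N_{K^+} \times {\rm BR}(K^+\to\pi^+\nu\bar{\nu}) \times \epsilon
\approx 10^{13} \times (7.8\times10^{-11}) \times 0.1 \approx 80~{\rm events},
\]
of the same order as the $\sim$100 signal events sought.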
The first line of defense against abundant decays such as $K\to\mu\nu$ and $K\to\pi\pi^0$ (together representing about 84\% of the total $K^+$ width) is to precisely reconstruct the missing mass of the primary and secondary tracks and reject events with $M_{\rm miss}^2 \approx 0$ or $M_{\rm miss}^2 \approx m_{\pi^0}^2$, assuming the secondary is a $\mu^+$ or a $\pi^+$, respectively. However, the rejection power from kinematics alone is at best $10^{4}$, and in any case, about 8\% of $K^+$ decays (e.g., $K_{e3}$, $K_{\mu3}$) do not have closed kinematics. The remainder of the experiment's rejection power must come from redundant particle identification systems and hermetic, highly efficient photon veto detectors. The NA62 apparatus \cite{NA62:2010xxx}, schematically illustrated in Figure~\ref{fig:na62}, was designed around these principles, which we now consider in turn.
\begin{figure}[ht]
\centering
\includegraphics[angle=270,width=0.80\textwidth]{dpf13_proc_na62.eps}
\caption{Schematic diagram of the NA62 experiment.}
\label{fig:na62}
\end{figure}
\paragraph{Beamline and decay volume}
The experiment makes use of a 400-GeV primary proton beam from the SPS with $3\times10^{12}$ protons per pulse and a duty factor of about 0.3. The protons collide with a beryllium target at zero angle to produce the 75-GeV $\pm1\%$ unseparated positive secondary beam used by the experiment. This beam consists of about 525~MHz of $\pi^+$, 170~MHz of $p$, and 45~MHz of $K^+$, for a total rate of 750~MHz. The beamline opens into the vacuum tank about 100~m downstream of the target. The vacuum tank is about 110~m long and fully encloses the four tracking stations of the magnetic spectrometer; the pressure inside is kept at a level of $10^{-6}$~mbar. The fiducial volume occupies the first 60~m of the vacuum tank (upstream of the spectrometer). About 10\% of the $K^+$'s entering the experiment decay in the fiducial volume, corresponding to 4.5~MHz of $K^+$ decays.
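The 10\% decay fraction quoted above can be checked from the kaon boost (using, as inputs not given in the text, the PDG values $m_{K^+} \approx 494$~MeV and $c\tau_{K^+} \approx 3.71$~m): for $p = 75$~GeV,
\[
\beta\gamma = p/m_{K^+} \approx 152, \qquad
\lambda_{K} = \beta\gamma\,c\tau_{K^+} \approx 560~{\rm m}, \qquad
1 - e^{-(60~{\rm m})/\lambda_{K}} \approx 0.10,
\]
so that about 10\% of the 45~MHz of beam $K^+$'s decay within the 60-m fiducial volume, reproducing the 4.5~MHz rate of $K^+$ decays quoted above.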
\paragraph{High-rate, precision tracking}
In order to obtain the full kinematic rejection factor of $10^{4}$ for two-body decays, both the beam particle and the decay secondary must be accurately tracked. The beam spectrometer \cite{Fiorini:2013xya} consists of three hybrid silicon pixel tracking detectors installed in an achromat in the beam line. Each detector consists of a 200-$\mu$m-thick monolithic sensor and 10 bump-bonded, 100-$\mu$m-thick readout ASICs. The pixel size is $300\times300~\mu$m$^2$, giving a momentum resolution $\sigma_p/p \sim 0.2\%$ and an angular resolution $\sigma_\theta = 16$~$\mu$rad. This beam-tracking system is referred to as the Gigatracker, because it will track the individual particles in the 750-MHz secondary beam. The magnetic spectrometer for the secondary particles consists of four straw chambers operated inside the vacuum tank \cite{Danielson:2010fta}. Each chamber has 16 layers of straw tubes arranged in 4 views. The straws are made from metalized, 36-$\mu$m-thick mylar ultrasonically welded along the seam. They are just under 10~mm in diameter and are 2.1~m long. With a 70\% Ar--30\% ${\rm CO_2}$ gas mixture, the point resolution on a single view is $\sigma_x \leq 130~\mu$m. Considering that each chamber is only $0.45\%\,X_0$ thick, with the spectrometer magnet providing a $p_\perp$ kick of 270~MeV, the momentum resolution for tracks is $\sigma_p/p = 0.32\% \oplus 0.008\%\,p$.
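For illustration, evaluating this resolution function at the upper edge of the fiducial momentum interval used in the analysis ($p = 35$~GeV) gives
\[
\sigma_p/p = 0.32\% \oplus (0.008\% \times 35) = \sqrt{(0.32\%)^2 + (0.28\%)^2} \approx 0.43\%,
\]
corresponding to a momentum uncertainty of about 150~MeV for a 35-GeV track.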
MUV 3 is highly segmented and provides fast $\mu$ identification for triggering. It can veto $\mu$'s online at a rate of 10~MHz with a time resolution $\sigma_t < 1$~ns. An additional two orders of magnitude in $\pi/\mu$ separation are provided by a large (3.7-m-diameter by 18-m-long) ring-imaging Cerenkov counter (RICH) \cite{Bucci:2010zz} filled with neon gas at 1~atm ($p_{\rm thresh}$ = 12~GeV for $\pi$). In addition to providing good $\pi/\mu$ discrimination over the entire fiducial momentum interval ($15 < p < 35$~GeV), the RICH measures the $\pi$ crossing time with a resolution of $100$~ps and contributes to the level-0 trigger. \paragraph{Beam timing and PID} Considering that the rates of primary and secondary tracks in the experiment are respectively about 750~MHz and 10~MHz, accurately matching the correct secondary track to the correct primary is a basic challenge for the experiment. Due to the effectively incorrect reconstruction of the primary, for mismatched events the missing mass resolution is worsened by a factor of three. Precise timing of the secondary can be obtained from the RICH ($\sigma_t \sim 100$~ps), while for the primary, the Gigatracker provides $\sigma_t \sim 150$~ps. Cerenkov identification of the kaons in the beam both provides a precise, redundant measurement of the beam particle's timing and reduces the effective beam rate from 750~MHz to 45~MHz, hence reducing the mismatch probability. Such identification is provided by the CEDAR/KTAG, a differential Cerenkov counter based on the CERN CEDAR-W design \cite{Romano:2011xxx}. One of the CEDAR-W detectors has been refurbished to run with ${\rm H}_2$ at 3.85~bar and outfitted with a new, high-segmentation readout (KTAG). The beam identification from the CEDAR/KTAG is fundamental to the suppression of background from beam-gas interactions---without it, the vacuum in the decay tank would have to be kept at the level of $10^{-8}$~mbar. 
With the help from the CEDAR/KTAG, the probability of mismatching the primary and secondary tracks is held below 1\%. Nevertheless, events with mismatched tracks still account for half of the events not rejected by kinematics. \paragraph{Hermetic photon vetoes} Rejection of photons from $\pi^0$'s is important for the elimination of many background channels. The most demanding task is the rejection of $K^+\to\pi^+\pi^0$ decays. For these decays, requiring the secondary $\pi^+$ to have $p < 35$~GeV guarantees that the two photons from the $\pi^0$ have a total energy of 40~GeV. If the missing-mass cuts provide a rejection power of $10^4$, the probability for the photon vetoes to miss both photons must be less than $10^{-8}$. The photon veto system consists of four separate subdetector systems. The ring-shaped large-angle photon vetoes (LAVs) are placed at 12 stations along the vacuum volume and provide coverage for decay photons with $8.5~{\rm mrad}<\theta<50~{\rm mrad}$. Downstream of the RICH, the NA48 liquid-krypton calorimeter (LKr) vetoes forward ($1~{\rm mrad}<\theta<8.5~{\rm mrad}$), high-energy photons. A ring-shaped shashlyk calorimeter (IRC) about the beamline provides coverage for photons with $\theta<1~{\rm mrad}$, while further downstream, a small-angle shashlyk calorimeter (SAC) around which the beam is deflected completes the coverage for very-small-angle photons that would otherwise escape via the beam pipe. In more than 80\% of $K^+\to\pi^+\pi^0$ events, both photons from the $\pi^0$ arrive at the LKr. In most of the rest of the events, one photon is on the LKr and one is in the LAVs. For kinematic reasons, the energies of the two photons are anticorrelated: in events with a photon in the LAVs, the energy of the photon in the LKr tends to be quite high. 
Given these considerations, in order to achieve the required $\pi^0$ rejection performance, the LAVs must have a maximum inefficiency of $10^{-4}$ for photons with $E>200$~MeV, while the LKr must have a maximum inefficiency of $10^{-3}$ for photons with $E>1$~GeV and $10^{-5}$ for photons with $E>10$~GeV. The LAV detectors consist of rings of lead-glass blocks salvaged from the OPAL electromagnetic calorimeter barrel \cite{Ambrosino:2011xxx}. The detection efficiency of these blocks for 200~MeV electrons was measured at the Frascati BTF and found to be about $(1\pm1)\times10^{-4}$. The LKr is a quasi-homogeneous ionization calorimeter of depth $27\,X_0$ and with a transverse segmentation of $2\times2$~cm$^2$ \cite{Fanti:2007vi}. In NA48, $K\to\pi\pi^0$ and $e^-$ bremsstrahlung events were used to demonstrate that the inefficiency of the LKr for detection of photons with $E>10$~GeV is less than $8\times10^{-6}$. \paragraph{Trigger and data acquisition} The experiment makes use of an integrated trigger and data acqusition system with three trigger levels. The lowest level, level 0, is implemented directly in the digital readout card for each detector subsystem. The detector hits are resolved into quantities such as the number of quadrants of the trigger hodoscope hit, the number of LKr clusters of energy greater than a given threshold, or the number of hits in MUV 3. These quantities can then be used in trigger logic to decide which events will be read out for level 1. Level 0 will process about 10 MHz of ``primitive'' detector hits; about 1 MHz of events will be read out for level 1. The level 1 trigger is implemented in software running on dedicated PCs for each detector. It is the first asynchronous trigger level and will reduce the rate of events seen by level 2 by an order of magnitude. The level 2 trigger is implemented in the event builder running on the acquisition PC farm; it is the first trigger level at which the configurations of entire events are used. 
The O(100~kHz) of events input to level 2 are reduced to a few kHz of events ultimately written to disk. \paragraph{Expected performance} Based on the above considerations, the event selection criteria can be listed: \begin{itemize} \item One track with $15<p_\pi<35$~GeV and $\pi$ identification in the RICH. \item No $\gamma$'s in the LAVs, LKr, IRC, or SAC. \item No $\mu$ hits in the MUVs. \item One beam particle in the Gigatracker with $K$ identification by the CEDAR. \item $z_{\rm rec}$, the vertex between primary and secondary tracks, inside the 60-m fiducial volume. \end{itemize} Simulations then indicate that the acceptance for signal events is a little more than 10\%, corresponding to about 45 signal events accepted per year of data taking. The $\pi^+\pi^0$ background is estimated to be about 10\% while the $\mu\nu$ background is around 3\%. Including backgrounds from all other channels, the total background is under 20\%. \section{Other rare kaon and pion decays at NA62} The measurement of ${\rm BR}(K^+\to\pi^+\nu\bar{\nu})$ will require a sample of $10^{13}$ $K^+$ decays in NA62's fiducial volume. These will be accompanied by $2\times10^{12}$ $\pi^0$ decays from $K\to\pi\pi^0$ (BR = 21\%). Studies of the prospects for searches for lepton-flavor (LF) or -number (LN) violating and other forbidden decays with NA62 are underway. Preliminary estimates of the single-event sensitivties (defined as the reciprocal of the product of the number of accepted decays) give results at the level of $10^{-12}$ for $K^+$ decays to states such as $\pi^+\mu^\pm e^\mp$ (LFV), $\pi^-\mu^+e^+$ (LFNV), and $\pi^-e^+e^+$ or $\pi^-\mu^+\mu^+$ (LNV); and at the level of $10^{-11}$ for $\pi^0$ decays to $\mu^\pm e^\mp$ \cite{Moulson:2013oga}. As a case in point, consider the decay $K^+\to\pi^-\mu^+\mu^+$. This decay violates the conservation of lepton number. 
In analogy to the case of neutrinoless nuclear double beta decay, its observation would imply that the virtual neutrino exchanged between the $\mu^+$'s annhilates itself---the neutrino must have a Majorana component. The most stringent limit on BR($K^+\to\pi^-\mu^+\mu^+$) is from NA48/2 \cite{Batley:2011zz}, and NA62's prospects for improving on this limit can be extrapolated from the NA48/2 experience. In a sample of $2\times10^{11}$ $K^\pm$ decays, NA48/2 had 52 candidate events selected as $\pi^\mp\mu^\pm\mu^\pm$ for which $M(\pi\mu\mu) \sim m_K$. This was in excellent agreement with the Monte Carlo background estimate and gave the published result, ${\rm BR}(K^\pm\to\pi^\mp\mu^\pm\mu^\pm) < 1.1\times10^{-9}$ (90\%~CL). However, subsequent studies showed that the background consisted entirely of $K^\pm\to\pi^\mp\pi^\pm\pi^\pm$ events with two $\pi\to\mu$ decays, of which at least one was downstream of the spectrometer magnet and therefore poorly reconstructed. As it turns out, the increased $p_\perp$ kick of the NA62 magnet, together with the better invariant mass resolution of the straw-tube spectrometer, can eliminate this background altogether. It is then quite possible for NA62 to push the limit on this BR all the way down to its single-event sensitivity of order $10^{-12}$. Besides the LFV $\pi^0$ decays, there are a number of rare or forbidden $\pi^0$ decays to which NA62 has potential sensitivity, including $\pi^0\to3\gamma$, $\pi^0\to4\gamma$, and $\pi^0\to e^+e^-e^+e^-$ \cite{Moulson:2013oga}. One interesting prospect is to examine $e^+e^-\gamma$ final states of $\pi^0$ decays for evidence for a new, light vector gauge boson with weak couplings to charged SM fermions, a so-called $U$ boson, or ``dark photon''. 
A hypothetical $U$ boson could mediate the interactions of dark-matter constituents, as such providing explanations for various unexpected astrophysical observations and the results of certain dark-matter searches, and could also explain the $>3\sigma$ discrepancy between the measured and predicted values for the muon anomaly, $a_\mu$ (see e.g. \cite{Pospelov:2008jk,Pospelov:2008zw}). A $U$ boson with a mass of less than $m_{\pi^0}/2$ might be directly observable in $\pi^0\to U\gamma$ decays with $U\to e^+e^-$. Using an appropriate trigger, NA62 may collect $\sim$$10^8$ $\pi^0\to e^+e^-\gamma$ decays per year. Moreover, NA62 has good invariant-mass resolution for the $ee$ pair---about 1~MeV even before any attempt at kinematic fitting. Thus, NA62 should be quite competitive in this search. Another possibility is to search for the invisible decay of the $\pi^0$. The least exotic decay to an invisible final state is $\pi^0\to\nu\bar{\nu}$. This is forbidden by angular-momentum conservation if neutrinos are massless; for a massive neutrino $\nu$ of a given flavor and mass $m_\nu < m_{\pi^0}/2$ with standard coupling to the $Z$, the calculation of the decay rate is straightforward. The experimental signature $\pi^0\to{\rm invisible}$ could also arise from $\pi^0$ decays to other weakly interacting neutral states. Experimentally, the process $K^+\to\pi^+\pi^0$ with $\pi^0\to{\rm invisible}$ is very similar to $K^+\to\pi^+\nu\bar{\nu}$, with the important difference that in the former case, the $\pi^+$ is monochromatic in the rest frame of the $K^+$. This means that there is no help from kinematics in identifying $K^+\to\pi^+\pi^0$, $\pi^0\to\gamma\gamma$ with two lost photons---the limit on ${\rm BR}(\pi^0\to{\rm invisible})$ essentially depends on the performance of the photon vetoes. 
With stringent track-quality cuts for the $\pi^+$ and additional cuts in the $(p_{\pi^+}, \theta_{\pi^+})$ plane to deselect events with low-energy, large-angle photons, the $\pi^0$ rejection can be increased by perhaps a factor of ten with respect to the NA62 baseline rejection of $10^{-8}$. Then, NA62 would have the potential to set a limit on ${\rm BR}(\pi^0\to{\rm invisible})$ of $\sim$$10^{-9}$, which is about 100 times better than present limits. \section{Outlook} As of October 2013, the CEDAR/KTAG, almost all of the LAV system, the new LKr readout, and the SAC are installed or under installation. The remainder of the detectors are under construction. The experiment will be ready to take data in the fall of 2014. A first period of data taking during the months of November and December is expected to net the first 10\% of the NA62 data set. The remainder of the data will be collected in long runs in 2015 and 2016. Collection of the full data set will permit the measurement of ${\rm BR}(K^+\to\pi^+\nu\bar{\nu})$ to within about 10\%, which should help to shed light on the flavor structure of any new physics discovered at the LHC, or which may provide evidence for new physics even in the absence of such discoveries. NA62 is also well adapted to search for other rare decays of the $K^+$ and $\pi^0$, with single-event BR sensitivity at the level of $10^{-12}$ for lepton-flavor or -number violating decays and competitive prospects in related searches.
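The scale of these single-event sensitivities can be checked with simple counting. As a rough back-of-the-envelope cross-check (taking the $\sim$10\% signal acceptance quoted for the $\pi^+\nu\bar{\nu}$ selection as an order-of-magnitude stand-in; actual acceptances differ channel by channel),
\begin{equation}
{\rm SES} = \frac{1}{N_K\,A} \sim \frac{1}{10^{13}\times 0.1} = 10^{-12},
\end{equation}
consistent with the level quoted above for the lepton-flavor- and lepton-number-violating $K^+$ channels.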
Q: Sorting a Vector of Vectors

I have a vector of vectors containing elements of type long, as follows:

    vector< vector<long> > Data(N, vector<long>(M));

I have to sort these vectors based on their values, i.e., for two vectors Data[i] and Data[j], if for some k, Data[i][k] < Data[j][k] and Data[i][t] == Data[j][t] for all 0 <= t <= (k-1), then Data[i] should come before Data[j] in the final vector.

Now for the above task I wrote the following code:

    sort(Data.begin(), Data.end(), myfunc);

where

    bool myfunc(vector<long> vec1, vector<long> vec2){
        int i = 0;
        while(i < vec1.size()){
            if(vec1[i] < vec2[i]){
                return false;
            }
            else if(vec1[i] > vec2[i]){
                return true;
            }
            i++;
        }
        return false;
    }

However, I am not getting the desired output. In fact the input and output vectors are the same. Where did I go wrong? Am I missing something?

A: You have a couple mistakes, but not all are evident to you (yet).

    bool myfunc(const vector<long>& vec1, const vector<long>& vec2){
        for(size_t i = 0; i < vec1.size() && i < vec2.size(); i++){
            if(vec1[i] > vec2[i]){
                return false;
            }
            else if(vec1[i] < vec2[i]){
                return true;
            }
        }
        return false;
    }

I took the liberty of using a for loop and size_t, which are better practices here.

A: I tried implementing your code as is. It appears to sort the vectors in descending order. Try toggling the trues and falses in your myfunc function.
    #include <iostream>
    #include <vector>
    #include <algorithm>

    using namespace std;

    bool myfunc(vector<long> vec1, vector<long> vec2){
        int i = 0;
        while(i < vec1.size()){
            if(vec1[i] < vec2[i]){
                return false;
            }
            else if(vec1[i] > vec2[i]){
                return true;
            }
            i++;
        }
        return false;
    }

    int main()
    {
        int N = 5, M = 5;
        vector< vector<long> > Data(N, vector<long>(M));
        for ( int i = 0; i < Data.size(); i++ ) {
            for ( int j = 0; j < Data[i].size(); j++ )
                Data[i][j] = 5 - i;
        }
        sort( Data.begin(), Data.end(), myfunc );
        for ( int i = 0; i < Data.size(); i++ ) {
            for ( int j = 0; j < Data[i].size(); j++ )
                cout << Data[i][j] << " ";
            cout << endl;
        }
        return 0;
    }

Output:

    5 5 5 5 5
    4 4 4 4 4
    3 3 3 3 3
    2 2 2 2 2
    1 1 1 1 1

After toggling the trues and falses, the following code sorts the vectors in the correct order.

    #include <iostream>
    #include <vector>
    #include <algorithm>

    using namespace std;

    bool myfunc(const vector<long> &vec1, const vector<long> &vec2){
        int i = 0;
        while(i < vec1.size()){
            if(vec1[i] < vec2[i]){
                return true;
            }
            else if(vec1[i] > vec2[i]){
                return false;
            }
            i++;
        }
        return true;
    }

    int main()
    {
        int N = 5, M = 5;
        vector< vector<long> > Data(N, vector<long>(M));
        for ( int i = 0; i < Data.size(); i++ ) {
            for ( int j = 0; j < Data[i].size(); j++ )
                Data[i][j] = 5 - i;
        }
        sort( Data.begin(), Data.end(), myfunc );
        for ( int i = 0; i < Data.size(); i++ ) {
            for ( int j = 0; j < Data[i].size(); j++ )
                cout << Data[i][j] << " ";
            cout << endl;
        }
        return 0;
    }

Output:

    1 1 1 1 1
    2 2 2 2 2
    3 3 3 3 3
    4 4 4 4 4
    5 5 5 5 5
Eremon is a genus of beetles in the family Cerambycidae, containing the following species: Eremon fuscoplagiatum Breuning, 1940 Eremon mycerinoides Thomson, 1864 References Cerambycidae genera Apomecynini
On the heels of its January rebranding, 50-year-old Bay Area institution HCA Houston Healthcare Clear Lake celebrates miracles and milestones HCA Houston Healthcare Clear Lake CEO Todd Caliva poses outside of his newly rebranded hospital "I'm what you'd call an HCA Lifer and have been with this company a long time. As HCA Houston Healthcare Clear Lake aligns more deeply with the broader HCA network and looks forward to major investments in women's and emergency services, our future has never been more bright. I'm proud of our staff, patients, and community for their resilience and commitment to making the Bay Area best-in-class not just for medicine, but for working and living." -Todd Caliva, CEO, HCA Houston Healthcare Clear Lake HCA Houston Healthcare Clear Lake at a Glance HCA Houston Healthcare Clear Lake is a comprehensive community hospital, differentiated by:
47 years serving Houston's Bay Area community
Level II trauma center
High-risk obstetrical care
Level IIIb NICU
Pediatric ICU
Comprehensive Stroke Facility
Only dedicated Heart Hospital South of Houston
HCAhoustonhealthcare.com/clearlake Tags: Bay Area Houston Magazine, Clear Lake, HCA, HCA Healthcare, HCA Hospitals, HCA Houston, HCA Houston Healthcare Clear Lake, Hospitals in Houston Posted in Business, Health, News | No Comments Clear Lake Coach Krueger named winner of national basketball award Coach Bill Krueger, right, with former Vice President Dick Cheney. Legendary Clear Lake High School basketball coach Bill Krueger has been honored once again for all the amazing accomplishments in his extraordinary career. The Naismith Memorial Basketball Hall of Fame has named him the 2019 winner of the Morgan Wooten Lifetime Achievement Award for Boys' Basketball, and he was to have been presented the award March 27 (after our magazine went to press) in Atlanta during the McDonald's All American game. Only two people are presented the award each year: one man and one woman who coach.
During his 39-year head coaching career, he compiled almost 1,100 wins, a winning percentage of nearly 81.5 percent. Krueger retired in 1996 as the winningest high school basketball coach in the country. His teams, first at Clear Creek High and later at Clear Lake High, had 30 or more wins in 18 seasons and never had a losing season in 39 years. Three of his high school teams went to the state basketball tournament, winning two state championships. His teams also won 29 district championships. "This is definitely a 'we' thing and not a 'me' thing," Krueger says. "I had all of the help you could ever get. I was in the right school districts. I had players that really loved the game and gave me 100 percent. That's all you could ask for." Krueger says he loved going to work every day he was coaching. The award is named for Morgan Wooten, who only coached high school basketball and is enshrined in the Naismith Hall of Fame. Those honored must be college graduates and must have been head coaches for at least 25 years. Only one male and one female coach are inducted each year. After starting his career in the San Marcos area, where his team won the Class 3A state championship, he became head coach at Clear Creek High in the 1965-66 season, starting out with a 28-3 record and a trip to the state championship game, which Creek lost to Marshall, 73-68. Over the next six seasons, his teams compiled a 243-26 (90.3 percent) record. When the newly built Clear Lake High opened in 1972 and most of his players were transferring, he decided to join them, going on to win the 1989 state championship and reach the 1990 finals and the semifinals in 1995. His teams won at least 30 games on 13 occasions. In 1995, he was honored in a special ceremony in Fort Worth as one of the four winningest high school coaches in the country.
Yet, despite all these honors, he has managed to be one of the most humble men one will ever meet, crediting all of his accomplishments to those who worked with him or played on teams he coached. UHCL, Freeman Library partner to foster reading and writing skills in small children Educators are always looking for new, creative ways to help small children become comfortable with reading and writing. For Elaine Hendrix, Heather Pule and Roberta Raymond, all professors in University of Houston-Clear Lake's College of Education, facilitating a partnership with Clear Lake City-County Freeman Branch Library so that future educators can help parents of small children fall in love with books is a step toward making that happen. "The Freeman Library is such an excellent resource, and after meeting with (Assistant Branch Librarian Youth Services) Elizabeth Hunt and (Branch Manager) Christina Thompson, we decided to find a way to work together," Hendrix said. "Parents have already been bringing their children to the library to introduce them to reading," she said. "We teach future educators reading methods classes. Students need the hands-on practice in the field, doing community-based, experiential learning. Setting up workshops for parents and our students to work together seemed like a perfect fit." There is so much information about how best to help a child learn, it can become overwhelming. "We often get questions from parents and caregivers who want to help their child along as they grow and learn, and they're not exactly sure how to do that," Thompson said. "As a library, our goal is to connect our community with the resources and information they need.
We also believe that parents and caregivers are a child's first and best teacher." Thompson said the library jumped at the opportunity to share Freeman Library's resources with UH-Clear Lake's expert faculty and rising educators. "We have already heard feedback that our families are finding the information they learned about child development to be very empowering," she said. "We have done three parent trainings, including a writing workshop for children ages 3 to 5," Raymond said. "We explained to parents what emergent writing looks like, and gave them information packets. We suggested ways to encourage writing and let them know that those scribbles they're seeing really mean something." Assistant Professor of Reading and Language Arts Heather Pule presented a workshop to parents about oral language development. "We discussed how oral language starts developing at birth and how it continues through everyday talk, through a baby's environment, and through reading from birth," Pule said. "It was wonderful to be able to talk with parents about something so important for their child's development." Hendrix added, "We have done a reading workshop for 18-month- to 3-year-olds, sharing a book and doing hand games to go along. We demonstrated how to be dramatic when reading aloud, and how much it benefits children to have something read over and over again." She said that they'd also discussed how much can be taught from a simple picture book, and how to go deeper than the story to encourage verbal interaction. "It's the goal of the Children's Department to support families, child care providers and communities to help every child enter school ready to learn to read," Hunt said. "Our partnership with UHCL connects local families to experts in early literacy that they might not otherwise have access to. Any community connection the library can make that supports families as they raise their children is a useful one."
Raymond said creating the connection between future educators and the librarians at Freeman helps tap into each other's resources. "We are certifying our students to become early childhood-6th grade teachers, and they have to be prepared to work at all levels since they'll be certifying at all levels," she said. "Both sides can benefit greatly from this experience." For more information about UHCL's Interdisciplinary Studies B.S. with Core Subjects EC-6, visit www.uhcl.edu/academics/degrees/interdisciplinary-studies-bs-ec-6-early-childhood-concentration. For more information about UHCL's Reading M.S. with Reading Specialist Certificate, visit www.uhcl.edu/academics/degrees/reading-ms-reading-specialist-certificate Houston Methodist Clear Lake Pledges $500,000 to expand Leader in Me program Joining in to celebrate the CCISD Leader In Me Houston Methodist announcement at the Clear Creek Education Foundation Kick Off Breakfast were Armand Bayou Elementary students, from left to right, 5th grader Miller Skowron, 4th grader Sophia Tamayo, 5th grader Carmen Evans, and 1st grader Violet Van Haaren; along with Port Commissioner John Kennedy, who serves on the Houston Methodist Board of Directors; CCISD Superintendent Dr. Greg Smith and attorney Levi Benton with Mahomes Bolden PC and on the hospital Board of Directors and CCEF Board of Directors. Houston Methodist Clear Lake Hospital has committed half a million dollars to the Clear Creek Education Foundation in support of the Clear Creek School District's planned expansion of The Leader In Me program in 14 schools over the next five years. The announcement was made at the Clear Creek Education Foundation's Community Kickoff Breakfast held at the CCISD Challenger Columbia Stadium Fieldhouse. Clear Creek ISD is in its third year of progressively implementing The Leader In Me program at its schools.
The Leader In Me program is a whole school transformation process that teaches 21st century leadership and life skills to students and creates a culture of student empowerment based on the idea that every child can be a leader. This mindset leads to tangible improvements in the academic, behavioral and social wellbeing of participating students. With funding made possible by the Clear Creek Education Foundation, Falcon Pass Elementary and Armand Bayou Elementary schools were the first two CCISD campuses to introduce The Leader In Me program into their school culture. Both campuses have seen the trajectory of their school's academic performance rise along with student achievement and positive behaviors. Over the next five years, the Houston Methodist Clear Lake contribution will have the power to substantially increase the footprint of The Leader In Me in CCISD and positively impact an additional 13,000 students in grades pre-k through 12 throughout the District. "The impact of Houston Methodist's generous commitment will be both measurable and immeasurable for years to come," said Superintendent Dr. Greg Smith. "Our students will be even better equipped to achieve their full potential, build the skill-set necessary for success in the 21st century and access more opportunities for a better life." The announcement comes on the heels of a similar commitment of $60,000 over three years by Space Center Rotary Club to begin the program at Space Center Intermediate. Based on Stephen Covey's 7 Habits of Highly Effective People, The Leader in Me allows administrators, faculty, staff and students the opportunity to practice and celebrate the 7 Habits daily, learning how to be proactive, set goals and collaborate with others. The Leader In Me is aligned with many national and state academic standards and the process teaches students the skills needed for academic success in any setting. 
These skills include critical thinking, goal setting, listening and speaking, self-directed learning, presentation making and the ability to work in groups. "The Leader In Me cultivates the qualities and attitudes employers look for in today's highly competitive environment," said Houston Methodist Clear Lake Hospital CEO Dan Newman. "Self-management, independent thinking, problem-solving and other important skills like these empower our students with the tools they need to achieve success. I applaud CCISD's innovation and its commitment to adopt The Leader In Me. Houston Methodist Clear Lake is proud to play a role in providing this unique opportunity to potential future leaders." The District plans to continue to expand the program into even more schools until every CCISD campus and student has the opportunity to experience The Leader In Me and unleash their full potential. Business, government and community organizations interested in becoming a Leader In Me underwriter and partner may contact Deborah Laine, executive director of the Clear Creek Education Foundation (a 501c3 organization) at 281.284.0031 or at dlaine@ccisd.net. Harvest Moon, Hurricanes, and that particularly bad boy, Harvey By Andrea Todaro The Harvest Moon Regatta® is probably the best known sailboat race on the Texas Gulf Coast, although even many participants do not know its history, or the role that hurricanes have played in its evolution. The first HMR was the brainchild of three sailors from Lakewood Yacht Club. As John Broderick told the story, one Friday night at Lakewood the bar conversation turned to the need for more opportunities to sail and in particular, opportunities to get offshore.
Sailmaker John Cameron offered "the best sails I've had were late in the fall in the Gulf after the summer doldrums are over and the winter Northers haven't started." Competitive racer Ed Bailey agreed, saying he missed the old Texas Offshore Race Circuit ("TORC") sailing events. Broderick, a dedicated cruiser and, at the time, Lakewood's commodore, agreed and said, "why don't we organize something?" The bar talk led to discussions with members of other area sailing clubs, some of which were held at Frank's Shrimp Hut, which is now Hooter's in Seabrook. The first regatta, in 1987, was planned as a four-race event beginning with a skippers' meeting on Friday, Sept. 25, and a kickoff party on Saturday, Sept. 26. Racing started on Thursday, Oct. 1 and ran through the 10th with race segments or "legs" from the Galveston jetties to Port Isabel, back up the coast to Port Aransas, back to the Galveston Jetties, and then up to Marker Two at the Clear Creek channel leading into Lakewood's homeport, Seabrook. The full moon closest to the autumnal equinox is known as the "harvest moon" and is characterized by a bright orange color; it is followed by a "hunter's moon." The "harvest moon" can occur as early as September 8th or as late as Oct. 7, which was the date of the "harvest moon" in 1987. Thus, in October 1987, with the races occurring between October 1st and the 10th, the Harvest Moon Regatta® was born. Seventeen yachts sailed that first year, with several bikini beach parties along the way. In 1988, the "harvest moon" fell on Sept. 25, so the race start was scheduled for Thursday, Sept. 22, but on Sept. 8 Hurricane Gilbert destroyed the Queen's Point Marina at Port Isabel. The race start was delayed three weeks to Oct. 14 and the destination was changed to Port Aransas.
Thus began the tradition of sailing to Port Aransas under a magnificent full moon, sometimes a "harvest moon" if it fell during the first seven days of October, otherwise a "hunter's moon" if it fell on or after the 8th of October. Mother Nature and Hurricane Gilbert are credited with the growth of the Harvest Moon Regatta® which grew steadily from the 17 yachts of 1987 to over 260 yachts in later years. The growth was due in large part to the perfect destination, Port Aransas. As John Broderick described it: "This ideal Texas port allows yacht owners and sailors to use minimal days from work to join in on what can be a most memorable overnight sail down the Texas coast during traditionally the best offshore sailing time of the year. And we can all do this in relative safety shared by some 200 other yachts." The race, open to sailors with no club affiliation as well as members of other area sailing clubs, became a bucket list item for many Texas sailors, many of whom had little or no offshore experience. The growth of Harvest Moon Regatta® also resulted in the formation of a charitable organization, Bay Access Sailing Foundation. Bay Access now serves as the regatta's organizing authority, with race management provided by volunteers from Lakewood Yacht Club. In 2015, Hurricane Patricia was forecast to envelop Port Aransas in a "catastrophic rain event" with the worst conditions forecast for Sunday morning when sailors would be required to leave the relative safety of Port Aransas City Marina for the trip back to Houston and various other home ports. Numerous warnings from weather officials eventually prompted race organizers to cancel the race for the first time in its history. Despite the race cancellation, the party in Port Aransas went on, and some of the more seasoned sailors sailed the course and were able to obtain slips in the City Marina harbor to ride out the gale force winds that arrived as forecast on Sunday morning. 
In 2017, when the actual "harvest moon" again fell in October, on the 5th, Hurricane Harvey put a new twist on the story. Hitting the Texas coast near Port Aransas on Aug. 25, the storm devastated "the ideal Texas port" and dumped torrential rain on the entire Houston area. This time, instead of canceling the race or rescheduling it, race organizers decided to reformat the race as a triangle race, similar to Lakewood's TORC event, the Heald Bank Regatta, which is traditionally held in April. Beginning and ending at the Galveston Jetties, the Regatta was followed by an awards party at Lakewood Yacht Club in Seabrook, where regatta volunteers put a special focus on raising money for the devastated Port Aransas. Port Aransas city officials were surprised to receive a check for about $20,000 from the regatta, and they are looking forward to the return of the regatta this year, although it will be many years before Port Aransas recovers to pre-Harvey prosperity.
\section{The Calculation and the Result}
\emph{The Model}--- The Hamiltonian of the SSHH model is written as,
\begin{equation}
H_{SSHH}=H_{SSH}+H_{U}.
\end{equation}
The first term is the SSH Hamiltonian, with the following form,
\begin{equation}
\begin{split}
H_{SSH}=-[\sum^{L}_{i=odd,\sigma}&(t+\delta t)c^{\dag}_{i,\sigma} c^{}_{i+1,\sigma}\\
+&\sum^{L}_{i=even,\sigma}(t-\delta t)c^{\dag}_{i,\sigma} c^{}_{i+1,\sigma}]+H.c.,
\end{split}
\end{equation}
where $\sigma$ represents the spin, and $n_i=\sum_{\sigma}c^{\dag}_{i,\sigma} c^{}_{i,\sigma}$ is the particle-number operator on site $i$. For simplicity, we choose $t=1$ in this paper. The last term is the well-known Hubbard interaction $H_{U}=\frac{U}{2}\sum_i(n_i-1)^2$, originating from fermions with different spins on the same site. As usual, we pay attention only to the half-filled case in this paper \cite{SSH2,Shenbook}. In this condition, the Hubbard interaction is equivalent to
\begin{equation}
H_{U}=U\sum^L_{i=1}n_{i\uparrow}n_{i\downarrow}.
\end{equation}
Following the convention that two neighboring sites $i$ (odd) and $i+1$ (even) are combined into one unit cell \cite{Shenbook}, both the original closed chain and the cut subsystem should have an even number of sites. Furthermore, we cut one end of the subsystem at the first site of the closed chain, so that the subsystem has exactly the same Hamiltonian as the original closed chain without ambiguity in the site numbering, but with open boundary conditions (OBC).
\emph{The free-particle case}--- Without $H_{U}$, the ground state of the whole system can be derived from the single-particle picture. The density matrix is of the following form \cite{Cheong},
\begin{equation}
\rho=\mathrm{det}(I-G)\mathrm{exp}(\sum_{ij}[\mathrm{ln}G(I-G)^{-1}]_{ij}c^{\dag}_i c^{}_j),
\end{equation}
with the Green function matrix $G_{ij}=\langle c^{\dag}_i c^{}_j\rangle$, where $c^{\dag}_i$ and $c^{}_j$ are fermion creation and annihilation operators acting on sites $i$ and $j$.
It has been proved that the reduced density matrix in the free-fermion case is of a similar form, which means the reduced density matrix of subsystem A can be written as \cite{Cheong},
\begin{equation}
\rho_A=\mathrm{det}(I-G)\mathrm{exp}(\sum_{i,j\in A}[\mathrm{ln}G(I-G)^{-1}]_{ij}c^{\dag}_i c^{}_j),
\end{equation}
where the sites $i$ and $j$ now belong only to subsystem A. By diagonalizing the Green function matrix $G_{ij}$, we can derive,
\begin{equation}
\rho_A=\mathrm{exp} \{\sum_k\mathrm{ln}(1-f_k) +\sum_{l} [\mathrm{ln}f_l(1-f_l)^{-1}]d^\dag_l d_l\},
\end{equation}
in which the $d^\dag_l$ are a new set of creation operators that can be obtained from the $c^\dag_i$ by a unitary transformation and $f_{l}$ is the corresponding eigenvalue of $G_{ij}$. With the newly defined single-particle states, the reduced density matrix of A is also diagonalized. We define $U_0$ to be the set of occupied single-particle states and $U_1$ to be the set of unoccupied ones. The entanglement spectrum, obtained as minus the logarithm of the eigenvalues of the reduced density operator, can then be listed as
\begin{equation}
\epsilon_i=-\mathrm{ln} [\prod_{l \in U_0}f_l \prod_{k \in U_1}(1-f_k)].
\end{equation}
\begin{figure}[h]
\begin{minipage}{0.45\linewidth}
\small{(a)} $\delta t=0.2$
\includegraphics[width=0.95\linewidth]{L200t12t08U0.eps}
\centering
\end{minipage}
\begin{minipage}{0.45\linewidth}
\small{(b)} $\delta t=0.4$
\includegraphics[width=0.95\linewidth]{L200t14t06U0.eps}
\centering
\end{minipage}
\begin{minipage}{0.45\linewidth}
\small{(c)} $\delta t=-0.2$
\includegraphics[width=0.95\linewidth]{L200t08t12U0.eps}
\centering
\end{minipage}
\begin{minipage}{0.45\linewidth}
\small{(d)} $\delta t=-0.4$
\includegraphics[width=0.95\linewidth]{L200t06t14U0.eps}
\centering
\end{minipage}
\caption{\label{freespectrum} The chain length $L$ is 200, and the subsystem is cut off with length 100. The spectra are plotted for different values of $\delta t$.
The number above each ground spectral line is the degree of degeneracy, so the total degree of degeneracy in the ground spectrum is the sum of these numbers.}
\end{figure}
Since the particle number is a good quantum number in the system, we can count the number of particles remaining in the subsystem for the component corresponding to each spectral line. Additionally, we will show later that the particle number can be used to distinguish two slightly different phases when there is interaction. Considering all of the above, we plot the spectrum with the remaining particle number as the horizontal axis. Figure~\ref{freespectrum} clearly shows the difference in the entanglement spectrum of the different topological phases: for $\delta t>0$, the ground spectral line is non-degenerate, while for $\delta t<0$, the ground spectral lines are 16-fold degenerate. We claim that the exact value of $\delta t$ does not affect the degree of degeneracy, which means the degeneracy is a signature of the different phases. We remark that the ground spectral line corresponds to the largest eigenvalue of the reduced density operator in our partition.
\emph{Interacting-particle case.}--- We next investigate the robustness of the topologically ordered phase with the interaction as a disturbance, i.e., with the Hubbard interaction $H_{U}$ included in the Hamiltonian. We use the Arnoldi method, an effective algorithm for finding the largest eigenvalues \cite{Arnoldi,Arnoldi1}, to obtain the ground state of the whole system and the reduced density matrix of subsystem A.
Since the particle number is a good quantum number, the reduced density matrix is block diagonal, so we just need to diagonalize each block to find the entanglement spectrum.\\
\begin{figure}[h]
\begin{minipage}{0.45\linewidth}
\small{(a)} $\delta t=0.4,U=3$
\includegraphics[width=0.95\linewidth]{L16t14t06U3.eps}
\centering
\end{minipage}
\begin{minipage}{0.45\linewidth}
\small{(b)} $\delta t=0.4,U=-3$
\includegraphics[width=0.95\linewidth]{L16t14t06U-3.eps}
\centering
\end{minipage}
\begin{minipage}{0.45\linewidth}
\small{(c)} $\delta t=-0.4,U=3$
\includegraphics[width=0.95\linewidth]{L16t06t14U3.eps}
\centering
\end{minipage}
\begin{minipage}{0.45\linewidth}
\small{(d)} $\delta t=-0.4,U=-3$
\includegraphics[width=0.95\linewidth]{L16t06t14U-3.eps}
\centering
\end{minipage}
\begin{minipage}{0.45\linewidth}
\small{(e)} $\delta t=0.3,U=5$
\includegraphics[width=0.95\linewidth]{L16t13t07U5.eps}
\centering
\end{minipage}
\begin{minipage}{0.45\linewidth}
\small{(f)} $\delta t=-0.3,U=-5$
\includegraphics[width=0.95\linewidth]{L16t07t13U-5.eps}
\centering
\end{minipage}\\
\caption{\label{interactingspectrum} The chain length $L$ is 16. The spectra are plotted for different combinations of $\delta t$ and $U$. The number above each ground spectral line is also the degree of degeneracy.}
\end{figure}
As shown in Fig.~\ref{interactingspectrum}, when there is interaction, for $\delta t<0$ the ground spectral lines are 4-fold degenerate, while for $\delta t>0$ the ground spectral line is still non-degenerate. The results are slightly different in the $U>0$ and $U<0$ cases, i.e., the distribution of the degree of degeneracy over the particle number is different. Also, the exact values of $\delta t$ and $U$ do not affect the degeneracy of the spectral lines. According to the degeneracy of the ES in the different regimes of the model, we can draw the phase diagram shown in Fig.~\ref{phasediagram}.
The upper half of the phase diagram can also be derived by investigating the entanglement entropy \cite{Wu}. In that method, the entanglement entropy is defined as the entropy difference between PBC and OBC, which means that the entanglement entropies of both PBC and OBC are needed. In our method, however, the spectrum can be acquired from PBC alone. Furthermore, by distinguishing the different distributions of degenerate states according to their remaining particle number in the two 4-fold degenerate cases, we can identify two slightly different phases, phase III and phase IV in Fig.~\ref{phasediagram}, which cannot be distinguished by investigating the entanglement entropy alone. \begin{figure}[h] \centering \includegraphics[width=0.8\linewidth]{phasediagram.eps}\\ \caption{\label{phasediagram} Different colors (gray levels), also labeled as I, II, III and IV, represent different phases according to the degeneracy of the ground entanglement spectrum.} \end{figure} \emph{Robustness and physical interpretation.}--- Next, we consider the robustness of our results. As shown in Fig.~\ref{robustness}, robustness here has two aspects: one is the length of the primary system and the other is the ratio of the subsystem to the whole system. We just need to cut off an open chain (subsystem in OBC) from the primary closed chain (system in PBC), and the open chain does not have to be exactly half of the primary closed chain. This is not surprising, since the method is based on the difference between OBC and PBC, which means that it is related to the topology and the change on the edge. \begin{figure}[h] \begin{minipage}{0.45\linewidth} \small{(a)} $\delta t=0.4,U=3$ \includegraphics[width=0.95\linewidth]{L12t14t06U3.eps} \centering \end{minipage} \begin{minipage}{0.45\linewidth} \small{(b)} $\delta t=-0.2,U=0$ \includegraphics[width=0.95\linewidth]{L200Cut50t08t12U0.eps} \centering \end{minipage}\\ \caption{\label{robustness} (a) The chain length $L$ is 12.
We compare it with Fig.~\ref{interactingspectrum}. It shows that the degeneracy is robust when the size of the system changes. (b) The length $L$ of the original chain is 200, and the cut-off subsystem has length 50. We compare it with Fig.~\ref{freespectrum}. It shows that the degeneracy is robust when the ratio of the subsystem to the whole system changes.} \end{figure} We would like to point out that similar phenomena relating the degeneracy of the largest ES to physical properties have also been reported in other systems \cite{Pollmann,Rao}. We now present a physical picture of how the method works by providing some detailed evidence. We will still focus on the SSHH model as an illustration. As we mentioned above, the method is related to the bulk-edge correspondence. Considering two chains in PBC and OBC respectively, it is understandable that the dominant parts of the state, the bulk parts, are very similar to each other in the two boundary conditions. When we cut off the open chain and obtain its reduced density matrix, we can also obtain the eigenstates in studying the ES, and they are certainly very similar to the basic eigenstates of the OBC system. In the topologically trivial phase, this means there is only one leading eigenstate; correspondingly, the largest eigenvalue of the reduced density matrix is non-degenerate. In the non-trivial phase, however, although the main components are similar to the bulk state in the open system, it is uncertain whether the edge mode is contained in the state. For example, if there is an edge mode in the open chain, the bulk state with or without the edge mode added always gives the main components in the reduced density matrix, which means the low-lying ES will be two-fold degenerate. The existence of the relation between edge states and the ES in some free-particle systems was studied in \cite{Sirker,Rao,Fidkowski}. Here, we give some more detailed evidence to support our explanation.
In the non-trivial phase, there are four single-particle edge modes, so we have a 16-fold degeneracy resulting from the different combinations of which edge modes are contained. Also, we can count the number of particles remaining in each degenerate state theoretically in a simple way, and this agrees exactly with our numerical calculation. \begin{figure}[h] \begin{minipage}{0.49\linewidth} \small{(a)} \includegraphics[width=0.95\linewidth]{distributionofdegeneracyfree.eps} \centering \end{minipage} \begin{minipage}{0.49\linewidth} \small{(b)} \includegraphics[width=0.95\linewidth]{distributionofdegeneracyinteraction.eps} \centering \end{minipage}\\ \caption{\label{distribution} The numbers in the box show the distribution of degeneracy corresponding to different numbers of remaining spin-up and spin-down particles in the subsystem. The distributions are for (a) the free fermion case, $U=0$, and (b) the interacting fermion case, $U<0$.} \end{figure} This interpretation also works for the case when the interaction is involved. Although the picture of a single-particle edge state no longer exists, it is assumed that there are edge states of pseudo-particles, so-called edge elementary excitations. The edge mode is often considered a paired-particle mode when $U<0$, because a pair of fermions with spin-up and spin-down tend to occupy the same site due to the attractive interaction. The differences in particle number between the four degenerate spectral lines show that a pair of particles with opposite spins is either added to the bulk state of the whole system or not, simultaneously, when $U<0$. The result is shown in Fig.~\ref{distribution}. The situation is similar for repulsive interaction $U>0$: since the Hamiltonian of free fermions of each spin species has particle-hole symmetry in this half-filled system, we only need to view the fermions of one spin species as holes, and the system becomes equivalent to the attractive case $U<0$.
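The free-particle counting can be made explicit; identifying the four modes as one per spin species on each edge of the cut is our reading of the model. With four independent single-particle edge modes, each either occupied or empty, the ground spectral line splits into
\begin{equation*}
\sum_{k=0}^{4}\binom{4}{k}=1+4+6+4+1=2^{4}=16
\end{equation*}
degenerate states, $\binom{2}{i}\binom{2}{j}$ of which carry $i$ extra spin-up and $j$ extra spin-down particles relative to the bulk ($i,j\in\{0,1,2\}$). This is the simple counting of remaining particles mentioned above.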
Here, we remark that the two different kinds of edge states in the attractive ($U<0$) and repulsive ($U>0$) cases correspond to different phases. This matches our phase diagram derived by investigating the distribution of degeneracy in the ES. Our results also provide evidence that the bulk-edge correspondence exists for systems with interactions. \emph{Conclusion.}--- We distinguish different topological phases in the SSHH model by dividing a closed chain into two open chains and detecting their ES. We only need to study the ground state in PBC, which is easier to obtain than in OBC. Moreover, neither the length of the closed chain nor the ratio of the cut-off open chain changes the result. The bulk-edge correspondence and the change from PBC to OBC for entanglement confirm the validity of our method. To be specific, we give evidence that the possible edge mode in PBC leads to the degeneracy of the ground state ES. For the bulk-edge correspondence in interacting systems, our explanation is still applicable in terms of elementary excitations of the edge mode; for example, we can consider whether the edge mode is a single-particle, paired-particle or particle-hole-paired elementary excitation. Since our method is based on the difference between OBC and PBC (the topology of the system with only short-ranged interaction) together with the properties of the edge, we expect that it is also valid in other similar systems. \begin{acknowledgments} This work is supported by NSFC (91536108), NFFTBS (J1030310, J1103205), the Strategic Priority Research Program of the Chinese Academy of Sciences (XDB01010000), the Young Elite Program for Faculty of Universities in Beijing, and Training Program of Innovation for Undergraduates of Beijing. \end{acknowledgments}
Wouldn't know.... I just spent five days in bed groaning with liquid insides. South America is not being kind to me so far. Contrast 2 bouts of illness and three punctures in just three weeks on this trip with zero illnesses and zero punctures in six months on my last trip. I must have offended the cycle gods somehow, but I know not what I have done. I resolve to clean my bike... grease some bearings, lubricate all cables and sprinkle copious amounts of holy water on my tyres.... if only I could just stop running to the bathroom for long enough. Time is awasting and after five days staring at the walls, I am strong enough to ride again. The lost days are a real shame as we had arranged to meet two couples from the Stahlratte crew for a para-gliding course on the outskirts of Bucaramanga. They are travelling much more sensibly than us using the internal combustion engine and so, arrived days in advance of us. After my malingering they are now almost finished. Only the legendary inefficiency of South American countries and a badly organised course has delayed them sufficiently for us to meet them at all! The school has accommodation right by the launch zone and it's a perfect place to hang out and catch up with friends. It's still very much the rainy season in Colombia but it's warm enough to hang out on the balcony. The evenings sound slick with the fizz of water on leaves. Unfortunately we do quite a lot of hanging about.... Inexplicably it takes two days to get our tandem flights as instructors promise to arrive at 9am which in Colombia translates to 1pm and there is no chance to get our flights in. When in South America you have to lose that western mania for efficient organisation and time keeping. These are just false expectations and lead to increases in stress levels. Nah... you just have to chillax and roll with that whole 'mañana' thing. Things are definitely done differently here.... I get that. but here it's taken to the extreme! 
The course should include transportation - a small matter of getting yourself and a now useless wing thing back up that big hill after landing. Sadly the van has not been purchased yet and the guys are hitching rides with locals or taking taxis which can waste a couple of hours a day. On one of the days, only a single instructor turned up and he manned the landing site, meaning there was no-one at the top to check equipment before launch. Inexperienced students were forced to check each other's safety harnesses and gear. Given that a rookie crashed into a tree the previous day, it would have been nice for some extra tuition and reassurance before his next flight!! Sadly I cannot recommend the school - there are just too many safety concerns, and the lack of organisation was taking its toll on the guys who were promised a 10-day course which was now running up to three weeks due to delays. My tandem pilot was taking a phone call during my launch and only noticed my harness was not attached properly when I asked him to check it. He was clipping me in and preparing to launch at the time!!
Q: Unix change desktop background seamlessly So I have a python script which generates an image, and saves over the old image which used to be the background image. I tried to make it run using crontab, but couldn't get that to work, so now I just have a bash script which runs once in my .bashrc when I first log in (I have an if [ firstRun ] kind of thing in there). The problem is, that every now and then, when the background updates, it flashes black before it does - which is not very nice! I currently have it running once a second, but I don't think it's the python that is causing the black screens, and more the way the image is changed over... Is there a way I can prevent these ugly black screens between updates?

Here's all the code to run it, if you want to try it out...

    from PIL import Image, ImageDraw, ImageFilter
    import colorsys
    from random import gauss

    xSize, ySize = 1600, 900

    im = Image.new('RGBA', (xSize, ySize), (0, 0, 0, 0))
    draw = ImageDraw.Draw(im)

    class Cube(object):
        def __init__(self):
            self.tl = (0, 0)
            self.tm = (0, 0)
            self.tr = (0, 0)
            self.tb = (0, 0)
            self.bl = (0, 0)
            self.bm = (0, 0)
            self.br = (0, 0)

        def intify(self):
            for prop in [self.tl, self.tm, self.tr, self.tb, self.bl, self.bm, self.br]:
                prop = [int(i) for i in prop]

    def drawCube((x, y), size, colour=(255, 0, 0)):
        p = Cube()
        colours = [list(colorsys.rgb_to_hls(*[c/255.0 for c in colour])) for _ in range(3)]
        colours[0][1] -= 0
        colours[1][1] -= 0.2
        colours[2][1] -= 0.4
        colours = [tuple([int(i*255) for i in colorsys.hls_to_rgb(*colour)]) for colour in colours]

        p.tl = x, y                      #Top Left
        p.tm = x+size/2, y-size/4        #Top Middle
        p.tr = x+size, y                 #Top Right
        p.tb = x+size/2, y+size/4        #Top Bottom
        p.bl = x, y+size/2               #Bottom Left
        p.bm = x+size/2, y+size*3/4      #Bottom Middle
        p.br = x+size, y+size/2          #Bottom Right
        p.intify()

        draw.polygon((p.tl, p.tm, p.tr, p.tb), fill=colours[0])
        draw.polygon((p.tl, p.bl, p.bm, p.tb), fill=colours[1])
        draw.polygon((p.tb, p.tr, p.br, p.bm), fill=colours[2])

        lineColour = (0, 0, 0)
        lineThickness = 2

        draw.line((p.tl, p.tm), fill=lineColour, width=lineThickness)
        draw.line((p.tl, p.tb), fill=lineColour, width=lineThickness)
        draw.line((p.tm, p.tr), fill=lineColour, width=lineThickness)
        draw.line((p.tb, p.tr), fill=lineColour, width=lineThickness)
        draw.line((p.tl, p.bl), fill=lineColour, width=lineThickness)
        draw.line((p.tb, p.bm), fill=lineColour, width=lineThickness)
        draw.line((p.tr, p.br), fill=lineColour, width=lineThickness)
        draw.line((p.bl, p.bm), fill=lineColour, width=lineThickness)
        draw.line((p.bm, p.br), fill=lineColour, width=lineThickness)

    # -------- Actually do the drawing
    size = 100

    # Read in file of all colours, and random walk them
    with open("/home/will/Documents/python/cubeWall/oldColours.dat") as coloursFile:
        for line in coloursFile:
            oldColours = [int(i) for i in line.split()]
    oldColours = [int(round(c + gauss(0, 1.5))) % 255 for c in oldColours]
    colours = [[int(c*255) for c in colorsys.hsv_to_rgb(i/255.0, 1, 1)] for i in oldColours]
    with open("/home/will/Documents/python/cubeWall/oldColours.dat", "w") as coloursFile:
        coloursFile.write(" ".join([str(i) for i in oldColours]) + "\n")

    for i in range(xSize/size + 2):
        for j in range(2*ySize/size + 2):
            if j % 3 == 0:
                drawCube((i*size, j*size/2), size, colour=colours[(i+j) % 3])
            elif j % 3 == 1:
                drawCube(((i-0.5)*size, (0.5*j+0.25)*size), size, colour=colours[(i+j) % 3])

    im2 = im.filter(ImageFilter.SMOOTH)
    im2.save("cubes.png")
    #im2.show()

And then just run this:

    #!/bin/sh
    while [ 1 ]
    do
        python drawCubes.py
        sleep 1
    done

And set the desktop image to be cubes.png

A: Well, you can change the current wallpaper (in Gnome 3 compatible desktops) by running

    import os
    os.system("gsettings set org.gnome.desktop.background picture-uri file://%(path)s" % {'path': absolute_path})
    os.system("gsettings set org.gnome.desktop.background picture-options wallpaper")

A: If you're using MATE, you're using a fork of Gnome 2.x.
The method I found for Gnome 2 is: gconftool-2 --type string --set /desktop/gnome/background/picture_filename <absolute image path>. The method we tried before would have worked in Gnome Shell.
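One way to avoid the black flash with either approach is double-buffering: render each new image to a file that is not currently displayed, then flip the setting. A rough sketch in Python (the file names are illustrative, and the gsettings keys assume a Gnome 3 style desktop; swap in gconftool-2 for Gnome 2/MATE):

```python
import os
import subprocess

def next_wallpaper_path(current, a="cubes_a.png", b="cubes_b.png"):
    # Alternate between two files so the image being displayed is
    # never the one being rewritten (the likely cause of the flash).
    return b if os.path.basename(current) == a else a

def set_wallpaper(path):
    # Gnome 3 style desktops; picture-uri expects a file:// URI.
    uri = "file://" + os.path.abspath(path)
    subprocess.call(["gsettings", "set",
                     "org.gnome.desktop.background", "picture-uri", uri])
```

The generator script would save to next_wallpaper_path(current) and then call set_wallpaper on the new file, so the compositor never re-reads a half-written image.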
Q: Plugin to integrate in a forum to show user profiles What would I need to use for the user profiles to allow them to list their skills and experiences, and to allow other users to search the forum?

A: There are a few plugins able to handle your request:

*Ultimate Member: I used this to build a community site with experts in different specialties.

*Peepso: A full-featured community-building plugin.

Both can be customized if needed.
\section{Introduction}\label{S:Intro} The theory of elliptic boundary value problems under minimal smoothness assumptions on the boundary or the coefficients has been well-studied in the real-valued setting and there is a rich literature of results and applications. By contrast, the literature in the complex valued setting is much more limited. Some important milestones in the study of complex coefficient operators exist: notable is the resolution of the Kato problem, which can be formulated as a ``Regularity'' boundary value problem for operators that satisfy very specific constraints in structure (\cite{AHLMT}). The challenge in this theory is that solutions to complex coefficient elliptic operators are not necessarily continuous, nor do they satisfy even a weak maximum principle, which is typically the starting point for the study of boundary value problems. Some of the results for complex coefficient operators have been proven under the assumption of interior H\"older regularity (the De Giorgi-Nash-Moser theory), yet it is not clear how this assumption can be correlated with quantitative verifiable assumptions on the operators. In this paper we continue the investigation of solvability of boundary value problems for complex valued second order divergence form elliptic operators under a structural algebraic assumption on the matrix known as {\it $p$-ellipticity}. This structural assumption was introduced independently in \cite{DPcplx} and \cite{CD}, and is a quantitative strengthening of a condition related to $L^p$-contractivity of elliptic operators that was discovered by Cialdea and Maz'ya (\cite{CM1}). When the coefficients of the operator are real, or when $p=2$, the $p$-ellipticity condition is equivalent to the familiar uniform ellipticity condition.
We think of this as a weak substitute for the De Giorgi-Nash-Moser regularity of real valued operators and, in fact, we used a variant of Moser's iteration argument to prove it. Specifically, we considered there operators of the form $\mathcal{L}=\mbox{div} A(x)\nabla +B(x)\cdot\nabla$, where the matrix $A$ is $p$-elliptic and $B$ satisfies a natural minimal scaling condition. This limited regularity theory allowed us to address the solvability of the $L^p$ Dirichlet problem for a collection of operators with complex coefficients whose matrices are in canonical form, as defined below. (\cite{DPcplx} contains a discussion of how to put an operator with lower order terms in canonical form.) The results of this paper concern the aforementioned Regularity problem, in which the boundary data is prescribed in the Sobolev space of functions whose tangential derivatives belong to some $L^p$ space. In analogy with the Dirichlet problem, where one expects to show classical convergence of a solution nontangentially to its boundary data in $L^p$ through the control of a nontangential maximal function, in this problem one expects to prove nontangential estimates for the gradients of the solution in terms of the derivatives of the data on the boundary. The formulation of these estimates must take into account the fact that these solutions and their derivatives do not have pointwise values, but are merely measurable functions in certain Lebesgue spaces. We now discuss the class of elliptic operators for which Dirichlet and Regularity problems are considered. In \cite{KP01}, a class of real valued second order operators (with drift terms like those defined below) was introduced, and the elliptic measure associated to such operators was shown to belong to the $A_\infty$ class with respect to surface measure on the boundary. This implies that the Dirichlet problem for these operators is solvable with data in $L^p$ for some possibly large value of $p$.
The study of this class of operators was motivated by a question of Dahlberg, which in turn was inspired by the fact that these operators arose naturally from a change of variables mapping from Lipschitz into flat domains. Specifically, the coefficients of the matrix $A$ were assumed to satisfy a Carleson measure condition. Examples showed that $A_\infty$ was the optimal result in this regime. Later, a slight strengthening of the Carleson measure condition was shown in \cite{DPP} to imply solvability of the Dirichlet problem for the full range $1 < p < \infty$. We refer to this condition as the ``small'' Carleson condition, defined in Section \ref{S-not}. In \cite{DPR}, this Regularity problem was solved for equations of the form $\mathcal{L}=\mbox{div} A(x)\nabla$, with $A$ {\it real and elliptic}, satisfying this small Carleson condition. There are open questions even for operators with real coefficients that satisfy the Carleson condition of \cite{KP01}, such as solvability of the Regularity problem in $L^p$ for $p$ near 1. \smallskip The first main result of this paper is the solvability of the Regularity problem for boundary data $\nabla_Tf\in L^p$, under the assumption that the matrix $A$ is $p$-elliptic and satisfies the small Carleson condition. \begin{theorem}\label{S3:T0} Let $1<p<\infty$, and let $\Omega$ be the upper half-space ${\mathbb R}^n_+=\{(x_0,x'):\,x_0>0\mbox{ and } x'\in{\mathbb R}^{n-1}\}$. Consider the operator $$ \mathcal Lu = \partial_{i}\left(A^0_{ij}(x)\partial_{j}u\right) $$ and assume that $\mathcal L$ can be re-written as \begin{equation} \mathcal Lu = \partial_{i}\left(A_{ij}(x)\partial_{j}u\right) +B_i\partial_iu\label{eq-oper-mod} \end{equation} where the matrix $A$ is $p$-elliptic with constants $\lambda_p,\Lambda$, $A_{00}=1$ and $\mathscr{I}m\,A_{0j}=0$ for all $1\leq j \leq n-1$.
Assume also that \begin{equation}\label{Car_hatAA} d{\mu}(x)=\sup_{B_{\delta(x)/2}(x)}(|\nabla{A}|^{2}+|B|^2) \delta(x)\,dx \end{equation} is a Carleson measure in $\Omega$. Then there exist $K=K(\lambda_p,\Lambda,n,p)>0$ and $C(\lambda_p, \Lambda ,n,p)>0$ such that if \begin{equation}\label{Small-Cond} \|\mu\|_{\mathcal C} < K \end{equation} then the $L^p$ Regularity problem \begin{equation}\label{E:R2} \begin{cases} \,\,{\mathcal L}u=0 & \text{in } \Omega, \\[4pt] \quad u=f & \text{ for $\sigma$-a.e. }\,x\in\partial\Omega, \\[4pt] \tilde{N}_{p,a}(\nabla u) \in L^{p}(\partial \Omega), & \end{cases} \end{equation} is solvable and the estimate \begin{equation}\label{Main-Est} \|\tilde{N}_{p,a} (\nabla u)\|_{L^{p}(\partial \Omega)}\leq C\|\nabla_T f\|_{L^{p}(\partial \Omega;{\BBC})} \end{equation} holds for all energy solutions $u$ with datum $f$. \end{theorem} The second main theorem of the paper extends the range of solvability of $\mathcal Lu=0$ with $L^{p}$ Dirichlet boundary data for variable complex coefficient operators satisfying these Carleson conditions on the coefficients. In \cite{DPcplx} we considered the solvability in the range $p\in (p_0,p_0')$ where \begin{equation}\label{eqp0} p_0=\inf\{p>1:\, \mbox{the matrix $A$ is $p$-elliptic}\}. \end{equation} Thanks to the solvability of the Regularity problem (Theorem \ref{S3:T0}) we are now able to use the technique of Z. Shen (\cite{Sh1}, \cite{Sh2}) and extend the previously established range of solvability of the Dirichlet problem to a larger interval $p\in (p_0,p_0'\frac{n-1}{n-1-p_0'})$. In particular, when $n=2,3$ or when $p_0'>n-1$, the range of solvability is extended to all $p\in (p_0,\infty).$
Assume again that $\mathcal L$ can be rewritten as \eqref{eq-oper-mod} and let $p_0$ be defined as in \eqref{eqp0}. Set $p_{max}=\infty$ when $p_0'\ge n-1$, and $$p_{\max}=\frac{p_0'(n-1)}{n-1-p_0'}$$ otherwise. Finally, consider any $p_0<p<p_{max}$. Assume further that the matrix $A$ satisfies $A_{00}=1$ and $\mathscr{I}m\,A_{0j}=0$ for all $1\leq j \leq n-1$, and let \begin{equation}\label{Car_hatAA-2} d{\mu}(x)=\sup_{B_{\delta(x)/2}(x)}(|\nabla{A}|^{2}+|B|^2) \delta(x)\,dx \end{equation} be a Carleson measure in $\Omega$. Then there exist $K=K(\lambda_p,\Lambda,n,p)>0$ and $C(\lambda_p, \Lambda , n,p)>0$ such that if \begin{equation}\label{Small-Cond-2} \|\mu\|_{\mathcal C} < K \end{equation} then the $L^p$-Dirichlet problem \begin{equation}\label{E:D2} \begin{cases} \,\,{\mathcal L}u=0 & \text{in } \Omega, \\[4pt] \quad u=f & \text{ for $\sigma$-a.e. }\,x\in\partial\Omega, \\[4pt] \tilde{N}_{p,a}(u) \in L^{p}(\partial \Omega), & \end{cases} \end{equation} is solvable and the estimate \begin{equation}\label{Main-Est2} \|\tilde{N}_{p,a} (u)\|_{L^{p}(\partial \Omega)}\leq C\|f\|_{L^{p}(\partial \Omega;{\BBC})} \end{equation} holds for all energy solutions $u$ with datum $f$. \end{theorem} In particular observe that $p_{max}=\infty$ in dimensions $2$ and $3$ and that when $n\ge 4$ $$p_{\max}>\frac{2(n-1)}{n-3}.$$ \begin{remark} We address at the end of Section \ref{S-not} how we can rewrite any operator $\mathcal L$ as \eqref{eq-oper-mod} with coefficients $A_{0j}$ real and $A_{00}=1$. We require this particular form of our operator in the main section, Section \ref{S4}, of this paper. \end{remark}
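We note why the bound $p_{\max}>\frac{2(n-1)}{n-3}$ for $n\ge 4$ holds; this is a one-line verification, not an additional assumption. Since $A$ is elliptic, $p$-ellipticity holds in an interval around $2$, so $p_0<2<p_0'$; as the map $t\mapsto \frac{t(n-1)}{n-1-t}$ is increasing on $(0,n-1)$, for $p_0'<n-1$ we obtain
\begin{equation*}
p_{\max}=\frac{p_0'(n-1)}{n-1-p_0'}>\frac{2(n-1)}{n-1-2}=\frac{2(n-1)}{n-3},
\end{equation*}
while the bound is trivial when $p_{\max}=\infty$.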
\begin{remark} Lemma 2.6 of \cite{DPcplx} shows that $L^q$ averages of solutions on interior balls are controlled by $L^2$ averages for $q$ in the range $(p_0,\frac{p_0'n}{n-2})$, extending beyond the range of $p$-ellipticity. Thus one can use the $N_q$ nontangential maximal function for such $q$ in the estimate \eqref{Main-Est2}. The arguments for Theorem \ref{S3:T0} show that, similarly, the gradient $\nabla u$ of solutions to the Regularity problem will be locally $L^q$ integrable for $q$ in the range $(p_0,\frac{p_0'n}{n-2})$. In particular, by Sobolev embedding, solvability of the Regularity problem in the regime $p_0' > n-2$ implies that solutions are H\"older continuous. \end{remark} The paper is organized as follows. In Section \ref{S-not}, we define the concept of $p$-ellipticity, the nontangential maximal function, the $p$-adapted square function, Carleson measures and the notions of solvability of these various boundary value problems. In Section \ref{SS:43}, we establish bounds for the nontangential maximal function by the square function. The estimates for the $p$-adapted square functions are established in Section \ref{S4}. In light of \eqref{eq-partial}, square functions that involve tangential derivatives are easier to handle and we begin by bounding these. We then show that, essentially, the square function with the full gradient can be controlled by the square functions of tangential derivatives. In Sections \ref{S5} and \ref{S6}, we present the proofs of the two main theorems. \section{Basic notions and definitions}\label{S-not} \subsection{$p$-ellipticity} The concept of $p$-ellipticity was introduced in \cite{CM}, where the authors investigated the $L^p$-dissipativity of second order divergence complex coefficient operators. Later, Carbonaro and Dragi\v{c}evi\'c \cite{CD} gave an equivalent definition and coined the term ``$p$-ellipticity''. It is this definition that was most useful for the results of \cite{DPcplx}.
To introduce this, we define, for $p>1$, the ${\mathbb R}$-linear map ${\mathcal J}_p:{\mathbb C}^n\to {\mathbb C}^n$ by $${\mathcal J}_p(\alpha+i\beta)=\frac{\alpha}{p}+i\frac{\beta}{p'}$$ where $p'=p/(p-1)$ and $\alpha,\beta\in{\mathbb R}^n$. \begin{definition}\label{pellipticity} Let $\Omega\subset{\mathbb R}^n$. Let $A:\Omega\to M_n(\mathbb C)$, where $M_n(\mathbb C)$ is the space of $n\times n$ complex valued matrices. We say that $A$ is $p$-elliptic if for a.e. $x\in\Omega$ \begin{equation}\label{pEll} \mathscr{R}e\,\langle A(x)\xi,{\mathcal J}_p\xi\rangle \ge \lambda_p|\xi|^2,\qquad\forall \xi\in{\mathbb C}^n \end{equation} for some $\lambda_p>0$ and there exists $\Lambda>0$ such that \begin{equation} |\langle A(x)\xi,\eta \rangle| \le \Lambda |\xi| |\eta|, \qquad\forall \xi, \,\eta\in{\mathbb C}^n. \end{equation} \end{definition} It is now easy to observe that the notion of $2$-ellipticity coincides with the usual ellipticity condition for complex matrices. As shown in \cite{CD} if $A$ is elliptic, then there exists $\mu(A)>0$ such that $A$ is $p$-elliptic if and only if $\left|1-\frac2p\right|<\mu(A).$ Also $\mu(A)=\infty$ if and only if $A$ is real valued. \subsection{Nontangential maximal and square functions} \label{SS:NTS} On a domain of the form \begin{equation}\label{Omega-111} \Omega=\{(x_0,x')\in\BBR\times{\BBR}^{n-1}:\, x_0>\phi(x')\}, \end{equation} where $\phi:\BBR^{n-1}\to\BBR$ is a Lipschitz function with Lipschitz constant given by $L:=\|\nabla\phi\|_{L^\infty(\BBR^{n-1})}$, define for each point $x=(x_0,x')\in\Omega$ \begin{equation}\label{PTFCC} \delta(x):=x_0-\phi(x')\approx\mbox{dist}(x,\partial\Omega). \end{equation} In other words, $\delta(x)$ is comparable to the distance of the point $x$ from the boundary of $\Omega$. 
\begin{definition}\label{DEF-1} A cone of aperture $a>0$ is a non-tangential approach region to the point $Q=(x_0,x') \in \partial\Omega$ defined as \begin{equation}\label{TFC-6} \Gamma_{a}(Q)=\{(y_0,y')\in\Omega:\,a|x_0-y_0|>|x'-y'|\}. \end{equation} \end{definition} We require $1/a>L$, otherwise the aperture of the cone is too large and the cone might not lie inside $\Omega$. When $\Omega=\BBR^n_+$ all parameters $a>0$ may be considered. Sometimes it is necessary to truncate $\Gamma(Q)$ at height $h$, in which case we write \begin{equation}\label{TRe3} \Gamma_{a}^{h}(Q):=\Gamma_{a}(Q)\cap\{x\in\Omega:\,\delta(x)\leq h\}. \end{equation} The square function of $w\in W^{1,2}_{loc}(\Omega;{\BBC})$ at $Q\in\partial\Omega$ relative to the cone $\Gamma_{a}(Q)$ is defined by $S_{a}(w)(Q):=\big(\int_{\Gamma_{a}(Q)}|\nabla w(x)|^{2}\delta(x)^{2-n}\,dx\big)^{1/2}$; an application of Fubini's theorem gives \begin{equation}\label{SSS-1} \|S_{a}(w)\|^{2}_{L^{2}(\partial\Omega)}\approx\int_{\Omega}|\nabla w(x)|^{2}\delta(x)\,dx. \end{equation} In \cite{DPP}, a ``$p$-adapted'' square function was introduced. The usual square function is the $p$-adapted square function when $p=2$. In the following definition, when $p<2$ we use the convention that the expression $|\nabla w(x)|^{2} |w(x)|^{p-2}$ is zero whenever $\nabla w(x)$ vanishes. \begin{definition}\label{D:Sp} For $\Omega \subset \mathbb{R}^{n}$, the $p$-adapted square function of $w:\Omega\to {\mathbb C}$ such that $w|w|^{p/2-1}\in W^{1,2}_{loc}(\Omega; {\BBC})$ at $Q\in\partial\Omega$ relative to the cone $\Gamma_{a}(Q)$ is defined by \begin{equation}\label{yrddp} S_{p,a}(w)(Q):=\left(\int_{\Gamma_{a}(Q)}|\nabla w(x)|^{2} |w(x)|^{p-2}\delta(x)^{2-n}\,dx\right)^{1/2} \end{equation} and, for each $h>0$, its truncated version is given by \begin{equation}\label{yrddp.2} S_{p,a}^{h}(w)(Q):=\left(\int_{\Gamma_{a}^{h}(Q)}|\nabla w(x)|^{2}|w(x)|^{p-2}\delta(x)^{2-n}\,dx\right)^{1/2}. \end{equation} We further introduce the following convention.
When $w:\Omega\to {\mathbb C}^k$ with component functions $(w_i)_{1\le i\le k}$ we denote by $S_{p,a}(w)(Q)$ the following sum \begin{equation}\label{yrddp.4} S_{p,a}(w)(Q):=\sum_{i=1}^k S_{p,a}(w_i)(Q), \end{equation} hence for example if $w=\nabla_T u$ then $S_{p,a}(\nabla_T u)(Q)$ denotes $$\sum_{i=1}^{n-1}S_{p,a}(\partial_i u)(Q).$$ \end{definition} It is not immediately clear that the integrals appearing in \eqref{yrddp} are well-defined. However, in \cite{DPcplx}, it was shown that the expressions of the form $|\nabla w(x)|^{2} |w(x)|^{p-2}$, when $w$ is a solution of $\mathcal Lw=0$, are locally integrable and hence the definition of $S_p(w)$ makes sense for such $p$ whenever $p$-ellipticity holds. This in particular applies with some modifications to $w=\nabla_T u$ on ${\mathbb R}^n_+$. Each component of $w$ solves a PDE ${\mathcal L}(w_k)=\partial_i((\partial_k A_{ij})w_j)-\partial_k(B_i)w_i$. The right-hand side of this PDE is good enough for the regularity theory developed in \cite{DPcplx} to apply to this more complicated system of equations as well. \vglue1mm A simple application of Fubini's theorem gives \begin{equation}\label{SSS-2} \|S_{p,a}(w)\|^{p}_{L^{p}(\partial\Omega)}\approx\int_{\Omega}|\nabla w(x)|^{2}|w(x)|^{p-2}\delta(x)\,dx. \end{equation} \begin{definition}\label{D:NT} For $\Omega\subset\mathbb{R}^{n}$ as above, and for a continuous $w: \Omega \rightarrow \mathbb C$, the nontangential maximal function ($h$-truncated nontangential maximal function) of $w$ at $Q\in\partial\Omega$ relative to the cone $\Gamma_{a}(Q)$, is defined by \begin{equation}\label{SSS-2a} N_{a}(w)(Q):=\sup_{x\in\Gamma_{a}(Q)}|w(x)|\,\,\text{ and }\,\, N^h_{a}(w)(Q):=\sup_{x\in\Gamma^h_{a}(Q)}|w(x)|. \end{equation} Moreover, we shall also consider a related version of the above nontangential maximal function. This is denoted by $\tilde{N}_{p,a}$ and is defined using $L^p$ averages over balls in the domain $\Omega$.
Specifically, given $w\in L^p_{loc}(\Omega;{\BBC})$ we set
\begin{equation}\label{SSS-3}
\tilde{N}_{p,a}(w)(Q):=\sup_{x\in\Gamma_{a}(Q)}w_p(x)\,\,\text{ and }\,\, \tilde{N}_{p,a}^{h}(w)(Q):=\sup_{x\in\Gamma_{a}^{h}(Q)}w_p(x)
\end{equation}
for each $Q\in\partial\Omega$ and $h>0$ where, at each $x\in\Omega$,
\begin{equation}\label{w}
w_p(x):=\left(\Xint-_{B_{\delta(x)/2}(x)}|w(z)|^{p}\,dz\right)^{1/p}.
\end{equation}
\end{definition}
Above and elsewhere, a barred integral indicates an averaging operation. Observe that, given $w\in L^p_{loc}(\Omega;{\BBC})$, the function $w_p$ associated with $w$ as in \eqref{w} is continuous and $\tilde{N}_{p,a}(w)=N_a(w_p)$ everywhere on $\partial\Omega$. The $L^2$-averaged nontangential maximal function was introduced in \cite{KP2} in connection with the Neumann and Regularity boundary value problems. In the context of $p$-ellipticity, Proposition 3.5 of \cite{DPcplx} shows that there is no difference between $L^2$ averages and $L^p$ averages when $w=u$ solves $\mathcal Lu=0$, and that $\tilde{N}_{p,a}(u)$ and $\tilde{N}_{2,a'}(u)$ are comparable in $L^r$ norms for all $r>0$ and all allowable apertures $a,a'$. In this paper we shall consider $w=\nabla u$. However, as it turns out, a modification of the argument following (2.20) of \cite{DPcplx} applies in our case: each component $w_k=\partial_k u$ of $w$ solves an equation similar to one considered in \cite{DPcplx}, namely
\begin{equation}\label{eq-partial-z}
{\mathcal L}w_k=\partial_i(A_{ij}\partial_jw_k) = -\partial_i((\partial_k A_{ij})w_j).
\end{equation}
Observe that the condition $|\nabla A(x)| \leq K (\delta(x))^{-1}$ implies that the right-hand side of \eqref{eq-partial-z} is the divergence of a vector in $L^2_{loc}$, and thus the solutions $w_k$ belong to $W^{1,2}_{\text{loc}}$. We record the regularity results in the following Proposition.
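Before stating it, let us indicate the (formal) computation behind \eqref{eq-partial-z}; it is justified in the weak sense using the local Lipschitz regularity of $A$. Differentiating the equation $\partial_i(A_{ij}\partial_j u)=0$ in the $x_k$ variable and using the product rule, with $w_j=\partial_j u$,
\begin{equation*}
0=\partial_k\big[\partial_i(A_{ij}\partial_{j}u)\big]
=\partial_i\big((\partial_kA_{ij})\partial_{j}u\big)+\partial_i\big(A_{ij}\partial_{j}\partial_{k}u\big)
=\partial_i\big((\partial_kA_{ij})w_j\big)+\partial_i\big(A_{ij}\partial_{j}w_k\big),
\end{equation*}
so that ${\mathcal L}w_k=\partial_i(A_{ij}\partial_jw_k)=-\partial_i\big((\partial_k A_{ij})w_j\big)$.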
\begin{proposition}\label{Regularity} Suppose that $u\in W^{1,2}_{loc}(\Omega;{\BBC})$ is a weak solution of ${\mathcal L}u=\mbox{\rm div} A(x)\nabla u= 0$ in $\Omega$. Let $p_0 = \inf \{p>1: \text{$A$ is $p$-elliptic}\}$, and suppose that $A$ has bounded measurable coefficients satisfying
\begin{equation}\label{Bcond}
|\nabla A(x)| \leq K (\delta(x))^{-1}, \quad\forall x \in \Omega
\end{equation}
where the constant $K$ is uniform, and $\delta(x)$ denotes the distance of $x$ to the boundary of $\Omega$. Then we have the following improvement in the regularity of $\nabla u$. For any $B_{4r}(x)\subset\Omega$ and $\varepsilon>0$ there exists $C_\varepsilon>0$ such that
\begin{equation}\label{RHthm1}
\left(\Xint-_{B_{r}(x)} |\nabla u|^{p} \,dy\right)^{1/{p}} \le C_\varepsilon\left(\Xint-_{B_{2 r}(x)} |\nabla u|^{q} \,dy\right)^{1/{q}}+\varepsilon \left(\Xint-_{B_{2 r}(x)} |\nabla u|^{2} \,dy\right)^{1/{2}}
\end{equation}
for all $p,q \in (p_0, \frac{p'_0n}{n-2})$. (Here $p_0'=p_0/(p_0-1)$ and when $n=2$ one can take $p,q\in (p_0,\infty)$.) The constant in the estimate depends on the dimension, the $p$-ellipticity constants, $\Lambda$, $K$ and $\varepsilon>0$, but not on $x\in\Omega$, $r>0$ or $u$. It follows that for any boundary ball $\Delta=\Delta_d\subset\partial\Omega$, for any $p,q\in (p_0, \frac{p'_0n}{n-2})$ and for any allowed aperture parameters $a,a'>0$ there exists $m=m(a,a')>1$ such that
\begin{equation}\label{NN}
\|\tilde{N}^d_{p,a}(\nabla u)\|_{L^r(\Delta_d)} \lesssim \|\tilde{N}^{2d}_{q,a'}(\nabla u)\|_{L^r(m\Delta_d)}
\end{equation}
for all $r >0$. We also have, for the same range of $p$'s, the estimate
\begin{equation}\label{RHthm2}
\left(r^2\Xint-_{B_{r}(x)} |\nabla \partial_ku|^2|\partial_k u|^{p-2} \,dy\right)^{1/{p}} \le C_p \left(\Xint-_{B_{2 r}(x)} |\nabla u|^{2} \,dy\right)^{1/{2}},
\end{equation}
for all $k=0,1,2,\dots,n-1$.
\end{proposition}
\subsection{Carleson measures}
\label{SS:Car}
We begin by recalling the definition of a Carleson measure in a domain $\Omega$ as in \eqref{Omega-111}. For $P\in{\BBR}^n$, define the ball centered at $P$ with radius $r>0$ as
\begin{equation}\label{Ball-1}
B_{r}(P):=\{x\in{\BBR}^n:\,|x-P|<r\}.
\end{equation}
Next, given $Q \in \partial\Omega$, by $\Delta=\Delta_{r}(Q)$ we denote the surface ball $\partial\Omega\cap B_{r}(Q)$. The Carleson region $T(\Delta_r)$ is then defined by
\begin{equation}\label{tent-1}
T(\Delta_{r}):=\Omega\cap B_{r}(Q).
\end{equation}
\begin{definition}\label{Carleson} A Borel measure $\mu$ in $\Omega$ is said to be Carleson if there exists a constant $C\in(0,\infty)$ such that for all $Q\in\partial\Omega$ and $r>0$
\begin{equation}\label{CMC-1}
\mu\left(T(\Delta_{r})\right)\leq C\sigma(\Delta_{r}),
\end{equation}
where $\sigma$ is the surface measure on $\partial\Omega$. The best possible constant $C$ in the above estimate is called the Carleson norm and is denoted by $\|\mu\|_{\mathcal C}$.
\end{definition}
In all that follows we assume that the coefficients of the matrix $A$ and of the vector $B$ of the elliptic operator $\mathcal{L}=\mbox{div} A(x)\nabla +B(x)\cdot\nabla$ satisfy the following natural conditions. First, we assume that the entries $A_{ij}$ of $A$ are in ${\rm Lip}_{loc}(\Omega)$ and the entries of $B$ are in $L^\infty_{loc}(\Omega)$. Second, we assume that
\begin{equation}\label{CarA}
d\mu(x)=\sup_{B_{\delta(x)/2}(x)}[|\nabla A|^2+|B|^2]\delta(x) \,dx
\end{equation}
is a Carleson measure in $\Omega$. Sometimes, and for certain coefficients of $A$, we will assume that their Carleson norm $\|\mu\|_{\mathcal{C}}$ is sufficiently small. The fact that $\mu$ is a Carleson measure allows one to relate integrals in $\Omega$ with respect to $\mu$ to boundary integrals involving the nontangential maximal function. We have the following result for our averaged nontangential maximal function (cf. \cite{DPcplx}).
\begin{theorem}\label{T:Car} Suppose that $d\nu=f\,dx$ and $d\mu(x)=\left[\sup_{B_{\delta(x)/2}(x)}|f|\right]dx$. Assume that $\mu$ is a Carleson measure. Then there exists a finite constant $C=C(L,a)>0$ such that for every $u\in L^{p}_{loc}(\Omega;{\BBC})$ one has
\begin{equation}\label{Ca-222}
\int_{\Omega}|u(x)|^p\,d\nu(x)\leq C\|\mu\|_{\mathcal{C}} \int_{\partial\Omega}\left(\tilde{N}_{p,a}(u)\right)^p\,d\sigma.
\end{equation}
Furthermore, consider $\Omega={\mathbb R}^n_+$ where $\mu$ and $\nu$ are measures as above supported in $\Omega$ and $\delta(x_0,x')=x_0$. Let $h:{\mathbb R}^{n-1}\to {\mathbb R}^+$ be a Lipschitz function with Lipschitz norm $L$ and
$$\Omega_h=\{(x_0,x'):x_0>h(x')\}.$$
Then for any $\Delta\subset {\mathbb R}^{n-1}$ with $\sup_{\Delta} h\le \mbox{diam}(\Delta)/2$ we have
\begin{equation}\label{Ca-222-x}
\int_{\Omega_h\cap T(\Delta)}|u(x)|^p\,d\nu(x)\leq C\|\mu\|_{\mathcal{C}} \int_{\partial\Omega_h\cap T(\Delta)}\left(\tilde{N}_{p,a,h}(u)\right)^p\,d\sigma.
\end{equation}
Here for a point $Q=(h(x'),x')\in\partial\Omega_h$ we define
\begin{equation}
\tilde{N}_{p,a,h}(u)(Q) = \sup_{\Gamma_a(Q)}u_p,\label{eq-Nh}
\end{equation}
where
\begin{equation}\label{TFC-6x}
\Gamma_{a}(Q)=\Gamma_{a}((h(x'),x'))=\{y=(y_0,y')\in\Omega:\,a|h(x')-y_0|>|x'-y'|\}
\end{equation}
and the $L^p$ average $u_p$ is defined by \eqref{w}, where the distance $\delta$ is taken with respect to the domain $\Omega={\mathbb R}^n_+$.
\end{theorem}
\subsection{The $L^p$-Dirichlet problem}
We recall the definition of $L^p$ solvability of the Dirichlet problem. When an operator $\mathcal L$ as in Theorem \ref{S3:T1} is uniformly elliptic (i.e., $2$-elliptic), the Lax-Milgram lemma can be applied and guarantees the existence of weak solutions.
That is, given any $f\in \dot{B}^{2,2}_{1/2}(\partial\Omega;{\BBC})$, the homogeneous space of traces of functions in $\dot{W}^{1,2}(\Omega;{\BBC})$, there exists a unique (up to a constant) $u\in \dot{W}^{1,2}(\Omega;{\BBC})$ such that $\mathcal{L}u=0$ in $\Omega$ and ${\rm Tr}\,u=f$ on $\partial\Omega$. We call these solutions \lq\lq energy solutions" and use them to define the notion of solvability of the $L^p$ Dirichlet problem.
\begin{definition}\label{D:Dirichlet} Let $\Omega$ be the Lipschitz domain introduced in \eqref{Omega-111} and fix an integrability exponent $p\in(1,\infty)$. Also, fix an aperture parameter $a>0$. Consider the following Dirichlet problem for a complex valued function $u:\Omega\to{\BBC}$:
\begin{equation}\label{E:D}
\begin{cases}
0=\partial_{i}\left(A_{ij}(x)\partial_{j}u\right) +B_{i}(x)\partial_{i}u & \text{in } \Omega,
\\[4pt]
u(x)=f(x) & \text{ for $\sigma$-a.e. }\,x\in\partial\Omega,
\\[4pt]
\tilde{N}_{2,a}(u) \in L^{p}(\partial \Omega), &
\end{cases}
\end{equation}
where the usual Einstein summation convention over repeated indices ($i,j$ in this case) is employed. We say the Dirichlet problem \eqref{E:D} is solvable for a given $p\in(1,\infty)$ if there exists a $C=C(p,\Omega)>0$ such that for all boundary data $f\in L^p(\partial\Omega;{\BBC})\cap \dot{B}^{2,2}_{1/2}(\partial\Omega;{\BBC})$ the unique energy solution satisfies the estimate
\begin{equation}\label{y7tGV}
\|\tilde{N}_{2,a} (u)\|_{L^{p}(\partial\Omega)}\leq C\|f\|_{L^{p}(\partial\Omega;{\BBC})}.
\end{equation}
Similarly, we say the Regularity problem for the same PDE is solvable for a given $p\in(1,\infty)$ if there exists a $C=C(p,\Omega)>0$ such that for all boundary data $f\in \dot{W}^{1,p}(\partial\Omega;{\BBC})\cap \dot{B}^{2,2}_{1/2}(\partial\Omega;{\BBC})$ the unique (modulo constants) energy solution satisfies the estimate
\begin{equation}\label{y7tGVx}
\|\tilde{N}_{2,a} (\nabla u)\|_{L^{p}(\partial\Omega)}\leq C\|\nabla_Tf\|_{L^{p}(\partial\Omega;{\BBC})}.
\end{equation} \end{definition} \noindent{\it Remark.} Given $f\in\dot{B}^{2,2}_{1/2}(\partial\Omega;{\BBC})\cap L^p(\partial\Omega;{\BBC})$ the corresponding energy solution constructed above is unique (since the decay implied by the $L^p$ estimates eliminates constant solutions). As the space $\dot{B}^{2,2}_{1/2}(\partial\Omega;{\BBC})\cap L^p(\partial\Omega;{\BBC})$ is dense in $L^p(\partial\Omega;{\BBC})$ for each $p\in(1,\infty)$, it follows that there exists a unique continuous extension of the solution operator $f\mapsto u$ to the whole space $L^p(\partial\Omega;{\BBC})$, with $u$ such that $\tilde{N}_{2,a} (u)\in L^p(\partial\Omega)$ and the accompanying estimate $\|\tilde{N}_{2,a} (u) \|_{L^{p}(\partial \Omega)} \leq C\|f\|_{L^{p}(\partial\Omega;{\BBC})}$ being valid. Furthermore, as shown in the Appendix of \cite{DPcplx} for any $f\in L^p(\partial \Omega;\mathbb C)$ the corresponding solution $u$ constructed by the continuous extension attains the datum $f$ as its boundary values in the following sense. Consider the average $\tilde u:\Omega\to \mathbb C$ defined by $$\tilde{u}(x)=\Xint-_{B_{\delta(x)/2}(x)} u(y)\,dy,\quad \forall x\in \Omega.$$ Then \begin{equation} f(Q)=\lim_{x\to Q,\,x\in\Gamma(Q)}\tilde u(x),\qquad\text{for a.e. }Q\in\partial\Omega, \end{equation} where the a.e. convergence is taken with respect to the ${\mathcal H}^{n-1}$ Hausdorff measure on $\partial\Omega$. We can make a similar statement regarding nontangential convergence of gradients for solutions to the Regularity problem. That is, defining $${\tilde \nabla u}(x)=\Xint-_{B_{\delta(x)/2}(x)} \nabla u(y)\,dy,\quad \forall x\in \Omega,$$ the same proof in \cite{DPcplx} yields that \begin{equation} \nabla u(Q) =\lim_{x\to Q,\,x\in\Gamma(Q)}\tilde \nabla u(x),\qquad\text{for a.e. }Q\in\partial\Omega, \end{equation} and when $\Omega=\BBR^n_+$, \begin{equation} \nabla_T f(Q)=\lim_{x\to Q,\,x\in\Gamma(Q)}\tilde \nabla_Tu(x),\qquad\text{for a.e. 
}Q\in\partial\Omega.
\end{equation}
\vskip2mm
Let us make some observations that explain the structural assumptions we have made in Theorems \ref{S3:T0} and \ref{S3:T1}. As we have already stated, it suffices to formulate the result in the case $\Omega={\mathbb R}^n_+$ by using the pull-back map introduced above. Since Theorem \ref{S3:T1} requires that the coefficients have {\it small} Carleson norm, this puts a restriction on the size of the Lipschitz constant $L=\|\nabla\phi\|_{L^\infty}$ of the map $\phi$ that defines the domain $\Omega$ in \eqref{Omega-111}. The constant $L$ will also have to be small (depending on $\lambda_p$, $\Lambda$, $n$ and $p$).
\vglue1mm
For technical reasons in the proof we also need all coefficients $A_{0j}$, $j=0,1,\dots,n-1$, to be real. This can be ensured as follows. When $j>0$ observe that we have
\begin{equation}
\partial_0([\mathscr{I}m\,A_{0j}]\partial_j u)=\partial_j([\mathscr{I}m\,A_{0j}]\partial_0 u)+(\partial_0 [\mathscr{I}m\,A_{0j}])\partial_ju-([\partial_j \mathscr{I}m\,A_{0j}])\partial_0u\label{eqSWAP}
\end{equation}
which allows us to move the imaginary part of the coefficient $A_{0j}$ onto the coefficient $A_{j0}$ at the expense of two (harmless) first order terms. This does not work for the coefficient $A_{00}$. Instead we make the following observation. Suppose that the measure \eqref{CarA} associated with an operator $ \mathcal L = \partial_{i}\left(A_{ij}(x)\partial_{j}\right) +B_{i}(x)\partial_{i}$ is Carleson. Consider a related operator $ \tilde{\mathcal L} = \partial_{i}\left(\tilde{A}_{ij}(x)\partial_{j}\right) +\tilde{B}_{i}(x)\partial_{i}$, where $\tilde A = \alpha{A}$ and $\tilde{B}_j=\alpha{B}_j - (\partial_{i}\alpha){A}_{ij}$, and $\alpha\in L^\infty(\Omega)$ is a complex valued function such that $|\alpha(x)|\ge \alpha_0>0$ and $|\nabla\alpha|^2x_0\,dx$ is a Carleson measure.
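The relation between the two operators is a direct consequence of the product rule. Writing the first-order coefficients componentwise as $\tilde{B}_j=\alpha B_j-(\partial_i\alpha)A_{ij}$, we have, at least formally,
\begin{align*}
\tilde{\mathcal L}u
&=\partial_i\big(\alpha A_{ij}\partial_ju\big)+\tilde{B}_j\partial_ju\\
&=\alpha\,\partial_i\big(A_{ij}\partial_ju\big)+(\partial_i\alpha)A_{ij}\partial_ju
+\alpha B_j\partial_ju-(\partial_i\alpha)A_{ij}\partial_ju
=\alpha\,{\mathcal L}u.
\end{align*}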
Observe that a weak solution $u$ to $\tilde{\mathcal L}u=0$ is also a weak solution to $\mathcal Lu=0$, and that the new coefficients $\tilde A$ and $\tilde B$ also satisfy a Carleson measure condition as in \eqref{CarA}, by the assumption on $\alpha$. We will only require that the coefficient ${\tilde A}_{00}$ is real, but we may as well ensure for simplicity that it equals $1$. Clearly, if we choose $\alpha = {A}_{00}^{-1}$, then the new operator $\tilde{\mathcal L}$ will have this property. When ${A}_{00}$ (and hence $\alpha$) is real, then $\tilde{A}$ remains $p$-elliptic whenever $A$ is. Similarly, if ${A}$ is $p$-elliptic and $\mathscr{I}m\,{A}_{00}$ is sufficiently small (depending on the ellipticity constants), then $\tilde A$ will also be $p$-elliptic. However, if $\mathscr{I}m\,\alpha$ is not small, the $p$-ellipticity of ${A}$ may not be preserved after multiplication by $\alpha$. Thus, we assume in our main results (Theorems \ref{S3:T0} and \ref{S3:T1}) the $p$-ellipticity of the new matrix $\tilde A$, which has all coefficients $\tilde A_{0j}$, $j=0,1,\dots,n-1$, real, as this is not implied in the general case by the $p$-ellipticity of the original matrix $A$.
\section{Bounds for the nontangential maximal function by the square function}
\label{SS:43}
We work on $\Omega=\BBR^n_+$ and we assume that the matrix $A$ is $p$-elliptic. Our aim in this section is to establish bounds for the nontangential maximal function by the square function. The approach necessarily differs from the usual argument in the real scalar elliptic case, since certain estimates, such as interior H\"older regularity of a weak solution, are unavailable in the complex coefficient case. Here we deviate from the approach taken in \cite{DPcplx}, where we worked with the $p$-adapted square function, and instead focus on estimates for the usual square function. Our approach is similar to \cite{DHM} for elliptic systems and, when possible, we refer to results from there.
The major innovation from \cite{DHM} is the use of an entire family of Lipschitz graphs on which the nontangential maximal function is large, in lieu of a single graph constructed via a stopping time argument. This is necessary as we are using $L^2$ averages of solutions to define the nontangential maximal function, and hence the knowledge of certain bounds for a solution on a single graph provides no information about the $L^2$ averages over interior balls. Let $u$ be an energy solution to
$${\mathcal L}u=\partial_i(A_{ij}\partial_ju)=0,\qquad \mbox{in }\Omega={\mathbb R}^n_+.$$
Let $v=\nabla u$, that is, $v_k=\partial_ku$, $k=0,1,\dots,n-1$. Let $w$ be the $L^2$ average of $v$, that is,
\begin{equation}\label{w_new}
w(x):=\left(\Xint-_{B_{\delta(x)/2}(x)}|v(z)|^{2}\,dz\right)^{1/2}.
\end{equation}
Set
\begin{equation}\label{E}
E_{\nu,a}:=\big\{x'\in\partial\Omega:\,N_{a}(w)(x')>\nu\big\}
\end{equation}
(where, as usual, $a>0$ is a fixed background parameter), and consider the map $h_{\nu,a}(w):\partial\Omega\to\BBR$ given at each $x'\in\partial\Omega$ by
\begin{equation}\label{h}
h_{\nu,a}(w)(x'):=\inf\left\{x_0>0:\,\sup_{z\in\Gamma_{a}(x_0,x')}w(z)<\nu\right\}
\end{equation}
with the convention that $\inf\varnothing=\infty$. We remark that $h_{\nu,a}(w)$ differs somewhat from the function that has been used in the argument for scalar equations (cf. \cite[pp.\,212]{KP01} and \cite{KKPT}). At this point we note that $h_{\nu,a}(w)(x')<\infty$ for all points $x'\in\partial\Omega$. Indeed, since $u\in \dot{W}^{1,2}(\mathbb R^n_+;\mathbb C)$ it follows that $v\in L^2(\mathbb R^n_+;\mathbb C^n)$; thus $w$, being an $L^2$ average of $v$, is continuous on the upper half-space and decays to zero as $x_0\to\infty$, and so $h_{\nu,a}(w)$ is finite everywhere. We look at some further properties of this function. As in \cite{DHM} we have the following (with identical proof).
\begin{lemma}\label{S3:L5} Let $w$ be as in \eqref{w_new}. Also, fix two positive numbers $\nu,a$. Then the following properties hold.
\vglue2mm \noindent (i) The function $h_{\nu, a}(w)$ is Lipschitz, with a Lipschitz constant $1/a$. That is, \begin{equation}\label{Eqqq-5} \left|h_{\nu,a}(w)(x')-h_{\nu,a}(w)(y')\right|\leq a^{-1}|x'-y'| \end{equation} for all $x',y'\in\partial\Omega$. \vglue2mm \noindent (ii) Given an arbitrary $x'\in E_{\nu,a}$, let $x_0:=h_{\nu,a}(w)(x')$. Then there exists a point $y=(y_0,y')\in\partial\Gamma_{a}(x_0,x')$ such that $w(y)=\nu$ and $h_{\nu,a}(w)(y')=y_0$. \end{lemma} We also have (as in \cite{DHM}) by an identical argument: \begin{lemma}\label{l6} Let $v,\,w$ be as above. For any $a>0$ there exists $b=b(a)>a$ and $\gamma=\gamma(a)>0$ such that the following holds. Having fixed an arbitrary $\nu>0$, for each point $x'$ from the set \begin{equation}\label{Eqqq-17} \big\{x':\,N_{a}(w)(x')>\nu\mbox{ and }S_{b}(v)(x')\leq\gamma\nu\big\} \end{equation} there exists a boundary ball $R$ with $x'\in 2R$ and such that \begin{equation}\label{Eqqq-18} \big|w\big(h_{\nu,a}(w)(z'),z'\big)\big|>\nu/{2}\,\,\text{ for all }\,\,z'\in R. \end{equation} Here $S_b=S_{2,b}$ is the usual square function of $v=\nabla u$ associated with nontangential cones $\Gamma_b(.)$. \end{lemma} Given a Lipschitz function $h:{\mathbb{R}}^{n-1}\to{\mathbb{R}}$, denote by $M_h$ the Hardy-Littlewood maximal function considered on the graph of $h$. That is, given any locally integrable function $f$ on the Lipschitz surface $\Lambda_h=\{(h(z'),z'):\,z'\in\BBR^{n-1}\}$, define $(M_h f)(x):=\sup_{r>0}\Xint-_{\Lambda_h\cap B_r(x)}|f|\,d\sigma$ for each $x\in\Lambda_h$. \begin{corollary}\label{S3:L6} Let $v,w$ be defined as above and let $a>0$ be fixed. Associated with these, let $b,\,\gamma$ be as in Lemma~\ref{l6}. Then there exists a finite constant $C=C(n,p)>0$ with the property that for any $\nu>0$ and any point $x'\in E_{\nu,a}$ such that $S_{b}(v)(x')\leq\gamma\nu$ one has \begin{equation}\label{Eqqq-23} (M_{h_{\nu,a}}w)\big(h_{\nu,a}(x'),x'\big)\geq\,C\nu. 
\end{equation}
\end{corollary}
The following lemma requires a modified proof which we include below.
\begin{lemma}\label{S3:L8} Consider the equation ${\mathcal Lu}=0$ with coefficients satisfying the assumptions of Theorem~\ref{S3:T1}, let $v=\nabla u$ and let $w$ be defined by \eqref{w_new}. Then there exists $a>0$ with the following significance. Select $\theta\in[1/6,6]$ and, having picked $\nu>0$ arbitrary, let $h_{\nu,a}(w)$ be as in \eqref{h}. Also, consider the domain $\mathcal{O}=\{(x_0,x')\in\Omega:\,x_0>\theta h_{\nu,a}(x')\}$ with boundary $\partial\mathcal{O}=\{(x_0,x')\in\Omega:\,x_0=\theta h_{\nu,a}(x')\}$. In this context, for any surface ball $\Delta_r=B_r(Q)\cap\partial\Omega$, with $Q\in\partial\Omega$ and $r>0$ chosen such that $h_{\nu,a}(w)\leq 2r$ pointwise on $\Delta_{2r}$, one has
\begin{align}\label{TTBBMM}
\int_{\Delta_r}\big|v\big(\theta h_{\nu,a}(w)(\cdot),\cdot\big)\big|^2\,dx' &\leq C(1+\|\mu\|^{1/2}_{\mathcal C})\|S_{b}(v)\|_{L^p(\Delta_{2r})} \|{N}_{2,a}(w)\|_{L^p(\Delta_{2r})} \nonumber\\
&\hskip-3cm +C\|\mu\|_{\mathcal C}^{1/2}\|{N}_{2,a}(w)\|^2_{L^p(\Delta_{2r})}+C\|S_{b}(v)\|^2_{L^p(\Delta_{2r})}+\frac{c}{r}\iint_{\mathcal{K}}|v|^{2}\,dX.
\end{align}
Here $C=C(\Lambda,p,n)\in(0,\infty)$ and $\mathcal{K}$ is a region inside $\mathcal{O}$ whose diameter, distance to the boundary $\partial\mathcal{O}$, and distance to $Q$ are all comparable to $r$. Also, the parameter $b>a$ is as in Lemma~\ref{l6}, and the cones used to define the square and nontangential maximal functions in this lemma have vertices on $\partial\Omega$. Moreover, the term $\displaystyle\iint_{\mathcal{K}}|v|^2\,dX$ appearing in \eqref{TTBBMM} may be replaced by the quantity
\begin{equation}\label{Eqqq-25}
Cr^{n-1}|\tilde{v}(A_r)|^2+C\int_{\Delta_{2r}}S^2_{b}(v)\,d\sigma,
\end{equation}
where $A_r$ is any point inside $\mathcal{K}$ (usually called a corkscrew point of $\Delta_r$) and
\begin{equation}\label{Eqqq-26}
\tilde{v}(X):=\Xint-_{B_{\delta(X)/2}(X)}v(Z)\,dZ.
\end{equation}
Finally, \eqref{TTBBMM} and \eqref{Eqqq-25} remain true even if $v$ is replaced by $v-v_0$ for any fixed $v_0\in{\mathbb C}^n$.
\end{lemma}
\begin{proof} Fix $\theta\in [1/6,6]$. Consider the well-known pullback transformation $\rho:\BBR^{n}_{+}\to\mathcal{O}$ appearing in works of Dahlberg, Ne\v{c}as, Kenig-Stein and others, defined by
\begin{equation}\label{E:rho}
\rho(x_0, x'):=\big(x_0+P_{\gamma x_0}\ast\phi(x'),x'\big), \qquad\forall\,(x_0,x')\in\mathbb{R}^{n}_{+},
\end{equation}
for some positive constant $\gamma$. Here $\phi$ is a Lipschitz function describing the boundary $\partial\mathcal O$, $P$ is a nonnegative function $P\in C_{0}^{\infty}(\mathbb{R}^{n-1})$ and, for each $\lambda>0$,
\begin{equation}\label{PPP-1a}
P_{\lambda}(x'):=\lambda^{-n+1}P(x'/\lambda),\qquad\forall\,x'\in{\mathbb{R}}^{n-1}.
\end{equation}
Finally, $P_{\lambda}\ast\phi(x')$ is the convolution
\begin{equation}\label{PPP-lambda}
P_{\lambda}\ast\phi(x'):=\int_{\mathbb{R}^{n-1}}P_{\lambda}(x'-y')\phi(y')\,dy'.
\end{equation}
Observe that $\rho$ extends up to the boundary of ${\BBR}^{n}_{+}$ and maps $\partial {\BBR}^{n}_{+}$ one-to-one onto $\partial\mathcal O$. Also, for sufficiently small $\gamma$ (depending on $L$) the map $\rho$ is a bijection from $\overline{\mathbb{R}^{n}_{+}}$ onto $\overline{\mathcal O}$ and, hence, invertible. For a solution $u\in W^{1,2}_{loc}(\mathcal O;\BBC)$ to $\mathcal{L}u=0$ in $\mathcal O$ with Dirichlet datum $f$, consider $\tilde{u}:=u\circ\rho$ and $\tilde{f}:=f\circ\rho$. The change of variables via the map $\rho$ just described implies that $\tilde{u}\in W^{1,2}_{loc}(\mathbb{R}^{n}_{+};{\BBC})$ solves a new elliptic PDE of the form
\begin{equation}\label{ESv}
\partial_{i}\left(\tilde{A}_{ij}(x)\partial_{j}{\tilde{u}}\right) =0,
\end{equation}
with boundary datum $\tilde{f}$ on $\partial \mathbb{R}^{n}_{+}$.
Hence, solving a boundary value problem for $u$ in $\Omega$ is equivalent to solving a related boundary value problem for $\tilde{u}$ in $\mathbb{R}^{n}_{+}$. Crucially, if the coefficients of the original system are such that \eqref{CarA} is a Carleson measure, then the coefficients of $\tilde{A}$ satisfy an analogous Carleson condition in the upper half-space. If, in addition, the Carleson norm of \eqref{CarA} is small and $L$ (the Lipschitz constant for the domain $\Omega$) is also small, then the Carleson norm for the new coefficients $\tilde{A}$,
\begin{equation}\label{CarbarA}
d\tilde{\mu}(x)=\left(\sup_{B_{\delta(x)/2}(x)}|\nabla\tilde{A}|\right)^{2}\delta(x) \,dx,
\end{equation}
will be correspondingly small and will only depend on the Carleson norm of the original coefficients and the Lipschitz norm of the function $h_{\nu,a}$. When the Lipschitz norm of this function goes to zero we have
$$\limsup \|\tilde\mu\|_{\mathcal{C}}\le \|\mu\|_{\mathcal{C}},$$
and hence the parameter $a>0$ may be chosen large enough so that the Lipschitz norm of the function $\theta h_{\nu,a}$ is sufficiently small (at most $6/a$) to guarantee $\|\tilde\mu\|_{\mathcal{C}}\le 2\|\mu\|_{\mathcal{C}}$. Moreover, this transformation also preserves ellipticity.
\vskip1mm
Having fixed a scale $r>0$, we localize to a ball $B_r(y')$ in $\BBR^{n-1}$. Let $\zeta$ be a smooth cutoff function of the form $\zeta(x_0, x')=\zeta_{0}(x_0)\zeta_{1}(x')$ where
\begin{equation}\label{Eqqq-27}
\zeta_{0}=
\begin{cases}
1 & \text{ in } [0, r],
\\
0 & \text{ in } [2r, \infty),
\end{cases}
\qquad
\zeta_{1}=
\begin{cases}
1 & \text{ in } B_{r}(y'),
\\
0 & \text{ in } \mathbb{R}^{n}\setminus B_{2r}(y')
\end{cases}
\end{equation}
and
\begin{equation}\label{Eqqq-28}
r|\partial_{0}\zeta_{0}|+r|\nabla_{x'}\zeta_{1}|\leq c
\end{equation}
for some constant $c\in(0,\infty)$ independent of $r$. Our goal is to control the $L^2$ norm of $\nabla u\big(\theta h_{\nu,a}(w)(\cdot),\cdot\big)$.
Since after the pullback under the mapping $\rho$ the latter is comparable with the $L^2$ norm of $\nabla \tilde{u}(0,\cdot)$, we proceed to estimate this quantity. Clearly, if we establish estimate \eqref{TTBBMM} for $\nabla \tilde{u}$ on $\Delta_r\subset {\partial{\mathbb R}^n_+}$ it would imply the original estimate for $\nabla u$ on the graph of $\theta h_{\nu,a}$. Hence, let $\tilde{v}=\nabla\tilde{u}$. For $\tilde{v}_k$ where $k=1,2,\dots,n-1$ we have \begin{align}\label{u6tg} &\hskip -0.20in \int_{B_{2r}(y')}|\tilde{v}_k|^{2}(0,x')\zeta(0,x')\,dx' \nonumber\\[4pt] &\hskip 0.70in =-\iint_{[0,2r]\times B_{2r}(y')}\partial_{0}\left[|\tilde{v}_k|^{2}\zeta\right](x_0,x')\,dx_0\,dx' \nonumber\\[4pt] &\hskip 0.70in =-2\iint_{[0,2r]\times B_{2r}(y')}\mathscr{R}e\,\langle \tilde{v}_k,\partial_0 \tilde{v}_k\rangle\zeta\,dx_0\,dx' \nonumber\\[4pt] &\hskip 0.70in \quad-\iint_{[0,2r]\times B_{2r}(y')}|\tilde{v}_k|^{2}(x_0,x')\partial_{0}\zeta\,dx_0\,dx' \nonumber\\[4pt] &\hskip 0.70in =:\mathcal{A}+IV. \end{align} We further expand the term $\mathcal A$ as a sum of three terms obtained via integration by parts with respect to $x_0$ as follows: \begin{align}\label{utAA} \mathcal A &=-2\iint_{[0,2r]\times B_{2r}(y')}\mathscr{R}e\,\langle \tilde{v}_k,\partial_0 \tilde{v}_k\rangle\zeta(\partial_0x_0)\,dx_0\,dx' \nonumber\\[4pt] &=\quad 2\iint_{[0,2r]\times B_{2r}(y')}\left|\partial_{0}\tilde{v}_k\right|^{2}x_0\zeta\,dx_0\,dx' \nonumber\\[4pt] &\quad +2\iint_{[0,2r]\times B_{2r}(y')}\mathscr{R}e\,\langle \tilde{v}_k,\partial^2_{00} \tilde{v}_k\rangle x_0\zeta\,dx_0\,dx' \nonumber\\[4pt] &\quad +2\iint_{[0,2r]\times B_{2r}(y')}\mathscr{R}e\,\langle \tilde{v}_k,\partial_0 \tilde{v}_k\rangle x_0\partial_{0}\zeta\,dx_0\,dx' \nonumber\\[4pt] &=:I+II+III. \end{align} We start by analyzing the term $II$. We write $\partial^2_0\tilde{v}_k=\partial_k\partial_0\tilde{v}_0$ and integrate by parts moving the $\partial_k$ derivative. 
This gives
\begin{align}\label{utAAA}
II &=2\iint_{[0,2r]\times B_{2r}(y')}\mathscr{R}e\,\langle \tilde{v}_k,\partial_k\partial_0\tilde{v}_0\rangle x_0\zeta\,dx_0\,dx' \nonumber\\[4pt]
&=-2\iint_{[0,2r]\times B_{2r}(y')}\mathscr{R}e\,\langle \partial_k\tilde{v}_k,\partial_0\tilde{v}_0\rangle x_0\zeta\,dx_0\,dx' \nonumber\\[4pt]
&\quad -2\iint_{[0,2r]\times B_{2r}(y')}\mathscr{R}e\,\langle \tilde{v}_0,\partial_k \tilde{v}_k\rangle x_0\partial_{0}\zeta\,dx_0\,dx' \nonumber\\[4pt]
&=II_1+II_2.
\end{align}
We now group together terms that are of the same type. Firstly, we have
\begin{equation}\label{Eqqq-29}
I+II_1\leq C(\Lambda,n)\|S_{b}(v)\|^2_{L^2(B_{2r})}.
\end{equation}
Here the estimate holds even with the truncated square function $\|S^{2r}_{b}(\tilde{v})\|^2_{L^2(B_{2r})}$, which is dominated by $\|S_{b}(v)\|^2_{L^2(B_{2r})}$ since $S^{2r}_{b}(\tilde{v})$ is pointwise controlled by $S_{b}(v)$. Next, corresponding to the case when the derivative falls on the cutoff function $\zeta$, we have
\begin{align}\label{TDWW}
II_2+III &\leq C(\Lambda,n)\iint_{[0,2r]\times B_{2r}}\left|\nabla \tilde{v}\right||\tilde{v}|\frac{x_0}{r}\,dx_0\,dx' \nonumber\\[4pt]
&\leq C(\Lambda,n)\left(\iint_{[0,2r]\times B_{2r}}|\tilde{v}|^{2}\frac{x_0}{r^{2}}\,dx_0\,dx'\right)^{1/2} \|S^{2r}_{b}(\tilde{v})\|_{L^2(B_{2r})} \nonumber\\[4pt]
&\leq C(\Lambda,n)\|S_{b}(v)\|_{L^p(B_{2r})}\|{N}_{a}(w)\|_{L^p(B_{2r})}.
\end{align}
Finally, the interior term $IV$, which arises from the fact that $\partial_{0}\zeta$ vanishes on the set $(0,r)\cup(2r,\infty)$, may be estimated as follows:
\begin{equation}\label{Eqqq-31}
IV\leq\frac{c}{r}\iint_{[r,2r]\times B_{2r}}|v|^{2}\,dx_0\,dx'.
\end{equation}
Summing up all terms, the above analysis ultimately yields
\begin{align}\label{E1:uonh}
&\hskip -0.20in \int_{B_{r}(y')}|\nabla_T\tilde{u}(0,x')|^2\,dx' \nonumber\\[4pt]
&\hskip 0.40in \leq C(\Lambda,n)(1+\|\mu\|^{1/2}_{\mathcal C}) \|S_{b}(v)\|_{L^p(B_{2r})}\|{N}_a(w)\|_{L^p(B_{2r})} \nonumber\\[4pt]
&\hskip 0.40in \quad+C(\Lambda,n)\|S_{b}(v)\|^2_{L^p(B_{2r})} +\frac{c}{r}\iint_{[r,2r]\times B_{2r}}|v|^2\,dx_0\,dx'.
\end{align}
Observe also that we could have carried out the whole calculation with a constant subtracted from $\tilde{v}_k$ without any substantial modifications. It remains to consider the derivative in the direction transversal to the boundary. Instead of $\tilde{v}_0=\partial_0\tilde{u}$ it is more convenient to work with
$$H=\sum_{j=0}^{n-1}\tilde{A}_{0j}\tilde{v}_j,$$
which will give us the desired bound since
\begin{align}\label{u6tg-otoh}
& \int_{B_{2r}(y')}|\tilde{v}_0|^{2}(0,x')\zeta(0,x')\,dx' \approx \int_{B_{2r}(y')}|\tilde{A}_{00}\tilde{v}_0(0,x')|^{2}\zeta(0,x')\,dx' \nonumber\\
& \le n\left[\int_{B_{2r}(y')}|H(0,x')|^{2}\zeta(0,x')\,dx'+\sum_{j>0}\int_{B_{2r}(y')}|\tilde{A}_{0j}\tilde{v}_j(0,x')|^{2}\zeta(0,x')\,dx' \right] \nonumber\\
& \le n\int_{B_{2r}(y')}|H(0,x')|^{2}\zeta(0,x')\,dx'+C(n,\Lambda) \int_{B_{r}(y')}|\nabla_T\tilde{u}(0,x')|^2\,dx' .
\end{align}
The second term is under control thanks to \eqref{E1:uonh}. We deal with the first term now. A calculation similar to \eqref{u6tg}-\eqref{utAA} gives us
\begin{align}\label{u6tg-x}
&\hskip -0.20in \int_{B_{2r}(y')}|H|^{2}(0,x')\zeta(0,x')\,dx' \nonumber\\[4pt]
&\hskip 0.70in =-2\iint_{[0,2r]\times B_{2r}(y')}\mathscr{R}e\,\langle H,\partial_0 H\rangle\zeta\,dx_0\,dx' \nonumber\\[4pt]
&\hskip 0.70in \quad-\iint_{[0,2r]\times B_{2r}(y')}|H|^{2}(x_0,x')\partial_{0}\zeta\,dx_0\,dx'.
\end{align}
The second term admits an estimate similar to \eqref{Eqqq-31}.
For the first term we use the fact that ${\tilde{\mathcal L}\tilde{u}}=0$, which implies that
$$\partial_0 H=-\sum_{i>0}\partial_i(\tilde{A}_{ij}\tilde{v}_j).$$
It follows that
\begin{align}\label{u6tg-xx}
&\hskip -0.20in -2\iint_{[0,2r]\times B_{2r}(y')}\mathscr{R}e\,\langle H,\partial_0 H\rangle\zeta\,dx_0\,dx' \nonumber\\[4pt]
&\hskip 0.20in =2\sum_{i>0}\iint_{[0,2r]\times B_{2r}(y')}\mathscr{R}e\,\langle H,\partial_i (\tilde{A}_{ij}\tilde{v}_j)\rangle\zeta(\partial_0x_0)\,dx_0\,dx' \nonumber\\[4pt]
&\hskip 0.20in =-2\sum_{i>0}\iint_{[0,2r]\times B_{2r}(y')}\mathscr{R}e\,\langle\partial_0 H,\partial_i (\tilde{A}_{ij}\tilde{v}_j)\rangle\zeta x_0\,dx_0\,dx' \nonumber\\[4pt]
&\hskip 0.20in \quad+2\sum_{i>0}\iint_{[0,2r]\times B_{2r}(y')}\mathscr{R}e\,\langle\partial_i H,\partial_0 (\tilde{A}_{ij}\tilde{v}_j)\rangle\zeta x_0\,dx_0\,dx' \nonumber\\[4pt]
&\hskip 0.20in \quad-2\sum_{i>0}\iint_{[0,2r]\times B_{2r}(y')}\mathscr{R}e\,\langle H,\partial_i (\tilde{A}_{ij}\tilde{v}_j)\rangle(\partial_0\zeta) x_0\,dx_0\,dx' \nonumber\\[4pt]
&\hskip 0.20in \quad+2\sum_{i>0}\iint_{[0,2r]\times B_{2r}(y')}\mathscr{R}e\,\langle H,\partial_0 (\tilde{A}_{ij}\tilde{v}_j)\rangle(\partial_i\zeta) x_0\,dx_0\,dx'.
\end{align}
We analyse this term by term. In the last two terms, if the derivative falls on $\tilde{v}_j$, these terms are of the same nature as \eqref{TDWW} and are handled identically. When the derivative falls on the coefficients, these terms are bounded by
$$\iint_{[0,2r]\times B_{2r}(y')}|\tilde{v}|^2|\nabla \tilde{A}|\frac{x_0}r\,dx_0\,dx'\lesssim \|\mu\|^{1/2}_{\mathcal C}\|N_a(w)\|^2_{L^2},$$
where we have used the Cauchy-Schwarz inequality and the Carleson condition.
The first two terms on the right-hand side of \eqref{u6tg-xx} will give us the square function of $\tilde{v}$ when both derivatives fall on $\tilde{v}$, a mixed term like \eqref{TDWW}, or finally, when both derivatives hit the coefficients, terms bounded from above by $$\iint_{[0,2r]\times B_{2r}(y')}|\tilde{v}|^2|\nabla \tilde{A}|^2x_0\,dx_0\,dx'\lesssim \|\mu\|_{\mathcal C}\|N_a(w)\|^2_{L^2}.$$ With this in hand, the estimate in \eqref{TTBBMM} follows (by passing from $\tilde{v}$ back to $v=\nabla u$ via the map $\rho$). Finally, the claim that the term \eqref{Eqqq-25} can be used in the statement of the lemma follows from the Poincar\'e inequality. See \cite{DHM} for the details. \end{proof} Finally, using all the lemmas above we can establish the following local good-$\lambda$ inequality. We omit the proof as the argument is the same as in \cite{DHM}. \begin{lemma}\label{LGL-loc} Consider the equation ${\mathcal Lu}=0$ with coefficients satisfying the assumptions of Theorem~\ref{S3:T1}. Consider any boundary ball $\Delta_d=\Delta_d(Q)\subset {\mathbb R}^{n-1}$, let $A_d=(d/2,Q)$ be its corkscrew point and let \begin{equation} \nu_0=\left(\Xint-_{B_{d/4}(A_d)}|\nabla u(z)|^2\,dz\right)^{1/2}. \end{equation} Then for each $\gamma\in(0,1)$ there exists a constant $C(\gamma)>0$ such that $C(\gamma)\to 0$ as $\gamma\to 0$, with the property that for each $\nu>2\nu_0$ and each energy solution $u$ of ${\mathcal Lu}=0$ there holds \begin{align}\label{eq:gl2} &\hskip -0.20in \Big|\Big\{x'\in {\BBR}^{n-1}:\,\tilde{N}_a(\nabla u\chi_{T(\Delta_d)})>\nu,\,(M(S^2_b(\nabla u)))^{1/2}\leq\gamma\nu, \nonumber\\[4pt] &\hskip 0in \big(M(S^2_b(\nabla u))M(\tilde{N}_a^2(\nabla u\chi_{T(\Delta_d)}))\big)^{1/4}\leq\gamma\nu,\, \, \big(M(\|\mu\|_{\mathcal C}^{1/2}\tilde{N}_a^2(\nabla u\chi_{T(\Delta_d)}))\big)^{1/2}\le\gamma\nu\Big\}\Big| \nonumber\\[4pt] &\hskip 0.50in \quad\le C(\gamma)\left|\big\{x'\in{\BBR}^{n-1}:\,\tilde{N}_a(\nabla u\chi_{T(\Delta_d)})(x')>\nu/32\big\}\right|.
\end{align} Here $\chi_{T(\Delta_d)}$ is the indicator function of the Carleson region $T(\Delta_d)$ and the square function $S_b$ in \eqref{eq:gl2} is truncated at the height $2d$. Similarly, the Hardy-Littlewood maximal operator $M$ is only considered over all balls $\Delta'\subset\Delta_{md}$ for some enlargement constant $m=m(a)\ge 2$. \end{lemma} Finally we have the following. \begin{proposition}\label{S3:C7} Consider the equation ${\mathcal Lu}=0$ in $\Omega=\BBR^n_{+}$ with coefficients satisfying the assumptions of Theorem~\ref{S3:T1}. Then for any $p>0$ and $a>0$ there exists an integer $m=m(a)\ge 2$ and finite constants $K=K(n,\lambda,\Lambda,p,a)>0$, $C=C(n,\lambda,\Lambda,p,a)>0$ such that if $$\|\mu\|_{\mathcal C}< K,$$ then for all balls $\Delta_d\subset{\mathbb R}^{n-1}$ we have \begin{equation}\label{S3:C7:E00ooloc} \|\tilde{N}^r_a(\nabla u)\|_{L^{p}(\Delta_d)}\le C\|S^{2r}_a(\nabla u)\|_{L^{p}(\Delta_{md})}+Cd^{(n-1)/p}|\widetilde{\nabla u}(A_d)|, \end{equation} where $A_d$ denotes the corkscrew point of the ball $\Delta_d$ and $\widetilde{\nabla u}$ is as in \eqref{Eqqq-26}. We also have a global estimate for any $p>0$ and $a>0$. Under the same assumptions as above (and the extra a priori assumption $\|\tilde{N}_a(\nabla u)\|_{L^{p}({\BBR}^{n-1})}<\infty$ when $p< 2$) there exists a finite constant $C=C(n,\lambda,\Lambda,p,a)>0$ such that \begin{equation}\label{S3:C7:E00oo} \|\tilde{N}_a(\nabla u)\|_{L^{p}({\BBR}^{n-1})}\le C\|S_a(\nabla u)\|_{L^{p}({\BBR}^{n-1})}. \end{equation} \end{proposition} \begin{proof} When $p>2$, \eqref{S3:C7:E00ooloc} follows immediately by a standard argument (multiplying the good-$\lambda$ inequality \eqref{eq:gl2} by $\nu^{p-1}$ and integrating in $\nu$ over the interval $(2\nu_0,\infty)$).
Note that the fact that the square function $S^{2r}_a$ is only integrated over some enlargement of $\Delta_d$ instead of the whole ${\mathbb R}^{n-1}$ follows from the fact that the set $\big\{x'\in{\BBR}^{n-1}:\,\tilde{N}_a(\nabla u\chi_{T(\Delta_d)})(x')>\nu/32\big\}$ on the right-hand side of \eqref{eq:gl2} vanishes outside a ball of diameter comparable to $\Delta_d$. For this reason the maximal operators $M$ in \eqref{eq:gl2} can be restricted to such an enlarged ball $\Delta_{md}$. The condition $\|\mu\|_{\mathcal C}< K$ comes from the presence of the term\newline $(M(\|\mu\|_{\mathcal C}^{1/2}\tilde{N}^2_a(\nabla u\chi_{T(\Delta_d)})))^{1/2}\le\gamma\nu$ in the good-$\lambda$ inequality. The argument that shows \eqref{S3:C7:E00ooloc} for all $p>0$ can be found in \cite{FSt}. The local estimate \eqref{S3:C7:E00ooloc} for $p>2$ is the necessary ingredient for what is otherwise a purely real-variable argument. Further details can be found in \cite{FSt}. Finally, taking the limit $d\to\infty$ yields \eqref{S3:C7:E00oo}. The additional assumption $\|\tilde{N}_a(\nabla u)\|_{L^{p}({\BBR}^{n-1})}<\infty$ when $p< 2$ comes into play in order to guarantee that the term $d^{(n-1)/p}|\widetilde{\nabla u}(A_d)|$ in \eqref{S3:C7:E00ooloc} converges to zero as $d\to\infty$. \end{proof} \section{Estimates for the $p$-adapted square function.} \label{S4} Let $\Omega=\BBR^n_+$ and assume $u$ is a weak solution of ${\mathcal L}u=0$ where \begin{equation} \mathcal Lu = \partial_{i}\left(A_{ij}(x)\partial_{j}u\right) +B_{i}(x)\partial_{i}u\label{eq-zoncolan} \end{equation} with the Dirichlet boundary datum $f\in \dot{B}^{2,2}_{1/2}(\partial\Omega;{\BBC}) \cap \dot{W}^{1,p}(\partial \Omega;{\BBC})$. Assume that $A$ is $p$-elliptic and smooth in $\BBR^n_+$ with $A_{00} =1$ and $A_{0j}$ real, and that the measure $\mu$ defined as in \eqref{Car_hatAA} is Carleson.
\begin{comment} Then there exists a constant $C=C(\lambda_p,\Lambda,p,n)$ such that for all $r>0$ \begin{align}\label{S3:L4:E00} &\hskip 0.20in \int_{\partial\Omega} \left[S^{r/2}_p(\nabla_T u)\right]^p\,dx'= \sum_{k=1}^{n-1}\iint_{[0,r/2]\times\partial\Omega}|\nabla \partial_ku|^{2}|\partial_k u|^{p-2}x_0\,dx'\,d x_0 \nonumber\\[4pt] & \leq C\left(\int_{\partial\Omega}|\nabla_T u(0,x')|^{p}\,dx'+\int_{\partial\Omega}|\nabla_T u(r,x')|^{p}\,dx' +\|\mu\|_{\mathcal{C}}\int_{\partial\Omega}\left[\tilde{N}^{r}_{p,a}(\nabla u)\right]^{p}\,dx'\right). \end{align} Furthermore, under the same assumptions, if $g:{\mathbb R}^{n-1}\to{\mathbb R}^+$ is a Lipschitz function with small Lipschitz norm for any $\Delta\subset{\mathbb R}^{n-1}$ such that $\sup_\Delta g\le d/2$ where $d=\mbox{diam}(\Delta)$ we also have the following local estimate \begin{equation}\label{eq5.15} \iint_{\Omega_g\cap T(\Delta)}|\nabla w_k|^{2}|u|^{p-2}\delta_g(x)\,dx \leq C\int_{2\Delta}\left(|w_k(g(x'),x')|^p+(1+\|\mu\|_{\mathcal{C}})\left[\tilde{N}^{2d}_{p,a,g}(w_k)\right]^{p}\right)\,dx'. \end{equation} Here $\tilde{N}^{2d}_{p,a,g}$ is the truncated version of the nontangential maximal function defined in \eqref{eq-Nh} with respect to the domain $\Omega_g=\{x_0>g(x')\}$ and $\delta_g$ measures the distance of a point to the boundary of $\Omega_g$. \end{lemma} \begin{lemma}\label{S5:C3} Under the assumptions of Lemma \ref{S3:L4}, for any energy solution $u$ of \eqref{E:D} we have for any $x' \in\mathbb R^{n-1}$ and $r>0$ \begin{equation}\label{eq5.15} \int_{B_r(x')}\left[S_p^{r/2}(\nabla_T u)\right]^p\,dx' \leq C_p(1+\|\mu\|_{\mathcal{C}})\int_{B_{2r}(x')}\left[\tilde{N}^r_{p}(\nabla u)\right]^{p}\,dx'. \end{equation} \end{lemma} \end{comment} Fix an arbitrary $y'\in\partial\Omega\equiv{\mathbb{R}}^{n-1}$ and consider $\Delta=\Delta_r(y')$, a ball of radius $r$ in ${\mathbb{R}}^{n-1}$ centered at $y'$.
Pick a smooth cutoff function $\zeta$ which is $x_0$-independent and satisfies \begin{equation}\label{cutoff-F} \zeta= \begin{cases} 1 & \text{ in } \Delta, \\ 0 & \text{ outside } 2\Delta, \end{cases} \end{equation} where $2\Delta$ is a ball of radius $2r$ centered at $y'$. Moreover, assume that $r|\nabla \zeta| \leq c$ for some positive constant $c$ independent of $y'$. We note that since $$\partial_0(A_{0j}\partial_ju)=\partial_j(A_{0j}\partial_0u)-(\partial_jA_{0j})\partial_0u+(\partial_0A_{0j})\partial_ju,$$ we may as well assume that $A_{0j}=0$ for $j>0$, by changing the coefficients $A_{0j}$ and $A_{j0}$ of the matrix $A$ and modifying $B$. We note that this does not affect the ellipticity of $A$ as all $A_{0j}$ are assumed to be real. It follows that we can assume $\partial_k A_{0j}=0$ for all $j,k=0,1,\dots,n-1$. We begin by considering the integral quantity, for some function $w$ (to be specified later) such that $w|w|^{p/2-1}\in W^{1,2}_{loc}(\Omega)$, \begin{equation}\label{A00} \mathcal{I}:=\mathscr{R}e\,\iint_{[0,s]\times 2\Delta}A_{ij}\partial_{j}w \partial_{i}(|w|^{p-2}\overline{w})x_0\zeta\,dx'\,dx_0 \end{equation} with the usual summation convention understood. Here $s\in [0,r]$. With $\chi=x_0\zeta$ we have by Theorem 2.4 of \cite{DPcplx}, for all $p$ for which $A$ is $p$-elliptic for some $\lambda_p>0$, \begin{equation}\label{cutoff-AA} \mathcal{I}\geq{\lambda_p}\iint_{[0,s]\times 2\Delta}|w|^{p-2}|\nabla w|^2 x_0\zeta\,dx'\,dx_0. \end{equation} The objective is to ultimately apply (\ref{cutoff-AA}) to $w=\partial_i u$, $i=1,\dots,n-1$ and obtain a quantity that can be bounded from above by expressions that involve $L^p$ norms of $|\nabla u|$ and nontangential maximal functions of $|\nabla u|$ on the boundary. To see this, we continue the calculation using the fact that we can bound the right-hand side of (\ref{cutoff-AA}) by the expression $\mathcal{I}$, which brings in the equation.
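In the computations that follow we repeatedly differentiate expressions of the form $|w|^{p-2}\overline{w}$. For the reader's convenience we record the elementary formula, valid at points where $w\ne 0$:
\begin{equation*}
\partial_i\left(|w|^{p-2}\overline{w}\right)=|w|^{p-2}\,\overline{\partial_i w}+(p-2)\,|w|^{p-4}\,\overline{w}\,\mathscr{R}e\left(\overline{w}\,\partial_i w\right),
\end{equation*}
which follows from $\partial_i|w|=|w|^{-1}\mathscr{R}e\left(\overline{w}\,\partial_i w\right)$. In particular, one has the pointwise bound $\left|\partial_i\left(|w|^{p-2}\overline{w}\right)\right|\le (1+|p-2|)\,|w|^{p-2}\,|\partial_i w|$, which is how factors of the form $|w|^{p-2}|\nabla w|$ arise in the estimates below.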
For the moment, we ignore the issue of finiteness of this expression, even though we use this fact in the calculations that follow. We will return to this point after some of the basic calculations, for the sake of clarity of exposition. The idea now is to integrate by parts the formula for $\mathcal I$ in order to relocate the $\partial_i$ derivative. This gives \begin{align}\label{I+...+IV} \mathcal{I} &= \mathscr{R}e\,\int_{\partial\left[(0,s)\times 2\Delta\right]} A_{ij}\partial_{j}w|w|^{p-2}\overline{w}x_0\zeta\nu_{x_i}\,d\sigma \nonumber\\[4pt] &\quad - \mathscr{R}e\,\iint_{[0,s]\times 2\Delta}\partial_{i}\left(A_{ij} \partial_{j}w\right)|w|^{p-2}\overline{w}x_0\zeta\,dx'\,dx_0 \nonumber\\[4pt] &\quad - \mathscr{R}e\,\iint_{[0,s]\times 2\Delta}A_{ij}\partial_{j}{w}|w|^{p-2}\overline{w}\partial_{i}x_0\zeta\,dx'\,dx_0 \nonumber\\[4pt] &\quad - \mathscr{R}e\,\iint_{[0,s]\times 2\Delta}A_{ij}\partial_{j}w|w|^{p-2}\overline{w}x_0\partial_{i}\zeta\,dx'\,dx_0 \nonumber\\[4pt] &=:I+II+III+IV, \end{align} where $\nu$ is the outer unit normal vector to $(0,s)\times 2\Delta$. The boundary term $I$ is nonvanishing only on the set $\{s\}\times 2\Delta$ and only when $i=0$. This gives \begin{equation}\label{cutoff-BBB} I=\mathscr{R}e\,\int_{\{s\}\times 2\Delta} A_{0j}\partial_{j}w|w|^{p-2}\overline{w}x_0\zeta\,d\sigma. \end{equation} As $\partial_ix_0=0$ for $i>0$, the term $III$ is nonvanishing only for $i=0$.
Since $A_{0j}=0$ for $j>0$ and $A_{00} =1$, the term $III$ simplifies to \begin{align}\label{u6fF} III &=-\mathscr{R}e\,\iint_{[0,s]\times 2\Delta}\partial_{0}{w}|w|^{p-2}\overline{w}\zeta\,dx'\,dx_0 \nonumber\\ &=-\frac1p\iint_{[0,s]\times 2\Delta} \partial_{0}(|w|^{p})\zeta\,dx'\,dx_0 \\ &=-\frac{1}{p}\int_{2\Delta} |w|^p(s,x')\zeta\,dx' + \frac{1}{p}\int_{2\Delta} |w|^p(0,x')\zeta\,dx'. \nonumber \end{align} We add up all terms we have so far to obtain \begin{equation}\label{square01}\begin{split} \mathcal I &\leq p^{-1}\int_{2\Delta} \partial_{0}(|w|^p)(s,x') s \zeta \,dx' - \mathscr{R}e\,\iint_{[0,s]\times 2\Delta}\partial_{i}\left(A_{ij} \partial_{j}w\right)|w|^{p-2}\overline{w}x_0\zeta\,dx'\,dx_0 \\ &\quad - {p}^{-1}\int_{2\Delta} |w|^p(s,x')\zeta\,dx' + {p}^{-1}\int_{2\Delta} |w|^p(0,x')\zeta\,dx' + IV. \end{split}\end{equation} So far $w$ was an arbitrary function. We now apply \eqref{square01} to $w_k=\partial_k u$, $k=1,2,\dots,n-1$, where $u$ solves ${\mathcal L}u=0$. Differentiating the equation in $x_k$, it follows that each $w_k$ solves \begin{equation}\label{eq-partial} {\mathcal L}w_k=\partial_i(A_{ij}\partial_jw_k)+B_i\partial_iw_k=-\partial_i((\partial_k A_{ij})w_j)-(\partial_k B_i)w_i.
\end{equation} It follows that \begin{equation}\label{eq-II}\begin{split} II&=- \mathscr{R}e\,\iint_{[0,s]\times 2\Delta}\partial_{i}\left(A_{ij} \partial_{j}w_k\right)|w_k|^{p-2}\overline{w_k}x_0\zeta\,dx'\,dx_0 \\ &=\mathscr{R}e\,\iint_{[0,s]\times 2\Delta}B_i(\partial_iw_k)|w_k|^{p-2}\overline{w_k}x_0\zeta\,dx'\,dx_0\\ &-\mathscr{R}e\,\iint_{[0,s]\times 2\Delta}(\partial_iA_{ij})w_j|w_k|^{p-2}\overline{w_k}x_0\partial_k\zeta\,dx'\,dx_0\\ &-\mathscr{R}e\,\iint_{[0,s]\times 2\Delta}B_iw_i|w_k|^{p-2}\overline{w_k}x_0\partial_k\zeta\,dx'\,dx_0\\ &-\mathscr{R}e\,\iint_{[0,s]\times 2\Delta}(\partial_iA_{ij})\partial_k(w_j|w_k|^{p-2}\overline{w_k})x_0\zeta\,dx'\,dx_0\\ &+\mathscr{R}e\,\iint_{[0,s]\times 2\Delta}(\partial_kA_{ij})(\partial_iw_j)|w_k|^{p-2}\overline{w_k}x_0\zeta\,dx'\,dx_0\\ &-\mathscr{R}e\,\iint_{[0,s]\times 2\Delta}B_i\partial_k(w_i|w_k|^{p-2}\overline{w_k})x_0\zeta\,dx'\,dx_0. \end{split}\end{equation} Here we integrated by parts the terms containing two derivatives of $A$ or one derivative of $B$ by moving the $\partial_k$ derivative. It is important here that $k\ne 0$ and hence $\partial_kx_0=0$. The first term on the right-hand side, as well as the last three terms, is estimated using the Cauchy-Schwarz inequality, the Carleson conditions for $A$ and $B$ and Theorem~\ref{T:Car}; for instance, \begin{align}\label{TWO-TWO} |II_4|+|II_{5}|+|II_6| &\leq\left(\iint_{[0,s]\times 2\Delta}\left(|\nabla A|+|B|\right)^{2} |w|^{p} x_0\zeta\,dx'\,dx_0\right)^{1/2} \cdot\nonumber\\ &\qquad\left(\iint_{[0,s]\times 2\Delta}|w|^{p-2}|\nabla w|^{2}x_0\zeta\,dx'\,dx_0\right)^{1/2} \nonumber\\[4pt] &\leq C(\lambda_p,\Lambda,p,n)\left(\|\mu\|_{\mathcal{C}}\int_{2\Delta} \left[\tilde{N}^r_{p,a}(w)\right]^{p}\,dx'\right)^{1/2}\cdot\mathcal{I}^{1/2}.
\end{align} In summary, we get (after using the AG-inequality) $$|II|\le C(\lambda_p,\Lambda,p,n)\|\mu\|_{\mathcal{C}}\int_{2\Delta} \left[\tilde{N}^r_{p,a}(\nabla u)\right]^{p}\,dx' +\frac12\mathcal{I}+II_{2}+II_3.$$ It follows that \eqref{square01} simplifies to (after summing over $k=1,2,\dots,n-1$) \begin{equation}\label{square01aa}\begin{split} \sum_{k=1}^{n-1}\mathcal I_k &\leq p^{-1}\int_{2\Delta} \partial_{0}(|\nabla_T u|^p)(s,x') r \zeta \,dx' \\ &\hskip-1cm - {p}^{-1}\int_{2\Delta} |\nabla_T u|^p(s,x')\zeta\,dx' + {p}^{-1}\int_{2\Delta} |\nabla_T u|^p(0,x')\zeta\,dx' \\ &\hskip-1cm+ C(\lambda_p,\Lambda,p,n) \|\mu\|_{\mathcal{C}} \int_{2\Delta} \left[\tilde{N}^{r}_{p,a}(\nabla u)\right]^p \,dx' +II_{2}+II_3 + IV. \end{split}\end{equation} We estimate the term $IV$. It can be bounded (up to a constant) by \begin{equation}\label{eq5.16} \iint_{[0,s]\times 2\Delta}|\nabla w||w|^{p-1}x_0|\partial_T\zeta|dx'dx_0, \end{equation} where $\partial_T\zeta$ denotes any of the derivatives in the direction parallel to the boundary. Recall that $\zeta$ is a smooth cutoff function equal to $1$ on $\Delta$ and $0$ outside $2\Delta$. In particular, we may assume $\zeta$ to be of the form $\zeta=\eta^2$ for another smooth function $\eta$ such that $|\nabla_T\eta|\le C/r$. By Cauchy-Schwarz, \eqref{eq5.16} can be further estimated by \begin{equation}\label{eq5.17} \left(\iint_{[0,s]\times 2\Delta}|\nabla w|^2|w|^{p-2}x_0(\eta)^2dx'dx_0\right)^{1/2}\left(\iint_{[0,s]\times 2\Delta}|w|^{p}x_0|\nabla_T\eta|^2dx'dx_0\right)^{1/2} \end{equation} \begin{equation} \lesssim{\mathcal I}^{1/2}\left(\frac1r\iint_{[0,s]\times (2\Delta\setminus\Delta)}|w|^pdx'dx_0\right)^{1/2}\le \varepsilon{\mathcal I}+C_\varepsilon\int_{2\Delta\setminus\Delta}\left[\tilde{N}^r_{p,a}(\nabla u)\right]^{p}\,dx'.\nonumber \end{equation} In the last step we have used the AG-inequality and a trivial estimate of the solid integral of $|w|^p$ by the $p$-averaged nontangential maximal function.
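Both applications of the AG-inequality above are instances of the elementary bound $ab\le\frac{a^2}{4\varepsilon}+\varepsilon b^2$, valid for all $a,b\ge 0$ and $\varepsilon>0$. For example, with $\varepsilon=\frac12$,
\begin{equation*}
C\left(\|\mu\|_{\mathcal{C}}\int_{2\Delta}\left[\tilde{N}^r_{p,a}(\nabla u)\right]^{p}\,dx'\right)^{1/2}\mathcal{I}^{1/2}\le \frac{C^2}{2}\,\|\mu\|_{\mathcal{C}}\int_{2\Delta}\left[\tilde{N}^r_{p,a}(\nabla u)\right]^{p}\,dx'+\frac12\,\mathcal{I},
\end{equation*}
which is precisely how the term $\frac12\mathcal{I}$ above is produced and subsequently absorbed into the left-hand side.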
Terms $II_2$ and $II_3$ are also similar. We use $|\nabla A|,|B|\le \|\mu\|^{1/2}_{\mathcal C}/x_0$ and what remains has a trivial estimate by $\int_{2\Delta\setminus\Delta}\left[\tilde{N}^r_{p,a}(\nabla u)\right]^{p}\,dx'$. Substituting this and \eqref{eq5.17} into \eqref{square01aa}, integrating in $s$ over $[0,r]$ and dividing by $r$, we finally obtain \begin{align}\label{square02aa-loc} &\hskip -0.20in \int_{\Delta}\left[S^{r/2}_p(\nabla_T u)\right]^p\,dx'\le \nonumber\\[4pt] &\hskip -0.20in 2\sum_{k=1}^{n-1}\iint_{[0,r]\times\Delta}|\nabla(\partial_k u)|^2|\partial_ku|^{p-2}\,x_0(1-{\textstyle\frac{x_0}{r}})\,dx'\,dx_0\lesssim \nonumber\\[4pt] &\quad {p}^{-1}\int_{2\Delta} |\nabla_T u|^p(0,x')\,dx' + {p}^{-1}\int_{2\Delta} |\nabla_T u|^p(r,x')\,dx' \nonumber\\[4pt] &\quad +C\|\mu\|_{\mathcal{C}}\int_{2\Delta}\left[\tilde{N}^{r}_{p,a}(\nabla u)\right]^p\,dx'+C\int_{2\Delta\setminus\Delta}\left[\tilde{N}^r_{p,a}(\nabla u)\right]^{p}\,dx'. \end{align} We return now to the issue of finiteness of the quantities in \eqref{cutoff-AA}. We fix an $\varepsilon > 0$ and consider a bound for the expression \begin{equation}\label{eps} \iint_{[\varepsilon,s]\times 2\Delta}|w|^{p-2}|\nabla w|^2 (x_0-\varepsilon)\zeta\,dx'\,dx_0 \end{equation} instead of $\iint_{[0,s]\times 2\Delta}|w|^{p-2}|\nabla w|^2 x_0\zeta\,dx'\,dx_0$. The quantity \eqref{eps} is finite by the interior estimates \eqref{RHthm2}.
By Theorem \ref{T:Car}, all of the previous calculations for the term (\ref{eps}), after averaging in $s$, will give the following estimate: \begin{align} &\hskip -0.20in \sum_{k=1}^{n-1}\iint_{[\varepsilon,r/2]\times\Delta}|\nabla(\partial_k u)|^2|\partial_ku|^{p-2}\,(x_0-\varepsilon)\,dx'\,dx_0\lesssim \nonumber\\[4pt] &\quad {p}^{-1}\int_{2\Delta} |\nabla_T u|^p(\varepsilon,x')\,dx' + {p}^{-1}\int_{2\Delta} |\nabla_T u|^p(r,x')\,dx' \nonumber\\[4pt] &\quad +C\|\mu\|_{\mathcal{C}}\int_{2\Delta}\left[\tilde{N}^{r}_{p,a,\varepsilon}(\nabla u)\right]^p\,dx'+C\int_{2\Delta\setminus\Delta} \left[\tilde{N}^r_{p,a,\varepsilon}(\nabla u)\right]^{p}\,dx', \end{align} where $\tilde{N}^r_{p,a,\varepsilon}(\nabla u)$ denotes the nontangential maximal function relative to the domain $\{x_0 > \varepsilon\}$ as defined in \eqref{TFC-6x}. To deal with the quantity $\int_{2\Delta} |\nabla_T u|^p(\varepsilon,x')\,dx'$, which is not itself a priori finite, we average the inequalities above over $\varepsilon \in [\varepsilon_0/2, \varepsilon_0]$. Averaging in $r$ as we have done earlier bounds the boundary integral $\int_{2\Delta} |\nabla_T u|^p(r,x')\,dx'$ by a solid integral and we obtain, for each $k = 1,\dots,n-1$, \begin{equation} \iint_{[\varepsilon_0,r/4]\times\Delta}|\nabla(\partial_k u)|^2|\partial_ku|^{p-2}\,(x_0-\varepsilon_0)\,dx'\,dx_0 \lesssim C\int_{2\Delta} \left[\tilde{N}^r_{p,a,\varepsilon_0/2}(\nabla u)\right]^{p}\,dx'. \end{equation} By Fatou's lemma, letting $\varepsilon_0 \rightarrow 0$, the expressions in (\ref{cutoff-AA}) have an upper bound in terms of $\int_{2\Delta}\left[\tilde{N}^{r}_{p,a}(\nabla u)\right]^p\,dx'$. Whenever this nontangential maximal function expression is finite, the calculations leading to (\ref{square02aa-loc}) that depend on the finiteness of (\ref{cutoff-AA}) are justified.
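The limiting step deserves a brief comment. For fixed $(x_0,x')$ the weight increases monotonically as $\varepsilon_0$ decreases, since
\begin{equation*}
\chi_{[\varepsilon_0,r/4]}(x_0)\,(x_0-\varepsilon_0)\nearrow x_0\qquad\text{as }\varepsilon_0\searrow 0,
\end{equation*}
so the truncated integrals converge to $\iint_{[0,r/4]\times\Delta}|\nabla(\partial_k u)|^2|\partial_ku|^{p-2}\,x_0\,dx'\,dx_0$ by monotone convergence. On the right-hand side one uses that, up to a harmless adjustment of the truncation, the nontangential cones relative to $\{x_0>\varepsilon_0/2\}$ are contained in the corresponding cones relative to $\{x_0>0\}$, and hence $\tilde{N}^r_{p,a,\varepsilon_0/2}(\nabla u)\le \tilde{N}^r_{p,a}(\nabla u)$ pointwise.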
To obtain a global version of \eqref{square02aa-loc}, consider a sequence of disjoint boundary balls $(\Delta_r(y'_j))_{j\in\mathbb N}$ such that $\cup_{j}\Delta_{2r}(y'_j) $ covers $\partial\Omega={\BBR}^{n-1}$ and consider a partition of unity $(\zeta_{j})_{j\in\mathbb N}$ subordinate to this cover. That is, assume $\sum_j \zeta_{j} = 1$ on ${\BBR}^{n-1}$ and each $\zeta_{j}$ is supported in $\Delta_{2r}(y'_j)$. Given that $\sum_j \partial_i\zeta_{j} = 0$ for each $i$, it follows by summing over all $j$ that $$\sum_{j} \left(II_{2}+II_3 + IV\right)= 0.$$ It follows from \eqref{square01aa} (after averaging in $s$ over $[0,r]$) that \begin{align}\label{square02aa} &\hskip -0.20in \int_{{\BBR}^{n-1}}\left[S^{r/2}_p(\nabla_T u)\right]^p\,dx'\le \nonumber\\[4pt] &\hskip -0.20in 2\sum_{k=1}^{n-1}\iint_{[0,r]\times{\BBR}^{n-1}}|\nabla(\partial_k u)|^2|\partial_ku|^{p-2}\,x_0(1-{\textstyle\frac{x_0}{r}})\,dx'\,dx_0\lesssim \nonumber\\[4pt] &\quad {p}^{-1}\int_{{\BBR}^{n-1}} |\nabla_T u|^p(0,x')\,dx' + {p}^{-1}\int_{{\BBR}^{n-1}} |\nabla_T u|^p(r,x')\,dx' \nonumber\\[4pt] &\quad +C\|\mu\|_{\mathcal{C}}\int_{{\BBR}^{n-1}}\left[\tilde{N}^{r}_{p,a}(\nabla u)\right]^p\,dx'. \end{align} We now modify our calculation above by considering a Lipschitz function $g:{\mathbb R}^{n-1}\to [0,\infty)$ such that $\sup_{2\Delta} g \le r/4$ (we only assume this to avoid integration over an empty set). We perform the same calculation starting from \eqref{A00} but this time we integrate over the set $$([0,s]\times 2\Delta)\cap\Omega_g,$$ where $$\Omega_g:=\{(x_0,x')\in\mathbb R\times{\mathbb R}^{n-1}:\, x_0>g(x')\}.$$ Rather than repeating the whole calculation again we focus on the differences. We note that we will only consider $s\in[r/2,2r]$ to avoid complications that might arise from integration over empty sets.
The first difference will be that the term $I$ of \eqref{I+...+IV} will contain an additional boundary portion, and hence \begin{align}\label{cutoff-BBBv2} I&=\mathscr{R}e\,\int_{\{s\}\times 2\Delta} (\partial_{0}w)|w|^{p-2}\overline{w}x_0\zeta\,d\sigma\\&+ \mathscr{R}e\,\int_{([0,s]\times2\Delta)\cap\partial\Omega_g} A_{ij}\partial_{j}w|w|^{p-2}\overline{w}x_0\zeta\nu_i\,d\sigma,\nonumber \end{align} where $\nu_i$ is the $i$-component of the outer normal of $\partial\Omega_g$. Term \eqref{u6fF} becomes \begin{align}\label{u6fFv2} III &=-\frac{1}{p}\int_{2\Delta} |w|^p(s,x')\zeta\,dx' + \frac{1}{p}\int_{2\Delta} |w|^p(g(x'),x')\zeta\,dx' . \end{align} We look at the term $II$. As we integrate by parts to obtain \eqref{eq-II} we pick up two extra boundary terms: \begin{align}\label{eq-IIbd} II_{bd} =&- \mathscr{R}e\,\int_{([0,s]\times 2\Delta)\cap\partial\Omega_g}(\partial_iA_{ij})w_j|w_k|^{p-2}\overline{w_k}x_0\nu_k\zeta\,d\sigma\\ &+\mathscr{R}e\,\int_{([0,s]\times 2\Delta)\cap\partial\Omega_g}B_iw_i|w_k|^{p-2}\overline{w_k}x_0\nu_k\zeta\,d\sigma. \nonumber \end{align} We also modify some estimates. Terms $II_4$, $II_5$ and $II_6$ of \eqref{eq-II} are now integrated over the set $([0,s]\times 2\Delta)\cap\Omega_g$, which allows us to use the estimate \eqref{Ca-222-x} of Theorem~\ref{T:Car}. This gives us \begin{equation} |II_4|+|II_5|+|II_6|\lesssim\left(\|\mu\|_{\mathcal{C}}\int_{([0,s]\times 2\Delta)\cap\partial\Omega_g} \left[\tilde{N}^{2r}_{p,a,g}(\nabla u)\right]^{p}\,dx'\right)^{1/2}\cdot\mathcal{I}^{1/2}. \end{equation} A similar observation applies to the terms $II_2$, $II_3$ and $IV$.
It follows that what we have so far implies the following estimate, for some $c_p>0$: \begin{equation}\label{square01aav2}\begin{split} &\hskip-0.5cm c_p\sum_{k=1}^{n-1}\iint_{([0,s]\times 2\Delta)\cap\Omega_g}|\nabla(\partial_k u)|^2|\partial_k u|^{p-2}x_0\zeta\,dx'dx_0\\ &\hskip-1cm \leq\, p^{-1}\int_{2\Delta} \partial_{0}(|\nabla_T u|^p)(s,x') r \zeta \,dx' \\ &\hskip-1cm - {p}^{-1}\int_{2\Delta} |\nabla_T u|^p(s,x')\zeta\,dx' + {p}^{-1}\int_{2\Delta} |\nabla_T u|^p(g(x'),x')\zeta\,dx' \\ &\hskip-1cm+ C(\lambda_p,\Lambda,\|\mu\|_{\mathcal{C}},p,n) \int_{T(2\Delta)\cap\partial\Omega_g} \left[\tilde{N}^{2r}_{p,a,g}(\nabla u)\right]^p \,dx'\\ &\hskip-1cm+ \sum_{k=1}^{n-1}\mathscr{R}e\,\int_{([0,s]\times2\Delta)\cap\partial\Omega_g} A_{ij}\partial_{j}(\partial_k u)|\partial_k u|^{p-2}\overline{\partial_k u}\,x_0\zeta\nu_i\,d\sigma+II_{bd}. \end{split}\end{equation} Our goal is to estimate the first two terms on the right-hand side of \eqref{square01aav2} by $\tilde{N}^{2r}_{p,a,g}$. To do that we average in $s$ twice. We first integrate \eqref{square01aav2} over an interval $s\in[r/2(1+\theta), r(1+\theta)]$ and then integrate the resulting inequality again in $\theta\in[0,1]$. This turns both mentioned terms into solid integrals of $|\nabla_T u|^p$ over a Whitney-type box inside $\Omega_g$. This simplifies \eqref{square01aav2} to \begin{equation}\label{square01aav3}\begin{split} &\hskip-0.5cm c_p\sum_{k=1}^{n-1}\iint_{([0,r/2]\times \Delta)\cap\Omega_g}|\nabla(\partial_k u)|^2|\partial_k u|^{p-2}x_0\,dx'dx_0\\ &\hskip-1cm\le {p}^{-1}\int_{2\Delta} |\nabla_T u|^p(g(x'),x')\,dx'\\ &\hskip-1cm+ C(\lambda_p,\Lambda,\|\mu\|_{\mathcal{C}},p,n) \int_{T(2\Delta)\cap\partial\Omega_g} \left[\tilde{N}^{2r}_{p,a,g}(\nabla u)\right]^p \,dx'\\ &\hskip-1cm+ \sum_{k=1}^{n-1}\mathscr{R}e\,\int_{([0,s]\times2\Delta)\cap\partial\Omega_g} A_{ij}\partial_{j}(\partial_k u)|\partial_k u|^{p-2}\overline{\partial_k u}\,x_0\zeta\nu_i\,d\sigma+II_{bd}.
\end{split}\end{equation} We shall use \eqref{square01aav3} in the following lemma. \begin{lemma}\label{LGL2} Let $\Omega=\BBR^n_+$ and let $u$ be the energy solution of \eqref{eq-zoncolan}. Assume that $A$ is $p$-elliptic and smooth in $\BBR^n_+$ with $A_{00} =1$ and $A_{0j}$ real, and that the measure $\mu$ defined as in \eqref{Car_hatAA} is Carleson. Consider any $b>a>0$. Then for each $\gamma\in(0,1)$ there exists a constant $C(\gamma)=C(\gamma,a,b)>0$ such that $C(\gamma)\to 0$ as $\gamma\to 0$, with the property that for each $\nu>0$ we have \begin{align}\label{eq:gl2aa} &\hskip -0.20in \left|\Big\{x'\in {\BBR}^{n-1}:\,S_{p,a}(\nabla_Tu)(x')>\nu,\, \tilde{N}_b(\nabla u)(x')\le\gamma \nu\Big\}\right| \nonumber\\[4pt] &\hskip 0.50in \quad\le C(\gamma)\left|\big\{x'\in{\BBR}^{n-1}:\,{S}_{p,b}(\nabla_Tu)(x')>\nu/2\big\}\right|. \end{align} Here $\tilde{N}_b$ denotes the $L^2$ version of the nontangential maximal function defined over cones of aperture $b$. \end{lemma} \begin{proof} We observe that $\tilde{N}_b(\nabla u)\le \gamma\nu$ also implies $\tilde{N}_{p,b}(\nabla u)\lesssim\gamma\nu$ thanks to Proposition \ref{Regularity}. Also, clearly, $\big\{x'\in{\BBR}^{n-1}:\,{S}_{p,b}(\nabla_T u)(x')>\nu/2\big\}$ is an open subset of ${\BBR}^{n-1}$. When this set is empty, or is all of ${\BBR}^{n-1}$, estimate \eqref{eq:gl2aa} is trivial, so we focus on the case when the set in question is both nonempty and proper. Granted this, we may consider a Whitney decomposition $(\Delta_i)_{i\in I}$ of it, consisting of open cubes in ${\mathbb{R}}^{n-1}$. Let $F_\nu^i$ be the set appearing on the left-hand side of \eqref{eq:gl2aa} intersected with $\Delta_i$. Let $r_i$ be the diameter of $\Delta_i$. Due to the nature of the Whitney decomposition there exists a point $p'\in 2\Delta_i$ such that ${S}_{p,b}(\nabla_T u)(p')<\nu/2$.
From this and the fact that $b>a$ it follows that for all $x'\in F^i_\nu$ we have $$S^{d}_{p,a}(\nabla_T u)(x')>\nu/2,$$ where $S^{d}_{p,a}$ is the truncated version of the square function at some height $d\approx r_i$, where the precise relation between $d$ and $r_i$ depends on the apertures $a$ and $b$. For some $a<c<b$ consider the domain $$\Omega_c=\bigcup_{x'\in F^i_\nu} \Gamma_c(x');$$ this is a Lipschitz domain with Lipschitz constant $1/c$. Observe that $F^i_\nu\subset \partial\Omega_c$. It follows that $$|F^i_\nu|\le \frac{2^p}{\nu^p}\int_{F^i_\nu}\left[S^{d}_{p,a}(\nabla_Tu)(x')\right]^p\,dx'\lesssim \nu^{-p} \sum_{k=1}^{n-1}\iint_{{\Omega_c\cap T(\Delta_i)}}|\nabla(\partial_k u)|^2|\partial_k u|^{p-2}x_0\,dx.$$ We apply \eqref{square01aav3}. It follows that \begin{align}\label{eq-zonc2} |F^i_\nu|&\lesssim \nu^{-p}\Big\{\int_{\partial\Omega_c\cap T(2\Delta_i)}\left(\left|\nabla_T u\big|_{\partial\Omega_c}\right|^p+\left[\tilde{N}^{2d}_{p,a,c}(\nabla u)\right]^{p}\right)\,d\sigma\\ &+ \sum_{k=1}^{n-1}\Big[\mathscr{R}e\,\int_{T(2\Delta_i)\cap\partial\Omega_c} A_{ij}(\partial^2_{jk} u)|\partial_k u|^{p-2}\overline{\partial_k u}\,x_0\zeta\nu_i\,d\sigma\nonumber\\ &+\mathscr{R}e\,\int_{T(2\Delta_i)\cap\partial\Omega_c}(\partial_iA_{ij})\partial_j u| \partial_k u|^{p-2}\overline{\partial_k u}\,x_0\nu_k\zeta\,d\sigma\nonumber\\ &+\mathscr{R}e\,\int_{T(2\Delta_i)\cap\partial\Omega_c}B_i\partial_i u|\partial_k u|^{p-2}\overline{\partial_k u}\,x_0\nu_k\zeta\,d\sigma\Big]\Big\}, \nonumber \end{align} where $\tilde{N}^{2d}_{p,a,c}$ is defined using nontangential cones with aperture $a$ with vertices on $\partial\Omega_c$. Due to the fact that each of these cones is contained in one of the cones $\Gamma_b(x')$ for some $x'\in F^i_\nu$ (as $c<b$), and since $\tilde{N}_b(\nabla u)(x')\le\gamma \nu$ on $F^i_\nu$, we also have $\tilde{N}^{2d}_{p,a,c}(\nabla u)\lesssim\gamma\nu$ everywhere on $\partial\Omega_c$.
This takes care of the second term. We still need to deal with the four other terms on the right-hand side. We do it by converting these terms into solid integrals by averaging $c$ over the interval $[a,(a+b)/2]$. Let us denote $$\mathcal O=\Omega_{(a+b)/2}\setminus\overline{\Omega_a}.$$ ${\mathcal O}$ is the set over which the four terms we want to bound are integrated after the averaging. The sets ${\Omega_c}$ also share $F^i_\nu$ as a common boundary portion; there, however, we have the trivial estimate $$\int_{F^i_\nu}\left|\nabla_T u\big|_{\partial\Omega_c}\right|^pd\sigma\le (\gamma\nu)^p|\Delta_i|,$$ from the fact that $\tilde{N}_b(\nabla u)(x')\le\gamma \nu$ on $F^i_\nu$, while the last three terms of \eqref{eq-zonc2} vanish there (as $x_0=0$). Given the way the set $\mathcal O$ is defined, geometric considerations imply that it can be covered by a non-overlapping collection of Whitney cubes $\{Q_j\}$ in ${\mathbb R}^n_+$ with the following properties: \begin{equation} {\mathcal O}\subset\bigcup_j Q_j,\qquad r_j=\mbox{diam}(Q_j)\approx\mbox{dist}(Q_j,\partial{\mathbb R}^n_+),\quad 2Q_j\subset \Omega_b. \end{equation} Furthermore, the projections of $Q_j$ onto the boundary ${\mathbb R}^{n-1}$ are \lq\lq almost disjoint''; that is, each such projection overlaps with at most $K=K(a,b)$ other projections. From this, $\sum_j \mbox{diam}(Q_j)^{n-1}\approx |2\Delta_i|$. Consider the contribution of the first term on the right-hand side of \eqref{eq-zonc2} after the averaging in $c$ on each $Q_j$. Such a term can be bounded by $$(\mbox{diam}(Q_j))^{-1}\iint_{Q_j}|\nabla u|^p dx\lesssim (\gamma\nu)^p \mbox{diam}(Q_j)^{n-1},$$ where the bound $\lesssim (\gamma\nu)^p$ comes from the fact that $Q_j\subset \Omega_b$ and hence the $L^p$ average of $\nabla u$ on $Q_j$ has this bound from our assumptions.
Summing over all $Q_j$ gives us the bound $$\sum_j(\mbox{diam}(Q_j))^{-1}\iint_{Q_j}|\nabla u|^p dx\lesssim (\gamma\nu)^p |2\Delta_i|.$$ In fact, we have this bound also for the fourth and fifth terms on the right-hand side of \eqref{eq-zonc2}, since $|\nu_k|\le 1$ and $|\nabla A|x_0, |B|x_0\lesssim\|\mu\|_{\mathcal C}^{1/2}$, and hence we are again dealing with a solid integral of $|\nabla u|^p$ over each $Q_j$. Finally, the third term on the right-hand side of \eqref{eq-zonc2} is somewhat different; on $Q_j$ it is bounded by $$(\mbox{diam}(Q_j))^{-1}\iint_{Q_j}|\partial_k u|^{p-1}|\nabla \partial_k u|x_0 dx,$$ which, since $x_0\approx \mbox{diam}(Q_j)$, is further bounded via the Cauchy-Schwarz inequality by $$\lesssim (\mbox{diam}(Q_j))^{-1}\left(\iint_{Q_j}|\partial_k u|^{p} dx\right)^{1/2}\left((\mbox{diam}(Q_j))^2\iint_{Q_j}|\nabla\partial_k u|^2|\partial_k u|^{p-2} dx\right)^{1/2},$$ where for the second factor we can use \eqref{RHthm2} to again obtain a bound of the whole expression by $C(\gamma\nu)^p\,\mbox{diam}(Q_j)^{n-1}$. It follows that after the averaging procedure every term of \eqref{eq-zonc2} obeys the same bound (up to a constant), and hence $$|F^i_\nu|\le C(a,b,\|\mu\|_{\mathcal C})\gamma^p|\Delta_i|.$$ Summing over all $i$ yields \eqref{eq:gl2aa} as desired. \end{proof} We will require a localized version of Lemma \ref{LGL2} as well. \begin{lemma}\label{LocalGoodL} Let $u$ be as in Lemma \ref{LGL2}. Fix $R \geq h$ and consider a boundary ball $\Delta_R \subset {\BBR}^{n-1}$. Let $p\geq q > 1$, where $q$ is any exponent such that $A$ is $q$-elliptic. Let $$\nu_0^p = C \Xint-_{\Delta_{2R}} \left[N_b^{2R}(\nabla u)\right]^p dx',$$ where $C$ is a constant depending only on dimension (calculated in the proof below).
Then for each $\gamma\in(0,1)$ there exists a constant $C(\gamma)=C(\gamma,a,b)>0$ such that $C(\gamma)\to 0$ as $\gamma\to 0$, with the property that for each $\nu>\nu_0$ \begin{align}\label{eq:gl2-2} &\hskip -0.20in \left|\Big\{x'\in \Delta_R :\,S^R_{q,a}(\nabla_Tu)(x')>\nu,\, \tilde{N}^{2R}_b(\nabla u)(x')\le\gamma \nu\Big\}\right| \nonumber\\[4pt] &\hskip 0.50in \quad\le C(\gamma)\left|\big\{x'\in \Delta_R:\,{S}_{q,b}(\nabla_Tu)(x')>\nu/2\big\}\right|. \end{align} \end{lemma} \begin{proof} It follows from \eqref{square02aa-loc} (by the by-now familiar averaging) that \begin{equation}\label{eq-tourm} \|S^{R}_{q, b}(\nabla_Tu)\|_{L^q(\Delta_R)} \lesssim \|N^{2R}_b(\nabla u)\|_{L^q(\Delta_{2R})}. \end{equation} Therefore, \begin{align}\label{Sqwithnu} \hskip -0.20in \big| \Delta_R \cap \{S^R_{q,b}(\nabla_Tu) > \nu/2\} \big| &\lesssim \nu^{-q} \|N^{2R}_b(\nabla u)\|^q_{L^q(\Delta_{2R})}\\ & \lesssim \nu^{-q} \|N^{2R}_b(\nabla u)\|^{q}_{L^p(\Delta_{2R})} \big| \Delta_{2R} \big|^{1-q/p} \nonumber\\ &\lesssim C_{\varepsilon}\nu^{-p} \int_{\Delta_{2R}} (N^{2R}_b(\nabla u))^p\,dx' + \varepsilon \big|\Delta_{R}\big|. \end{align} Choosing $\varepsilon = 1/4$ determines $C_{\varepsilon}$, and we now fix $C = 4C_{\varepsilon}$ in the definition of $\nu_0$. This implies that for any $\nu>\nu_0$ we have $$\big| \Delta_R \cap \{S^R_{q,b} > \nu/2\} \big| < 1/2 \big| \Delta_R\big|.$$ Thus, there exists a Whitney decomposition of $\Delta_R \cap \{S^R_{q,b} > \nu/2\}$ into open cubes $\Delta_i$ with the property that $2\Delta_i \cap \Delta_R$ contains a point for which $S^R_{q,b}(\nabla_T u) < \nu/2.$ From this point on, the proof proceeds as in Lemma \ref{LGL2}.
\end{proof} \begin{corollary}\label{S4:C1} Under the assumptions of Lemma \ref{LGL2}, for any $q \geq p >1$ and $a>0$ there exists a finite constant $C=C(\lambda_p,\Lambda,p,q,a,\|\mu\|_{\mathcal C},n)>0$ such that \begin{equation}\label{S3:C7:E00oo=sdd} \|S^R_{p,a}(\nabla_T u)\|_{L^{q}({\Delta_{R}})} \le C\|\tilde{N}^{2R}_{p,a}(\nabla u)\|_{L^{q}({\Delta_{2R}})}, \end{equation} \begin{equation}\label{S3:C7:E00oo=s} \|S_{p,a}(\nabla_Tu)\|_{L^{q}({\BBR}^{n-1})}\le C\|\tilde{N}_{p,a}(\nabla u)\|_{L^{q}({\BBR}^{n-1})}. \end{equation} The inequality \eqref{S3:C7:E00oo=s} also holds for any $q > 0$, provided we know a priori that $\|S_{p,a}(\nabla_Tu)\|_{L^{q}({\BBR}^{n-1})}<\infty$. \end{corollary} \begin{proof} The estimate \eqref{S3:C7:E00oo=sdd} is a consequence of the local good-$\lambda$ inequality established above and the equivalence (\cite{CMS}) of $p$-adapted square functions with different apertures in any $L^q$ norm. When $q \geq p$ and $M$ is large, $$ \int_0^M \nu^{q-1} \big| \Delta_R \cap \{S^R_{p,a}(\nabla_Tu) > \nu\}\big| d\nu \leq C(M) \int_0^M \nu^{p-1} \big| \Delta_R \cap \{S^R_{p,a}(\nabla_Tu) > \nu\}\big| d\nu. $$ By \eqref{eq-tourm} and the fact that the coefficients are smooth, the right hand side is finite. Therefore, the left hand side is also bounded, with a constant that may depend on $M$. Now we multiply the good-$\lambda$ inequality of Lemma \ref{LocalGoodL} by $\nu^{p-1}$ and integrate separately over $(0, \nu_0)$ and $(\nu_0, M)$. This gives $$ \|S^R_{p,a}(\nabla_T u)\|_{L^{q}({\Delta_{R}})} \le C\|\tilde{N}^{2R}_{p,a}(\nabla u)\|_{L^{q}({\Delta_{2R}})}, $$ \noindent after taking the limit as $M \to \infty$. The estimate \eqref{S3:C7:E00oo=s} follows by taking the limit $R\to\infty$. When $q<p$, the local good-$\lambda$ inequality is not available, which is why we need the additional a priori assumption $\|S_{p,a}(\nabla_Tu)\|_{L^{q}({\BBR}^{n-1})}<\infty$. The proof otherwise proceeds as above, but using Lemma \ref{LGL2}.
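Schematically, the absorption mechanism used in both cases runs as follows. For a generic exponent $s>0$, writing $S$ and $\tilde N$ for the (possibly truncated) square and nontangential maximal functions appearing in the good-$\lambda$ inequality, we have $$|\{S>\nu\}|\le |\{S>\nu,\ \tilde{N}\le\gamma\nu\}|+|\{\tilde{N}>\gamma\nu\}|\le C(\gamma)|\{S>\nu/2\}|+|\{\tilde{N}>\gamma\nu\}|,$$ and hence, multiplying by $\nu^{s-1}$ and integrating, $$\int_0^{M}\nu^{s-1}|\{S>\nu\}|\,d\nu\le 2^{s}C(\gamma)\int_0^{M}\nu^{s-1}|\{S>\nu\}|\,d\nu+\frac{\gamma^{-s}}{s}\|\tilde{N}\|^{s}_{L^{s}}.$$ Choosing $\gamma$ so small that $2^{s}C(\gamma)\le 1/2$, the first term on the right may be absorbed into the lefthand side (this uses the a priori finiteness of the truncated integral, which is exactly what the restriction to $\nu\le M$ provides), and letting $M\to\infty$ yields $\|S\|_{L^{s}}\lesssim \gamma^{-1}\|\tilde{N}\|_{L^{s}}$.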
\end{proof} So far we have avoided considering the square function of $\partial_0u$. We remedy this now. Observe that since $$|\nabla(\partial_0 u)|\le |\partial^2_{00}u|+|\nabla(\nabla_Tu)|,$$ we can use the previous calculations for the square function of $\nabla_Tu$ and focus on $\partial^2_{00}u$. Since $u$ solves ${\mathcal L}u=0$ and $A_{00}=1$ we have $$\partial^2_{00}u=-\sum_{(i,j)\ne (0,0)}\partial_i(A_{ij}\partial_ju)-\sum_iB_i\partial_i u.$$ It follows that we have the estimate \begin{equation}\label{zertex1} S^R_{2,a}(\partial_0 u)(x')\le S^R_{2,a}(\nabla_T u)(x') +C\,{\mathcal T}^R_{a}(\nabla u)(x'), \end{equation} where we define \begin{equation}\label{zertex2} {\mathcal T}^R_{a}(\nabla u)(Q)=\left(\int_{\Gamma^R_a(Q)}(|\nabla A|^2+|B|^2)|\nabla u|^2\delta(x)^{2-n}dx\right)^{1/2}. \end{equation} Considering the same $\Omega_g$ as above we have an analogue of \eqref{square01aav3}: \begin{equation}\label{square01aav3-2}\begin{split} &\hskip-1cm\iint_{([0,r]\times \Delta)\cap\Omega_g}(|\nabla A|^2+|B|^2)|\nabla u|^2x_0\,dx'dx_0\\ \hskip-1cm&\le C\|\mu\|_{\mathcal C}\int_{T(2\Delta)\times\partial\Omega_g} \left[\tilde{N}^{2r}_{p,a,g}(\nabla u)\right]^2 \,dx. \end{split}\end{equation} It follows that we can establish a good-lambda inequality analogous to Lemma \ref{LGL2}. \begin{lemma}\label{LGL} Let $\Omega=\BBR^n_+$ and assume $u$ is the energy solution of \eqref{eq-zoncolan}. Assume that $A$ is $2$-elliptic and smooth in $\BBR^n_+$ with $A_{00} =1$ and $A_{0j}$ real, and that the measure $\mu$ defined as in \eqref{Car_hatAA} is Carleson. Consider any $b>a>0$.
Then for each $\gamma\in(0,1)$ there exists a constant $C(\gamma)=C(\gamma,a,b)>0$ with $C(\gamma)\to 0$ as $\gamma\to 0$ and with the property that for each $\nu>0$ we have \begin{align}\label{eq:gl2v2} &\hskip -0.20in \left|\Big\{x'\in {\BBR}^{n-1}:\,{\mathcal T}_{a}(\nabla u)(x')>\nu,\,\|\mu\|_{\mathcal{C}}^{1/2}\tilde{N}_b(\nabla u)(x')\le\gamma \nu\Big\}\right| \nonumber\\[4pt] &\hskip 0.50in \quad\le C(\gamma)\left|\big\{x'\in{\BBR}^{n-1}:\,{\mathcal T}_{b}(\nabla u)(x')>\nu/2\big\}\right|. \end{align} \end{lemma} We omit the proof, as it follows the same idea as the proof of Lemma \ref{LGL2}, using \eqref{square01aav3-2} in place of \eqref{square01aav3}. Also, averaging in $c$ is not needed. We also have an analogue of Lemma \ref{LocalGoodL} by the same argument. We record the consequences of these two results. \begin{corollary}\label{S4:C2} Under the assumptions of Lemma \ref{LGL2}, for any $q \geq 2$ and $a>0$ there exists a finite constant $C=C(\lambda_2,\Lambda,q,a,\|\mu\|_{\mathcal C},n)>0$ such that \begin{equation}\label{S3:C7:E00oo=sdd2} \|S^R_{2,a}(\partial_0 u)\|_{L^{q}({\Delta_{R}})} \le C\left[\|S^{R}_{2,a}(\nabla_T u)\|_{L^{q}(\Delta_{2R})}+\|\mu\|_{\mathcal{C}}^{1/2}\|\tilde{N}^{2R}_{2,a}(\nabla u)\|_{L^{q}({\Delta_{2R}})}\right], \end{equation} \begin{equation}\label{S3:C7:E00oo=s2} \|S_{2,a}(\partial_0 u)\|_{L^{q}({\BBR}^{n-1})}\le C\left[\|S_{2,a}(\nabla_T u)\|_{L^{q}({\BBR}^{n-1})}+\|\mu\|_{\mathcal{C}}^{1/2}\|\tilde{N}_{2,a}(\nabla u)\|_{L^{q}({\BBR}^{n-1})}\right]. \end{equation} The inequality \eqref{S3:C7:E00oo=s2} also holds for any $q > 0$, provided we know a priori that \newline $\|{\mathcal T}_{a}(\nabla u)\|_{L^{q}({\BBR}^{n-1})} < \infty.$ \end{corollary} We are now ready to establish a local solvability result. Let us consider domains of the following form. Let $\Delta_d\subset{\mathbb R}^{n-1}$ be a boundary ball or a cube of diameter $d$.
We denote by ${\mathcal O}_{\Delta_d,a}$ the domain \begin{equation}\label{Odom} {\mathcal O}_{\Delta_d,a}=\bigcup_{x'\in\Delta_d}\Gamma_a(x'). \end{equation} Here, as before, $\Gamma_a(x')$ denotes the nontangential region with aperture $a$ at a point $x'$ (c.f. Definition \ref{DEF-1}). Clearly, a domain such as \eqref{Odom} is a Lipschitz domain with Lipschitz constant $1/a$. It follows that if ${\mathcal L}$ satisfies the assumptions of Theorem \ref{S3:T0} on ${\mathbb R}^{n}_+$, it also satisfies them on any domain ${\mathcal O}_{\Delta_d,a}$, provided $1/a$ is sufficiently small. This can be seen via the pullback transformation \eqref{E:rho} which transforms the problem from ${\mathcal O}_{\Delta_d,a}$ back to ${\mathbb R}^n_+$. This modifies the coefficients of our PDE, yielding \begin{equation}\label{eq-pf14} \mbox{div}(\bar{A}\nabla v)=0. \end{equation} In particular, if the original PDE on ${\mathcal O}_{\Delta_d,a}$ satisfies $A_{00}=1$ and $A_{0j}$ are real, the modified coefficients $\bar{A}$ will fail to do so. However, we can fix this via the change of coefficients discussed in \eqref{eqSWAP} together with the observations noted below. It follows that \eqref{eq-pf14} can be rewritten as \begin{equation}\label{eq-pf15} \mbox{div}(\tilde{A}\nabla v)+\tilde{B}\cdot \nabla v=0. \end{equation} Because $1/a$ is small, the coefficient $\bar{A}_{00}$ is close to $1$ and the $\bar{A}_{0j}$ are almost real. It follows that rewriting \eqref{eq-pf14} as \eqref{eq-pf15} will not destroy the ellipticity and $p$-ellipticity of the matrix $\tilde{A}$. Hence the previous results of this section apply, as they were developed for operators ${\mathcal L}$ with first order (drift) terms. We note that in section \ref{SS:43} drift terms are not allowed, but the results of that section can still be applied to the PDE \eqref{eq-pf14}, because the special assumptions on $\bar{A}_{0j}$ are not used there.
To discuss solvability on the domain ${\mathcal O}_{\Delta_d,a}$ we need to consider the nontangential maximal function $\tilde N$ taken with respect to nontangential approach regions that are contained inside ${\mathcal O}_{\Delta_d,a}$; that is, we need to take regions $\Gamma_b(\cdot)$ for some $b<a$. Without loss of generality we choose $b=a/2$ and fix it for the remaining part of this section. Finally, $\nabla_T u$ at the boundary of ${\mathcal O}_{\Delta_d,a}$ is understood to be the tangential component of the gradient with respect to the boundary of this domain. For ease of notation we drop the dependence of the domain ${\mathcal O}_{\Delta_d,a}$ on $\Delta_d$ and $a$ and use ${\mathcal O}={\mathcal O}_{\Delta_d,a}$. We have the following result. \begin{lemma}\label{S6:L1} Let $\mathcal L$ be as in Theorem \ref{S3:T0} on the domain ${\mathbb R}^n_{+}$ and let $A$ be $q$-elliptic for some $q\ge 2$. Let $\mathcal O$ be a Lipschitz domain as above and assume $u$ is an arbitrary energy solution of ${\mathcal L}u=0$ in ${\mathbb R}^n_+$ with the Dirichlet boundary datum $\nabla_T f\in L^{q}(\partial \mathcal O;{\BBR}^N)$. Then there exist $m=m(a)>1$ and $K=K(\lambda_p,\Lambda,n,p)>0$ such that if $$\|\mu\|_{\mathcal C}+a^{-1}<K,$$ the following estimate holds: \begin{equation}\label{Main-Estlocxx} \|\tilde{N}_{a/2} (\nabla u)\|_{L^{q}(\Delta_d)}\leq C_q\|\nabla_T f\|_{L^{q}(\partial{\mathcal O\cap \overline{T(\Delta_{md})}})}+C_qd^{(n-1)/q}\sup_{x\in \mathcal O\cap\{\delta(x)>d\}}W_2(x), \end{equation} where $\delta(x)=\mbox{dist}(x,\partial{\mathbb R}^n_+)$ and $W_2(x)=\left(\Xint-_{B_{\delta(x)/4}(x)}|\nabla u(y)|^2 dy\right)^{1/2}$. \end{lemma} \begin{proof} In the last term of \eqref{Main-Estlocxx}, because of the way $\mathcal O$ is defined, we clearly have \begin{equation}\label{brmbrm} \{(x_0,x')\in\mathcal O:\, x'\notin\Delta_{(1+a)d}\}\subset \mathcal O\cap\{\delta(x)>d\}.
\end{equation} It follows that, by considering the pull-back map $\rho:{\mathbb R}^n_+\to\mathcal O$ defined in \eqref{E:rho}, proving \eqref{Main-Estlocxx} is equivalent to establishing \begin{equation}\label{Main-Estlocxxy} \|\tilde{N} (\nabla u)\|_{L^{q}(\Delta_d)}\leq C\|\nabla_T f\|_{L^{q}(\Delta_{md};{\BBR}^N)}+Cd^{(n-1)/q}\sup_{x\in {\mathbb R}^n_+\setminus T(\Delta_{(1+a)d})}W_2(x), \end{equation} where we now work on the domain ${\mathbb R}^n_+$ with $u$ solving $\mathcal Lu=0$ in ${\mathbb R}^n_+$ for $\mathcal L$ as in Theorem \ref{S3:T0}. We start with the term on the lefthand side of \eqref{Main-Estlocxxy}. It follows from \eqref{S3:C7:E00ooloc} that for some $m_1>1+a$ \begin{equation}\label{S3:C7:E00ooloc-2} \|\tilde{N}^{(1+a)d}_a(\nabla u)\|^q_{L^{q}(\Delta_d)}\le C\|S^{m_1d}_a(\nabla u)\|^q_{L^{q}(\Delta_{m_1d})}+Cd^{n-1}|\widetilde{\nabla u}(A_d)|^q. \end{equation} The last term above has a trivial bound by $Cd^{n-1}\sup_{x\in {\mathbb R}^n_+\setminus T(\Delta_{(1+a)d})}[W_2(x)]^q$. By \eqref{S3:C7:E00oo=sdd2} we have for $m_2=2m_1$: \begin{equation}\nonumber \|S^{m_1d}_{a}(\nabla u)\|_{L^{q}({\Delta_{m_1d}})} \le C\left[\|S^{m_2d}_{a}(\nabla_T u)\|_{L^{q}(\Delta_{m_2d})}+\|\mu\|_{\mathcal{C}}^{1/2}\|\tilde{N}^{m_2d}_{a}(\nabla u)\|_{L^{q}({\Delta_{m_2d}})}\right]. \end{equation} Using the H\"older inequality we have for any $x'\in{\mathbb R}^{n-1}$ \begin{align} &\hskip-5mm\left[S_{2,a}(\nabla_T u)(x')\right]^2=\sum_{k>0}\iint_{\Gamma_a(x')}|\nabla\partial_k u|^{2/q}|\partial_k u|^{1-2/q} |\nabla\partial_k u|^{2/q'}|\partial_k u|^{1-2/q'}x_0\,dx'\,dx_0\nonumber\\ &\le \sum_{k>0}\left(\iint_{\Gamma_a(x')}|\nabla\partial_k u|^2|\partial_k u|^{q-2} x_0\,dx'\,dx_0\right)^{1/q}\times\nonumber\\ &\hskip3cm\left(\iint_{\Gamma_a(x')}|\nabla\partial_k u|^2|\partial_k u|^{q'-2} x_0\,dx'\,dx_0\right)^{1/q'}\label{eq-pf3a}\\ \nonumber &\le S_{q,a}(\nabla_T u)(x')S_{q',a}(\nabla_T u)(x').
\end{align} Hence the previous line implies that for any $\varepsilon>0$ we have \begin{equation}\label{eq-qq'} S_{2,a}(\nabla_T u)(x')\le C_\varepsilon S_{q,a}(\nabla_T u)(x')+\varepsilon S_{q',a}(\nabla_T u)(x'), \end{equation} and the same inequality holds for the truncated square functions. Observe that $q\ge q'$ and hence we can use \eqref{S3:C7:E00oo=sdd} to estimate the second term. This gives us \begin{align}\nonumber \|S^{m_2d}_a(\nabla_T u)\|^q_{L^q(\Delta_{m_2d})}\le C_\varepsilon\|S^{m_2d}_{q,a}(\nabla_T u)\|^q_{L^q(\Delta_{m_2d})}+ \varepsilon^q\|\tilde{N}^{m_3d}(\nabla u)\|^q_{L^{q}(\Delta_{m_3d})} \end{align} for some $m_3>m_2$. We choose $\varepsilon$ so that $\varepsilon^q= \|\mu\|^{q/2}_{\mathcal C}$. The estimates obtained so far can be combined into the following estimate: \begin{align}\label{eq-summaryest} \|\tilde{N}^{(1+a)d}_a(\nabla u)\|^q_{L^{q}(\Delta_d)}&\le C\|S^{m_2d}_{q,a}(\nabla_T u)\|^q_{L^q(\Delta_{m_2d})}\\&\nonumber+C\|\mu\|_{\mathcal{C}}^{q/2}\|\tilde{N}^{m_3d}_{a}(\nabla u)\|^q_{L^{q}({\Delta_{m_3d}})}\\&\nonumber+ Cd^{n-1}\sup_{x\in {\mathbb R}^n_+\setminus T(\Delta_{(1+a)d})}[W_2(x)]^q. \end{align} To estimate the first term on the righthand side we use \eqref{square02aa-loc}. This gives \begin{eqnarray}\label{altsim} &&\|S^{m_2d}_{q,a}(\nabla_T u)\|^q_{L^q(\Delta_{m_2d})}\\ &\lesssim& \int_{\Delta_{m_3d}}|\nabla_T u(0,x')|^{q}\,dx' +\int_{\Delta_{m_3d}}|\nabla_T u(m_3d,x')|^{q}\,dx'\nonumber\\ &&\quad+\|\mu\|_{\mathcal{C}}\int_{\Delta_{m_3d}}\left[\tilde{N}^{m_3d}(\nabla u)\right]^{q}\,dx'+C\int_{\Delta_{m_3d}\setminus\Delta_{m_2d}}\left[\tilde{N}^{m_3d}(\nabla u)\right]^{p}\,dx'.\nonumber \end{eqnarray} Observe that if the estimate above holds for a certain $m_3>1$ it will certainly hold for any larger value, say $2m_3$. Hence we can average the estimate on the righthand side of \eqref{square02aa-loc} between $m_3$ and $2m_3$.
This turns the second term on the righthand side of \eqref{square02aa-loc} into a solid integral over a set that is contained in ${\mathbb R}^n_+\setminus T(\Delta_{(1+a)d})$ and therefore bounded by $Cd^{n-1}\sup_{x\in {\mathbb R}^n_+\setminus T(\Delta_{(1+a)d})}[W_2(x)]^q$. Hence we have for $m_4=2m_3$, thanks to \eqref{eq-summaryest}: \begin{eqnarray}\label{altsim3} &&\|\tilde{N}^{(1+a)d}(\nabla u)\|^q_{L^{q}(\Delta_d)}\lesssim \int_{\Delta_{m_4d}}|\nabla_T f(x')|^{q}\,dx' \\ &&\quad+\max\{\|\mu\|_{\mathcal{C}},\|\mu\|_{\mathcal{C}}^{q/2}\}\int_{\Delta_{m_4d}}\left[\tilde{N}^{(1+a)d}(\nabla u)\right]^{q}\,dx'\nonumber\\ &&\quad\nonumber +C\int_{\Delta_{m_4d}\setminus\Delta_{m_2d}}\left[\tilde{N}^{(1+a)d}(\nabla u)\right]^{p}\,dx'\\ &&\quad +d^{n-1}\sup_{x\in {\mathbb R}^n_+\setminus T(\Delta_{(1+a)d})}[W_2(x)]^q.\nonumber \end{eqnarray} Here we truncated $\tilde N$ on the righthand side at the height $(1+a)d$ instead of $m_4d$, since everything above this height can be incorporated into the last term. Clearly, for sufficiently small $\|\mu\|_{\mathcal{C}}$ we can hide the second term on the righthand side of \eqref{altsim3} in the lefthand side. Hence \begin{eqnarray}\label{altsim4} &&\|\tilde{N}^{(1+a)d}(\nabla u)\|^q_{L^{q}(\Delta_{d})} \lesssim \int_{\Delta_{m_4d}}|\nabla_T f(x')|^{q}\,dx' \\ &+&C\|\tilde{N}^{(1+a)d}(\nabla u)\|^q_{L^{q}(\Delta_{m_4d}\setminus\Delta_{d})} +d^{n-1}\sup_{x\in {\mathbb R}^n_+\setminus T(\Delta_{(1+a)d}))}[W_2(x)]^q.\nonumber \end{eqnarray} Clearly, the last estimate is scale invariant and so we write it instead for an enlarged ball $\Delta_{(1+a)d}$. We do this to have in the second term $\Delta_{m_5d}\setminus\Delta_{(1+a)d}$, where $m_5=(1+a)m_4$.
Since $\|\tilde{N}^{(1+a)d}(\nabla u)\|_{L^{q}(\Delta_{d})}\le \|\tilde{N}^{(1+a)d}(\nabla u)\|_{L^{q}(\Delta_{(1+a)d})}$ this gives us: \begin{eqnarray}\label{altsim4a} &&\|\tilde{N}^{(1+a)d}(\nabla u)\|^q_{L^{q}(\Delta_{d})} \lesssim \int_{\Delta_{m_5d}}|\nabla_T f(x')|^{q}\,dx' \\ &+&C\|\tilde{N}^{(1+a)d}(\nabla u)\|^q_{L^{q}(\Delta_{m_5d}\setminus\Delta_{(1+a)d})} +d^{n-1}\sup_{x\in {\mathbb R}^n_+\setminus T(\Delta_{(1+a)d}))}[W_2(x)]^q.\nonumber \end{eqnarray} We now push \eqref{altsim4a} forward to the original domain ${\mathcal O}$. We have \begin{align}\nonumber \|\tilde{N}^{(1+a)d}_{a/2} (\nabla u)\|^q_{L^{q}(\Delta_d)}&\leq C\|\nabla_T f\|^q_{L^{q}(\partial{\mathcal O\cap \overline{T(\Delta_{m_5d})}})}+Cd^{n-1}\sup_{x\in \mathcal O\cap\{\delta(x)>d\}}W_2(x)^q\\\label{Main-Estlocxx-brb} &\quad +C\|\tilde{N}^{(1+a)d}(\nabla u)\|^q_{L^{q}({\partial \mathcal O}\cap [T(\Delta_{m_5d})\setminus T(\Delta_{(1+a)d})])}. \end{align} We would like to hide the last term. Observe that all points of ${\partial \mathcal O}\cap[T(\Delta_{m_5d})\setminus T(\Delta_{(1+a)d})]$ are in the interior of the original domain ${\mathbb R}^n_+$, at distance at least $d$ from the boundary of ${\mathbb R}^n_+$. Hence, whenever we were applying Theorem \ref{T:Car} we could have in fact used \eqref{Ca-222-x} there, with $h$ being the function describing the boundary of $\mathcal O$. Since pointwise for $Q\in \partial \mathcal O\cap [T(\Delta_{m_5d})\setminus T(\Delta_{(1+a)d})]$ $$\tilde{N}_{a,h}(\nabla u)(Q)\le \sup_{x\in \mathcal O\cap\{\delta(x)>d\}}W_2(x),$$ the last term can be estimated by $Cd^{n-1}\sup_{x\in \mathcal O\cap\{\delta(x)>d\}}W_2(x)^q$ as well. Finally, we can remove the truncation of $\tilde N$ at height $(1+a)d$ on the lefthand side of \eqref{Main-Estlocxx-brb}, as for points above this height the term $Cd^{n-1}\sup_{x\in \mathcal O\cap\{\delta(x)>d\}}W_2(x)^q$ again controls the nontangential maximal function. This establishes our claim.
\end{proof} \section{Proof of Theorem \ref{S3:T0}.}\label{S5} We will establish the solvability of the Regularity problem assuming that the coefficients of $A$ and $B$ are smooth, applying the results of the previous two sections. The constants in the estimates will not depend on the degree of smoothness. Then, considering smooth approximations of ${\mathcal L}$, a limiting argument gives Theorem \ref{S3:T0} in the general case. We start with $p=2$. Assume the matrix $A$ is $2$-elliptic. It follows that Lemma \ref{S6:L1} applies. Given $K$ as in that Lemma and any $\|\mu\|_{\mathcal C}<K$, we pick $a$ such that $\|\mu\|_{\mathcal C}+a^{-1}<K$. Consider any $f\in L^2(\partial\mathbb R^n_+)\cap \dot{B}^{2,2}_{1/2}(\partial\mathbb R^n_+)$ and let $u\in\dot{W}^{1,2}(\mathbb R^n_+)$ be the unique energy solution of $\mathcal Lu=0$ with boundary datum $f$. We shall additionally assume that $f$ is a smooth compactly supported function; it suffices to establish our estimates for such functions, as they form a dense subset of $L^2(\partial\mathbb R^n_+)\cap \dot{B}^{2,2}_{1/2}(\partial\mathbb R^n_+)$. Fix $d>0$ and consider $\Delta=\Delta_d(0)$. We apply Lemma \ref{S6:L1} to the domains ${\mathcal O}_\tau={\mathcal O}_{\tau\Delta,a}$, for $\tau\in [1,2]$. This gives us \begin{equation}\label{Main-Estlocxx2} \|\tilde{N}_{a/2} (\nabla u)\|^2_{L^{2}(\Delta)}\leq C\|\nabla_T f\|^2_{L^{2}(\partial{\mathcal O_\tau\cap \overline{T(\tau m\Delta)}})}+Cd^{n-1}\sup_{x\in {\mathcal O}_\tau \cap\{\delta(x)>d\}}W_2(x)^2. \end{equation} Note that each of the sets $\partial{\mathcal O_\tau\cap \overline{T(\tau m\Delta)}}$ consists of the \lq\lq flat piece'' that is just $\tau\Delta=\Delta_{\tau d}(0)$ and the remaining curve that lies inside $\mathbb R^n_+$.
If we average the above inequality over all values of $\tau\in [1,2]$, the latter turns into a solid integral over a set that is contained in $${\mathcal S}_d:=(0,2md)\times (\Delta_{2md}\setminus \Delta_d).$$ It follows that \begin{align}\label{Main-Estlocxx2-av} \|\tilde{N}_{a/2} (\nabla u)\|^2_{L^{2}(\Delta)}\leq& C\|\nabla_T f\|^2_{L^{2}(2\Delta)}+Cd^{n-1}\sup_{\{x:\,\delta(x)>d\}}W_2(x)^2\\ &+Cd^{-1}\iint_{\mathcal S_d}|\nabla u|^2 dx.\nonumber \end{align} Consider what happens as we take $d\to\infty$ in the estimate above. Recall that $\nabla u\in L^2(\mathbb R^n_+)$, since $u$ is an energy solution. This implies that both $$\iint_{B(x,\delta(x)/2)}|\nabla u|^2\,dx\to 0,\qquad \iint_{\mathcal S_d}|\nabla u|^2 dx\to 0,$$ uniformly for all $x\in \{x:\delta(x)>d\}$ as $d\to\infty$. From this, however, we see that the last two terms of \eqref{Main-Estlocxx2-av} go to zero as $d\to\infty$, and hence in the limit we have $$\|\tilde{N}_{a/2} (\nabla u)\|^2_{L^{2}(\partial\mathbb R^n_+)}\leq C\|\nabla_T f\|^2_{L^{2}(\partial\mathbb R^n_+)},$$ which is $L^2$ solvability of the Regularity problem. Also observe that the constant $C$ in the estimate above only depends on $\lambda_2,\, \Lambda$ and $n$, precisely as stated in Theorem \ref{S3:T0}. We now extrapolate. It has been established in \cite{DHM} that, starting from Lemma \ref{S6:L1}, a purely real variable argument can be used to establish the following estimate \begin{equation}\label{Main-Estloc5} \int_{E_\nu\cap\{g\le\nu\}}\left[\tilde{N} (\nabla u)(x')\right]^2\,dx'\le C_\alpha\nu^2|E_\nu| +C\alpha^{-1}\int_{E_\nu}\left[\tilde{N} (\nabla u)(x')\right]^2\,dx', \end{equation} where $E_\nu=\{x'\in\mathbb R^{n-1}:\,\tilde{N}_\alpha (\nabla u)(x')>\nu \}$ and $$g(x')=\sup_{B\ni x'}\left(\Xint-_{B}|\nabla_Tf(y')|^2dy'\right)^{1/2}.$$ See in particular Lemma 6.1 and (6.17) of \cite{DHM}, which are completely analogous to our Lemma \ref{S6:L1} and \eqref{Main-Estloc5}, respectively.
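We also record a standard observation that explains how the auxiliary function $g$ is controlled by the boundary datum: since $g=\big(M(|\nabla_Tf|^2)\big)^{1/2}$, where $M$ denotes the Hardy--Littlewood maximal operator on ${\mathbb R}^{n-1}$, for every $s>2$ we have $$\|g\|^s_{L^s({\mathbb R}^{n-1})}=\int_{{\mathbb R}^{n-1}}\big(M(|\nabla_Tf|^2)\big)^{s/2}\,dx'\lesssim \int_{{\mathbb R}^{n-1}}|\nabla_Tf|^{s}\,dx',$$ by the boundedness of $M$ on $L^{s/2}({\mathbb R}^{n-1})$, $s/2>1$. This is what allows estimates involving $g$ to be converted into estimates in terms of $\|\nabla_T f\|_{L^{s}}$.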
A consequence of \eqref{Main-Estloc5} is the existence of $\delta>0$, depending only on the constant in the estimate \eqref{Main-Estlocxx}, such that \begin{equation}\label{goodgrief} \|\tilde{N}_{a/2} (\nabla u)\|_{L^{2+\delta}(\partial\mathbb R^n_+)}\leq C\|\nabla_T f\|_{L^{2+\delta}(\partial\mathbb R^n_+)}, \end{equation} which is the solvability of the Regularity problem for $p_0=2+\delta$. If $p_0$ is such that the matrix $A$ is $p_0$-elliptic, we can repeat the process carried out above for $p=2$. We now apply Lemma \ref{S6:L1} for the value $p_0$ and again take the limit $d\to\infty$. This time the solid integrals we obtain are $$d^{-1}\iint_{B(x,\delta(x)/2)}|\nabla u|^{p_0}\,dx,\qquad d^{-1}\iint_{\mathcal S_d}|\nabla u|^{p_0} dx,$$ which go to zero uniformly for all $x\in \{x:\delta(x)>d\}$ as $d\to\infty$, thanks to the fact that \eqref{goodgrief} implies $\|\tilde{N}_{p_0,a/2} (\nabla u)\|_{L^{p_0}}<\infty$. Hence taking the limit $d\to\infty$ in the analogue of \eqref{Main-Estlocxx2-av} for $p_0$ yields \begin{equation}\label{goodgrief2} \|\tilde{N}_{a/2} (\nabla u)\|_{L^{p_0}(\partial\mathbb R^n_+)}\leq C_{p_0}\|\nabla_T f\|_{L^{p_0}(\partial\mathbb R^n_+)}. \end{equation} This is seemingly just a restatement of \eqref{goodgrief}. The difference, however, is that now the constant $C_{p_0}$ in \eqref{goodgrief2} only depends on the constant in Lemma \ref{S6:L1} for the value $p_0$. This allows us to extrapolate again and obtain solvability of the Regularity problem for some value $p_0+\delta'$. There is no difference in the structure of the argument. We can continue this bootstrapping as long as we stay in the range of $p$-ellipticity and as long as we can be sure that we are moving by an amount $\delta'$ which is not getting smaller at each step.
This last point is assured by the fact that the constants $C_{p_0}$ in the $L^{p_0}$ norm inequalities \eqref{goodgrief2} only depend on the $p_0$-ellipticity and the Carleson measure norm of the coefficients. If we fix $p>2$ such that the operator is $p$-elliptic, the constants $C_q$ for $2\le q\le p$ in Lemma \ref{S6:L1} are uniformly bounded, which assures that our bootstrapping argument will reach the desired value $p$ in finitely many steps, giving us solvability of the Regularity problem and the estimate \eqref{Main-Est} of Theorem \ref{S3:T0}. We now deal with $p<2$ such that $A$ is $p$-elliptic. Assume first that we a priori know that $\|\tilde{N}_{2,a}(\nabla u)\|_{L^p(\mathbb R^{n-1})}<\infty$ for an energy solution $u$ in ${\mathbb R^n_+}$ with boundary datum $f$. Then by \eqref{S3:C7:E00oo} of Proposition \ref{S3:C7} and by \eqref{S3:C7:E00oo=s2} of Corollary \ref{S4:C2} we have \begin{align} \|\tilde{N}_{2,a}(\nabla u)\|_{L^p(\mathbb R^{n-1})}&\le C\|S_{2,a}(\nabla u)\|_{L^p(\mathbb R^{n-1})}\\ &\le C\|S_{2,a}(\nabla_T u)\|_{L^p(\mathbb R^{n-1})}+C\|\mu\|_{\mathcal C}^{1/2}\|\tilde{N}_{2,a}(\nabla u)\|_{L^p(\mathbb R^{n-1})}.\nonumber \end{align} Here, in order to use Corollary \ref{S4:C2}, we must verify that $\|\mathcal{T}_{a}(\nabla u)\|_{L^p(\mathbb R^{n-1})}<\infty$. However, under the assumption that the coefficients are smooth we have the pointwise bound $\mathcal{T}_{a}(\nabla u)(Q)\le S_{2,a}(u)(Q)$. We have established solvability of the $L^p$ Dirichlet problem in the paper \cite{DPcplx} in the range where $p$-ellipticity holds, and in particular we have shown the bound $\|S_{2,a}(u)\|_{L^p(\mathbb R^{n-1})}\lesssim\|f\|_{L^p(\mathbb R^{n-1})}<\infty$ (using that $f\in C_0^\infty\subset L^p$).
Hence, taking sufficiently small $K$ in Theorem \ref{S3:T0}, it follows that \begin{align} \|\tilde{N}_{2,a}(\nabla u)\|_{L^p(\mathbb R^{n-1})}&\le C\|S_{2,a}(\nabla_T u)\|_{L^p(\mathbb R^{n-1})}.\label{eq-77} \end{align} Hence \eqref{eq-qq'}, in conjunction with \eqref{S3:C7:E00oo=s} of Corollary \ref{S4:C1} and \eqref{NN}, implies that \begin{align}\label{zertex4} \|S_{2,a}(\nabla_T u)\|_{L^p(\mathbb R^{n-1})}\le C\|S_{p,a}(\nabla_T u)\|_{L^p(\mathbb R^{n-1})}+\varepsilon\|\tilde{N}_{2,a}(\nabla u)\|_{L^p(\mathbb R^{n-1})}. \end{align} When applying \eqref{S3:C7:E00oo=s} to estimate $\|S_{p',a}(\nabla_T u)\|_{L^p(\mathbb R^{n-1})}$ we need to know a priori that this quantity is finite. Here we use our assumption that, for now, the coefficients are smooth. This gives us a pointwise bound $$S_{p',a}(\nabla_T u)\le CS_{2,a}(\nabla_T u)+CN(\nabla u),$$ where $N$ is the pointwise maximal function. The classical $L^\infty$ bounds of Agmon-Douglis-Nirenberg \cite{ADN} for smooth PDE systems imply $N\lesssim N_2$. We also have $\|S_{2,a}(\nabla_T u)\|_{L^p}<\infty$ from a similar estimate $$S_{2,a}(\nabla_T u)\le CS_{p,a}(\nabla_T u)+CN(\nabla u),$$ and finally we know that $\|S_{p,a}(\nabla_T u)\|_{L^p(\mathbb R^{n-1})}<\infty$ by \eqref{square02aa} (taking $r\to\infty$). The one \lq\lq bad'' term in \eqref{square02aa}, which is $\int_{\mathbb R^{n-1}}|\nabla_T u(r,x')|^pdx'$, can be dealt with by averaging in $r$ first, which turns it into a solid integral. Such a term can be estimated by $\|\tilde{N}_{p,a}(\nabla u)\|_{L^p(\mathbb R^{n-1})}\lesssim \|\tilde{N}_{2,a}(\nabla u)\|_{L^p(\mathbb R^{n-1})}<\infty$, and furthermore it follows that this term converges to zero as $r\to\infty$. Hence all quantities appearing in \eqref{zertex4} are finite under the assumption that our coefficients are smooth, but the constants in this estimate only depend on the parameters $n,p,\lambda_p,\Lambda$.
We choose $\varepsilon>0$ in this inequality small enough so that we can hide this term on the lefthand side of \eqref{eq-77}. This gives us \begin{align} \|\tilde{N}_{2,a}(\nabla u)\|_{L^p(\mathbb R^{n-1})}&\le C\|S_{p,a}(\nabla_T u)\|_{L^p(\mathbb R^{n-1})}.\label{eq-78} \end{align} We can now use \eqref{square02aa} again for $S_{p,a}(\nabla_T u)$, taking $r\to\infty$. As explained above, the term $\int_{\mathbb R^{n-1}}|\nabla_T u(r,x')|^pdx'$ gets eliminated. It follows that \eqref{square02aa} gives us \begin{align} \|S_{p,a}(\nabla_T u)\|_{L^p(\mathbb R^{n-1})}&\le C\|\nabla_T f\|_{L^p(\mathbb R^{n-1})}+C\|\mu\|_{\mathcal C}^{1/p}\|\tilde{N}_{p,a}(\nabla u)\|_{L^p(\mathbb R^{n-1})}.\label{eq-79} \end{align} Hence, for all sufficiently small $\|\mu\|_{\mathcal C}<K$, a combination of \eqref{eq-78}, \eqref{eq-79} and \eqref{NN} yields \begin{align}\label{goodgrief3} \|\tilde{N}_{2,a}(\nabla u)\|_{L^p(\mathbb R^{n-1})}&\le C\|\nabla_T f\|_{L^p(\mathbb R^{n-1})}, \end{align} from which solvability of the $L^p$ Regularity problem follows. \vglue1mm It remains to remove the a priori assumption $\|\tilde{N}_{2,a}(\nabla u)\|_{L^p(\mathbb R^{n-1})}<\infty$ we have made earlier. We again argue by extrapolation, starting with $p=2$, where we know this since we have already established solvability of the Regularity problem for this value of $p$. This time we shall use an extrapolation argument based on a method in \cite{DKV} of obtaining $L^{2-\varepsilon}$ estimates of nontangential maximal functions from $L^2$ estimates on sawtooth domains. See also \cite{DHM}, where this technique was used to get solvability of the $L^p$ Dirichlet problem for elliptic systems for $2 - \varepsilon < p < 2$.
In particular, the argument of \cite{DKV}, reproduced in section 6 of \cite{DHM} for systems and hence valid in our setting, gives that $\|\tilde{N}_{2,a}(\nabla u)\|_{L^{p_0}({\BBR}^{n-1})}< \infty$ for $p_0=2 - \varepsilon$, and hence the same is true for $\|\tilde{N}_{p_0,a}(\nabla u)\|_{L^{p_0}({\BBR}^{n-1})}$. The quantity $\varepsilon$ depends on the constant $C_2$ in the $L^2$ norm inequality between the nontangential maximal function and the square function $S_2$. Once we know these quantities are finite, the calculation we did above holds for $p_0$, giving us \eqref{goodgrief3}, and hence the same estimate for $\nabla u$, for $p_0=2-\varepsilon$ and a constant $C_{2-\varepsilon}$. The very same extrapolation argument, now invoking the $L^{p_0}$ estimate, gives an $L^{p_0 - \varepsilon'}$ estimate, where $\varepsilon'$ now depends on $C_{2-\varepsilon}$. In other words, we apply the same argument as \cite{DKV} but starting from known estimates for the nontangential maximal function in $L^{p_0}$ instead of $L^2$. We can continue this bootstrapping as long as we stay in the range of $p$-ellipticity and as long as we can be sure that we are moving by an amount $\varepsilon$ which is not getting smaller at each step. The same argument as given previously implies that we can reach any value $p<2$ in the $p$-ellipticity range of the matrix $A$ in a finite number of steps. From this Theorem \ref{S3:T0} follows.\vglue1mm Finally, we remove the temporary assumption that the coefficients are smooth. The key is that the constants in the estimates above depend only on $n,p,\lambda_p,\Lambda,\|\mu\|_{\mathcal C}$ and not on any further degree of smoothness of the coefficients of $\mathcal L$. Hence the classical argument applies, in which we approximate our coefficients by smooth functions and then pass to the limit from the smooth coefficient case. See for example section 4 of \cite{DPcplx}, where this is discussed in more detail.
\qed\vglue1mm \section{Proof of Theorem \ref{S3:T1}.}\label{S6} The proof is based on the following abstract result \cite{Sh1}; see also \cite[Theorem 3.1]{WZ} for a version on an arbitrary bounded domain. \begin{theorem}\label{th-sh} Let $T$ be a bounded sublinear operator on $L^2({\mathbb R}^{n-1};{\mathbb C}^m)$. Suppose that for some $p>2$, $T$ satisfies the following $L^p$ localization property. For any ball $\Delta=\Delta_d\subset{\mathbb R}^{n-1}$ and $C^\infty$ function $f$ with supp$(f)\subset{\mathbb R}^{n-1}\setminus 3\Delta$ the following estimate holds: \begin{align} &\left(|\Delta|^{-1}\int_\Delta|Tf|^p\,dx'\right)^{1/p}\le\label{eq-pf8}\\ &\qquad C\left\{\left(|2\Delta|^{-1}\int_{2\Delta}|Tf|^2\,dx'\right)^{1/2}+\sup_{\Delta'\supset \Delta}\left(|\Delta'|^{-1}\int_{\Delta'}|f|^2\,dx'\right)^{1/2} \right\},\nonumber \end{align} for some $C>0$ independent of $f$. Then $T$ is bounded on $L^q({\mathbb R}^{n-1};{\mathbb C}^m)$ for any $2\le q<p$. \end{theorem} In our case the role of $T$ is played by the sublinear operator $f\mapsto \tilde{N}_{2,a}(u)$, where $u$ is the solution of the Dirichlet problem ${\mathcal L}u=0$ with boundary data $f$. Clearly, in the Theorem above the factors $2\Delta$, $3\Delta$ do not play a significant role. Hence if we establish estimate \eqref{eq-pf8} with $2\Delta$ replaced by $m\Delta$ and with $f$ vanishing on $(m+1)\Delta$ for some $m>1$, the claim of the Theorem will still hold. Clearly, our operator $T:f\mapsto \tilde{N}_{2,a}(u)$ is sublinear and bounded on $L^2$ by \cite{DPcplx}, for coefficients with small Carleson norm $\mu$. To prove \eqref{eq-pf8} we shall establish the following reverse H\"older inequality, following ideas of Shen \cite{Sh2}.
\begin{equation} \left(\frac1{|\Delta|}\int_\Delta|\tilde{N}_{2,a}(u)|^p\,dx'\right)^{1/p}\le\label{eq-pf9} C\left(\frac1{|3\beta m\Delta|}\int_{3\beta m\Delta}|\tilde{N}_{2,a}(u)|^2\,dx'\right)^{1/2}, \end{equation} where $f=u\big|_{\partial{\mathbb R}^n_+}$ vanishes on $4\beta m\Delta$. Here $m$ is determined by Lemma \ref{S6:L1} and $\beta>1$ is determined by a bootstrap argument explained later. Given this, Theorem \ref{th-sh} yields for any $q \in [2,p)$ the estimate \begin{equation}\label{eq-pf10} \|\tilde{N}_{2,a}(u)\|_{L^{q}({\BBR}^{n-1})}\le C\|f\|_{L^{q}({\BBR}^{n-1})}, \end{equation} which implies $L^q$ solvability of the Dirichlet problem for the operator $\mathcal L$. It remains to establish \eqref{eq-pf9}. Let us define \begin{align}\label{eq-pf11} &{\mathcal M}_1(u)(x')=\sup_{y\in\Gamma_a(x')}\{w_2(y):\,\delta(y)\le cd\},\\ &{\mathcal M}_2(u)(x')=\sup_{y\in\Gamma_a(x')}\{w_2(y):\,\delta(y)> cd\},\nonumber \end{align} where $c=c(a)>0$ is chosen such that for all $x'\in\Delta$, if $y=(y_0,y')\in\Gamma_a(x')$ and $y_0=\delta(y)\le cd$ then $y'\in 2\Delta$. Here $d=\mbox{diam}(\Delta)$ and $w_2$ is the $L^2$ average of $u$, $$w_2(y)=\left(\Xint-_{B_{\delta(y)/2}(y)}|u(z)|^2\,dz\right)^{1/2}.$$ It follows that $$\tilde{N}_{2,a}(u)=\max\{{\mathcal M}_1(u),{\mathcal M}_2(u)\}.$$ We first estimate ${\mathcal M}_2(u)$. Pick any $x'\in\Delta$. For any $y\in\Gamma_a(x')$ with $\delta(y)>cd$ it follows that for a large subset $A$ of $2\Delta$ (of size comparable to $2\Delta$) we have $$z'\in A\quad\Longrightarrow\quad y\in\Gamma_a(z')\quad\Longrightarrow\quad w_2(y)\le \tilde{N}_{2,a}(u)(z').$$ Hence for any $x'\in\Delta$ $${\mathcal M}_2(u)(x')\le C\left(\frac1{|2\Delta|}\int_{2\Delta} \left[\tilde{N}_{2,a}(u)(z')\right]^2\,dz'\right)^{1/2}.$$ It remains to estimate ${\mathcal M}_1(u)$ on $\Delta$. We write $$u(x_0,x')-u(0,y')=\int_{0}^{1}\frac{\partial u}{\partial s}(sx_0,(1-s)y'+sx')\,ds.$$ Let $K=\{(y_0,y'):y'\in \Delta\mbox{ and }cd<y_0<2cd\}$.
Using the identity above and the fact that $u$ vanishes on $3\Delta\subset 4\beta m\Delta$ we have for any $x'\in\Delta$ \begin{equation}\label{eq-pf12} {\mathcal M}_1(u)(x')\le \sup_{K}w_2\,+\,C\int_{2\Delta}\frac{\tilde{N}_{2,a/2}(\nabla u)(y')}{|x'-y'|^{n-2}}dy'. \end{equation} By the fractional integral estimate, this implies that \begin{equation}\label{eq-pf13} \left(\frac1{|\Delta|}\int_\Delta[{\mathcal M}_1(u)(x')]^p\,dx'\right)^{1/p}\le \sup_{K}w_2\,+\,Cd \left(\frac1{|2\Delta|}\int_{2\Delta}[\tilde{N}_{2,a/2}(\nabla u)(x')]^q\,dx'\right)^{1/q}, \end{equation} where $\frac1p=\frac1q-\frac1{n-1}$ and $1<q<n-1$.\vglue1mm To further estimate \eqref{eq-pf13} we use Lemma \ref{S6:L1}. We claim that the following reverse H\"older inequality holds: $$\left(\frac1{|\Delta|}\int_{\Delta}[\tilde{N}_{2,a/2}(\nabla u)(x')]^q\,dx'\right)^{1/q}\lesssim \left(\frac1{|\beta \Delta|}\int_{\beta \Delta}[\tilde{N}_{2,a/2}(\nabla u)(x')]^2\,dx'\right)^{1/2},$$ whenever the solution of $\mathcal Lu=0$ vanishes on at least $2\beta \Delta$. Let $d$ be the diameter of $\Delta$. We apply Lemma \ref{S6:L1} to the domains (\ref{Odom}) ${\mathcal O}_\tau={\mathcal O}_{\tau\Delta,a}$, for $\tau\in [1,2]$. This gives us \begin{equation}\label{Main-Estlocxx2x} \|\tilde{N}_{a/2} (\nabla u)\|_{L^{q}(\Delta)}\leq C\|\nabla_T f\|_{L^{q}(\partial{\mathcal O_\tau\cap \overline{T(\tau m\Delta)}})}+Cd^{(n-1)/q}\sup_{x\in {\mathcal O}_\tau \cap\{\delta(x)>d\}}W_2(x).
\end{equation} Observe that for any $x\in {\mathcal O}_\tau \cap\{\delta(x)>d\}$ we shall have $$|A|=|\{y'\in 2\Delta:x\in\Gamma_{a/2}(y')\}|\approx d^{n-1},$$ and clearly for each $y'\in A$ we have $W_2(x)\lesssim \tilde{N}_{a/2} (\nabla u)(y')$, from which $$W_2(x)\lesssim \left(|A|^{-1}\int_A[\tilde{N}_{a/2} (\nabla u)(y')]^2\,dy'\right)^{1/2}\lesssim \left(|2\Delta|^{-1}\int_{2\Delta}[\tilde{N}_{a/2} (\nabla u)(y')]^2\,dy'\right)^{1/2}.$$ It follows \begin{equation}\label{eq-add1} \sup_{x\in \mathcal O_\tau \cap\{\delta(x)>d\}}W_2(x)\lesssim \left(|2\Delta|^{-1}\int_{2\Delta}[\tilde{N}_{a/2} (\nabla u)(y')]^2\,dy'\right)^{1/2}. \end{equation} We use this in \eqref{Main-Estlocxx2x}, integrate \eqref{Main-Estlocxx2x} in $\tau$ over the interval $[1,2]$ and divide by $d^{(n-1)/q}$. Using the fact that $u$ vanishes on at least $4m\Delta$, this gives: \begin{align}\label{eq-pf17} &\left(\frac1{|\Delta|}\int_{\Delta}[\tilde{N}_{2,a/2}(\nabla u)(x')]^q\,dx'\right)^{1/q}\\ \lesssim &\quad \left(\frac1{T(2m\Delta)}\iint_{T(2m\Delta)}|\nabla u(x)|^q\,dx\right)^{1/q} +\left(\frac1{|2\Delta|}\int_{2\Delta} \left[\tilde{N}_{2,a}(\nabla u)(x')\right]^2\,dx'\right)^{1/2}.\nonumber \end{align} We have also used the trivial estimate $|\nabla_Tu|\le|\nabla u|$ on $\partial {\mathcal O}_\tau\cap{T(2m\Delta)}$. For the first term we have \begin{align}\label{eq-add2} \iint_{T(2m\Delta)}|\nabla u(x)|^q\,dx&=\iint_{T(2m\Delta)\cap\{x_0<\varepsilon md\}}|\nabla u(x)|^q\,dx\\\nonumber &+\iint_{T(2m\Delta)\cap\{x_0> \varepsilon md\}}|\nabla u(x)|^q\,dx. \end{align} The set $T(2m\Delta)\cap\{x_0> \varepsilon md\}$ lies in the interior of ${\mathbb R}^n_+$ and has diameter, and distance to the boundary, comparable to $d$. It follows that the interior estimate \eqref{RHthm1} can be used (we only enlarge this set by a small factor $\alpha>1$ so that $\alpha[T(2m\Delta)\cap\{x_0> \varepsilon md\}]$ fully lies in the interior of ${\mathbb R}^n_+$).
It follows \begin{align}\label{eq-add3} &\quad\frac1{|T(2m\Delta)|}\iint_{T(2m\Delta)\cap\{x_0> \varepsilon md\}}|\nabla u(x)|^q\,dx \\&\lesssim\left(\frac1{|T(2m\Delta)|}\iint_{\alpha[T(2m\Delta)\cap\{x_0> \varepsilon md\}]}|\nabla u(x)|^2\,dx\right)^{q/2}\nonumber\\\nonumber &\lesssim \left(\frac1{|T(3m\Delta)|}\iint_{T(3m\Delta)}|\nabla u(x)|^2\,dx\right)^{q/2}\\\nonumber &\lesssim \left(\frac1{|3m\Delta|}\int_{3m\Delta}[\tilde{N}_{a/2} (\nabla u)(x')]^2dx'\right)^{q/2}. \end{align} For the term $\iint_{T(2m\Delta)\cap\{x_0<\varepsilon md\}}|\nabla u(x)|^q\,dx$ we use the trivial estimate \begin{align}\label{eq-add4} \iint_{T(2m\Delta)\cap\{x_0<\varepsilon md\}}|\nabla u(x)|^q\,dx\le \varepsilon md \int_{3m\Delta}[\tilde{N}_{a/2} (\nabla u)(x')]^qdx'. \end{align} Combining \eqref{eq-add2}-\eqref{eq-add4} finally yields \begin{align}\label{eq-add5} &\quad\frac1{|T(2m\Delta)|}\iint_{T(2m\Delta)}|\nabla u(x)|^q\,dx\\ &\le \left(\frac{C_\varepsilon}{|3m\Delta|}\int_{3m\Delta}[\tilde{N}_{a/2} (\nabla u)(x')]^2dx'\right)^{q/2} +\frac\varepsilon{|3m\Delta|}\int_{3m\Delta}[\tilde{N}_{a/2} (\nabla u)(x')]^qdx'.\nonumber \end{align} This combined with \eqref{eq-pf17} yields: \begin{align}\label{eq-add6} &\frac1{|\Delta|}\int_{\Delta}[\tilde{N}_{2,a/2}(\nabla u)(x')]^q\,dx'\\ \lesssim &\quad \left(\frac{C_\varepsilon}{|3m\Delta|}\int_{3m\Delta}[\tilde{N}_{a/2} (\nabla u)(x')]^2dx'\right)^{q/2} +\frac\varepsilon{|3m\Delta|}\int_{3m\Delta}[\tilde{N}_{a/2} (\nabla u)(x')]^qdx'.\nonumber \end{align} We now recall an abstract result from \cite[Chapter 5; Proposition 1.1]{Gi}. \begin{theorem}\label{thm-gi} Let $B_R$ be a ball in ${\mathbb R}^N$. Suppose that $g\ge 0$, $g\in L^q(B_R)$ for some $q>1$ and for all $x\in B_{R/2}$ and $0<r<R/16$ we have $$\Xint-_{B_r} g^q\,dx \le C\left(\Xint-_{B_{2r}} g\,dx\right)^q +\theta \Xint-_{B_{2r}}g^q\, dx,$$ for some constants $C>1$, $\theta<1$. 
Then there exists $\delta=\delta(C,\theta,N,q)>0$ and $K=K(C,\theta,N,q)>0$ such that for all $B_r$ concentric with $B_R$ of radius $0<r<R/4$ we have $$\left(\Xint-_{B_{r/2}} g^{q+\delta}\,dx\right)^{1/(q+\delta)} \le K\left(\Xint-_{B_{r}} g^q\,dx\right)^{1/q} .$$ \end{theorem} Applying this to \eqref{eq-add6} with $g(x')=[\tilde{N}_{a/2} (\nabla u)(x')]^2$ yields that for some $\alpha>1$ we have \begin{align}\label{eq-add7} &\left(\frac1{|\Delta|}\int_{\Delta}[\tilde{N}_{2,a/2}(\nabla u)(x')]^{q+\delta}\,dx'\right)^{1/(q+\delta)} \lesssim \left(\frac{1}{|\alpha\Delta|}\int_{\alpha\Delta}[\tilde{N}_{a/2} (\nabla u)(x')]^q dx'\right)^{1/q}. \end{align} Clearly, $\delta=\delta(q)$ depends on $q$, but as long as the constant $C_\varepsilon$ in the estimate \eqref{eq-add5} stays uniform (which is the case for $q\in [p_0+\eta,p_0'-\eta]$ for any $\eta>0$, where $(p_0,p_0')$ is the interval on which we have $p$-ellipticity) we shall have $$\inf_{q\in [2+\eta,p_0'-\eta]}\delta(q)>0,\qquad\mbox{for all }\eta>0.$$ Here we are avoiding $q$ near $2$ as well, since there \eqref{eq-add5} provides no information. However, to get us started in the bootstrap argument we may use the inequality $$\left(\frac1{T(2m\Delta)}\iint_{T(2m\Delta)}|\nabla u|^{2+\delta_0}\,dx\right)^{1/(2+\delta_0)}\lesssim \left(\frac1{T(3m\Delta)}\iint_{T(3m\Delta)}|\nabla u|^2\,dx\right)^{1/2},$$ for some small $\delta_0>0$, which is a well-known consequence of Caccioppoli's inequality and Theorem \ref{thm-gi}. Using \eqref{eq-pf17} it follows that \begin{align}\nonumber &\left(\frac1{|\Delta|}\int_{\Delta}[\tilde{N}_{2,a/2}(\nabla u)(x')]^{2+\delta_0}\,dx'\right)^{1/(2+\delta_0)}\lesssim \left(\frac1{|3m\Delta|}\int_{3m\Delta} \left[\tilde{N}_{2,a}(\nabla u)(x')\right]^2\,dx'\right)^{1/2}. \end{align} This is the initial inequality in the bootstrap argument, after which we iteratively use \eqref{eq-add7}, where $\delta>0$ stays bounded away from zero as long as we take $q\le p_0'-\eta$ for some small fixed $\eta>0$.
This finally implies that for all $q<p_0'$ we have \begin{align}\label{eq-add8} &\left(\frac1{|\Delta|}\int_{\Delta}[\tilde{N}_{2,a/2}(\nabla u)(x')]^{q}\,dx'\right)^{1/q}\lesssim \left(\frac1{|\beta\Delta|}\int_{\beta \Delta} \left[\tilde{N}_{2,a}(\nabla u)(x')\right]^2\,dx'\right)^{1/2}, \end{align} for some $\beta>1$ with $u$ vanishing on $2\beta\Delta$. The implied constant in the estimate \eqref{eq-add8} gets progressively worse as $q\to p_0'-$. Next, we use \eqref{eq-pf17} again, this time with $q=2$: \begin{align}\label{eq-add9} &\left(\frac1{|\beta\Delta|}\int_{\beta\Delta}[\tilde{N}_{2,a/2}(\nabla u)(x')]^2\,dx'\right)^{1/2}\\ \lesssim &\quad \left(\frac1{T(2\beta m\Delta)}\iint_{T(2\beta m\Delta)}|\nabla u|^2\,dx\right)^{1/2} +\sup_{x\in \mathcal O_{2\beta}\cap\{\delta(x)>d\}}W_2(x),\nonumber \end{align} where we have retained $W_2$ instead of applying our initial estimate \eqref{eq-add1}. For the first term we use the boundary Caccioppoli inequality \begin{align}\nonumber \left(\frac1{T(2\beta m\Delta)}\iint_{T(2\beta m\Delta)}|\nabla u|^2\,dx\right)^{1/2}&\lesssim d^{-1}\left(\frac1{T(3\beta m\Delta)}\iint_{T(3\beta m\Delta)}|u|^2\,dx\right)^{1/2}\\\nonumber &\lesssim d^{-1}\left(\frac1{|3\beta m\Delta|}\int_{3\beta m\Delta} \left[\tilde{N}_{2,a}(u)(z')\right]^2\,dz'\right)^{1/2},\nonumber \end{align} while for the second term, by the interior Caccioppoli inequality, we have for all $x\in{\mathbb R}^n_+$ with $\delta(x)>d$ $$W_2(x)\le Cd^{-1}w_2(x),$$ where $w_2$ denotes the $L^2$ average of $u$ (defined earlier). We have intentionally shrunk the size of the ball in the definition of $W_2$ so that this pointwise estimate holds. Since the $x$ we consider in the supremum is in $\mathcal O_{2\beta}$ it then follows \begin{equation}\label{eq-pf16} \sup_{x\in {\mathcal O_{2\beta}}\cap\{\delta(x)>d\}}W_2(x)\lesssim d^{-1}\left(\frac1{|2\beta\Delta|}\int_{2\beta\Delta} \left[\tilde{N}_{2,a}(u)(z')\right]^2\,dz'\right)^{1/2}.
\end{equation} This and the previous estimates \eqref{eq-add8}-\eqref{eq-add9} then yield, for all $q<p_0'$, \begin{align}\label{eq-add10} &\left(\frac1{|\Delta|}\int_{\Delta}[\tilde{N}_{2,a/2}(\nabla u)(x')]^{q}\,dx'\right)^{1/q}\lesssim d^{-1}\left(\frac1{|3\beta m\Delta|}\int_{3\beta m\Delta} \left[\tilde{N}_{2,a}(u)(z')\right]^2\,dz'\right)^{1/2}. \end{align} Finally, inserting this estimate into \eqref{eq-pf13} yields \begin{equation}\label{eq-pf19} \left(\frac1{|\Delta|}\int_\Delta[{\mathcal M}_1(u)(x')]^p\,dx'\right)^{1/p}\le C\left(\frac1{|3\beta m\Delta|}\int_{3\beta m\Delta} \left[\tilde{N}_{2,a}(u)(z')\right]^2\,dz'\right)^{1/2}, \end{equation} where $\frac1p=\frac1q-\frac1{n-1}$ and $1<q<n-1$ is such that $A$ is $q$-elliptic and the Carleson norm of $\mu$ is small. Since we have assumed $A$ is $q$-elliptic for $q\in (p_0,p_0')$ and $p_0'>2$, this implies in dimensions $2$ and $3$ that we can consider any $2<p<\infty$, while in dimensions $n\ge 4$ we can have $2<p<p_{\max}=p_0'(n-1)/(n-1-p_0')$ when $p_0'<n-1$, and $p_{\max}=\infty$ otherwise. Observe that always $p_{\max}>2(n-1)/(n-3)$. From this, the claim of Theorem \ref{S3:T1} follows, as we have established \eqref{eq-pf9} for such values of $p$. \vglue1mm \begin{bibdiv} \begin{biblist} \bib{ADN}{article}{ author={Agmon, S.}, author={Douglis, A.}, author={Nirenberg, L.}, title={Estimates near the boundary for solutions of elliptic partial differential equations satisfying general boundary conditions II}, journal={Comm. Pure Appl. Math.}, volume={17}, date={1964}, pages={35-92}, } \bib{AAAHK}{article}{ author={Alfonseca, M.}, author={Auscher, P.}, author={Axelsson, A.}, author={Hofmann, S.}, author={Kim, S.}, title={Analyticity of layer potentials and $L^2$ solvability of boundary value problems for divergence form elliptic equations with complex $L^\infty$ coefficients}, journal={Adv.
Math}, volume={226}, date={2011}, number={5}, pages={4533--4606}, } \bib{AAH}{article}{ author={Auscher, P.}, author={Axelsson, A.}, author={Hofmann, S.}, title={Functional calculus of Dirac operators and complex perturbations of Neumann and Dirichlet problems}, journal={J. Funct. Anal.}, volume={255}, date={2008}, number={2}, pages={374--448}, } \bib{AAM}{article}{ author={Auscher, P.}, author={Axelsson, A.}, author={McIntosh, A.}, title={Solvability of elliptic systems with square integrable boundary data}, journal={Ark. Mat.}, volume={48}, date={2010}, number={2}, pages={253--287}, } \bib{ABBO}{article}{ author={Auscher, P.}, author={Barth\'elemy, L.}, author={Ouhabaz, E.}, title={Absence de la $L^\infty$-contractivit\'e pour les semi-groupes associ\'es aux op\'erateurs elliptiques complexes sous forme divergence}, journal={Potential Anal.}, volume={12}, date={2000}, pages={169--189}, } \bib{AHLMT}{article}{ author={Auscher, P.}, author={Hofmann, S.}, author={Lacey, M.}, author={McIntosh, A.}, author={Tchamitchian, P.}, title={The solution of the Kato square root problem for second order elliptic operators on ${\mathbb R}^n$}, journal={Ann. of Math.}, volume={156}, date={2002}, number={2}, pages={633--654}, } \bib{CD}{article}{ author={Carbonaro, A.}, author={Dragi\v{c}evi\'c, O.}, title={Convexity of power functions and bilinear embedding for divergence-form operators with complex coefficients}, journal={arXiv:1611.00653}, } \bib{CM}{article}{ author={Cialdea, A.}, author={Maz'ya, V.}, title={Criterion for the $L^p$-dissipativity of second order differential operators with complex coefficients}, journal={J. Math. Pures Appl.}, volume={84}, date={2005}, number={9}, pages={1067--1100}, } \bib{CM1}{article}{ author={Cialdea, A.}, author={Maz'ya, V.}, title={Criteria for the $L^p$-dissipativity of systems of second order differential equations}, journal={Ricerche
Mat.}, volume={55}, date={2006}, number={2}, pages={233--265}, } \bib{CM3}{article}{ author={Cialdea, A.}, author={Maz'ya, V.}, title={$L^p$-dissipativity of the Lam\'e operator}, journal={Mem. Differ. Equ. Math. Phys.}, volume={60}, date={2013}, pages={111--133}, } \bib{CMS}{article}{ author={Coifman, R.}, author={Meyer, Y.}, author={Stein, E.}, title={Some new function spaces and their applications to harmonic analysis}, journal={J. Funct. Anal.}, volume={62}, date={1985}, pages={304--335}, } \bib{DK}{article}{ author={Dahlberg, B.}, author={Kenig, C.}, title={Hardy spaces and the Neumann problem in $L^p$ for Laplace's equation in Lipschitz domains}, journal={Ann. of Math.}, volume={125}, date={1987}, number={3}, pages={437--465}, } \bib{DKV}{article}{ author={Dahlberg, B.}, author={Kenig, C.}, author={Verchota, G.}, title={The Dirichlet problem for the biharmonic equation in Lipschitz domains}, journal={Ann. Inst. Fourier (Grenoble)}, volume={36}, date={1986}, number={3}, pages={109--135}, } \bib{DFM}{article}{ author={David, G.}, author={Feneuil, J.}, author={Mayboroda, S.}, title={Harmonic measure on sets of codimension larger than one}, journal={preprint, https://arxiv.org/abs/1608.01395}, } \bib{DFM2}{article}{ author={David, G.}, author={Feneuil, J.}, author={Mayboroda, S.}, title={Dahlberg's theorem in higher codimension}, journal={preprint, https://arxiv.org/abs/1704.00667v1}, } \bib{DH}{article}{ author={Dindo{\v{s}}, M.}, author={Hwang, S.}, title={The Dirichlet boundary problem for second order parabolic operators satisfying Carleson condition}, journal={Rev. Mat.
Iberoam.}, volume={34}, number={2}, pages={767--810}, date={2018}, } \bib{DHM}{article}{ author={Dindo{\v{s}}, M.}, author={Hwang, S.}, author={Mitrea, M.}, title={The $L^p$ Dirichlet boundary problem for second order elliptic systems with rough coefficients}, journal={arXiv:1708.02289}, } \bib{DPcplx}{article}{ author={Dindo\v{s}, M.}, author={Pipher, J.}, title={Regularity theory for solutions to second order elliptic operators with complex coefficients and the $L^p$ Dirichlet problem}, journal={arXiv:1612.01568}, } \bib{DPP}{article}{ author={Dindo\v{s}, M.}, author={Petermichl, S.}, author={Pipher, J.}, title={The $L^p$ Dirichlet problem for second order elliptic operators and a $p$-adapted square function}, journal={J. Funct. Anal.}, volume={249}, date={2007}, number={2}, pages={372--392}, } \bib{DPR}{article}{ author={Dindo\v{s}, M.}, author={Pipher, J.}, author={Rule, D.}, title={The boundary value problems for second order elliptic operators satisfying a Carleson condition}, journal={Comm. Pure Appl. Math.}, volume={70}, number={7}, pages={1316--1365}, year={2017}, } \bib{Lan}{article}{ author={Langer, M.}, title={$L^p$-contractivity of semigroups generated by parabolic matrix differential operators}, journal={The Maz\'ya Anniversary Collection, On Maz\'ya's work in functional analysis, partial differential equations and applications, Birkh\"auser}, volume={1}, date={1999}, number={3}, pages={307--330}, } \bib{FSt}{article}{ author={Fefferman, C.}, author={Stein, E.}, title={$H^p$ spaces of several variables}, journal={Acta Math.}, volume={129}, date={1972}, pages={137--193}, } \bib{Gi}{book}{ author={Giaquinta, M.}, title={Multiple Integrals in the Calculus of Variations and Nonlinear Elliptic Systems}, series={Annals of Math. Studies}, volume={105}, publisher={Princeton Univ.
Press}, year={1983}, } \bib{HKMPreg}{article}{ author={Hofmann, S.}, author={Kenig, C.}, author={Mayboroda, S.}, author={Pipher, J.}, title={The regularity problem for second order elliptic operators with complex-valued bounded measurable coefficients}, journal={Math. Ann.}, volume={361}, date={2015}, issue={3--4}, pages={863--907}, } \bib{HM}{article}{ author={Hofmann, S.}, author={Martell, J.}, title={$L^p$ bounds for Riesz transforms and square roots associated to second order elliptic operators}, journal={Publ. Mat.}, volume={47}, date={2003}, pages={497--515}, } \bib{HMTo}{article}{ author={Hofmann, S.}, author={Martell, J.}, author={Toro, T.}, title={$A_\infty$ implies NTA for a class of variable coefficient elliptic operators}, journal={preprint, https://arxiv.org/abs/1611.09561}, } \bib{KKPT}{article}{ author={Kenig, C.}, author={Koch, H.}, author={Pipher, J.}, author={Toro, T.}, title={A new approach to absolute continuity of elliptic measure, with applications to non-symmetric equations}, journal={Adv. Math.}, volume={153}, date={2000}, number={2}, pages={231--298}, } \bib{KP2}{article}{ author={Kenig, C.}, author={Pipher, J.}, title={The Neumann problem for elliptic equations with nonsmooth coefficients}, journal={Invent. Math.}, volume={113}, date={1993}, number={3}, pages={447--509}, } \bib{KP01}{article}{ author={Kenig, C.}, author={Pipher, J.}, title={The Dirichlet problem for elliptic equations with drift terms}, journal={Publ. Math.}, volume={45}, date={2001}, number={1}, pages={199--217}, } \bib{May}{article}{ author={Mayboroda, S.}, title={The connections between Dirichlet, regularity and Neumann problems for second order elliptic operators with complex bounded measurable coefficients}, journal={Adv. Math.}, volume={225}, date={2010}, number={4}, pages={1786--1819}, } \bib{Sh1}{article}{ author={Shen, Z.}, title={Bounds of Riesz transforms on $L^p$ spaces for second order elliptic operators}, journal={Ann. Inst.
Fourier (Grenoble)}, volume={55}, year={2005}, issue={1}, pages={173--197}, } \bib{Sh2}{article}{ author={Shen, Z.}, title={The $L^p$ Dirichlet problem for elliptic systems on Lipschitz domains}, journal={Math. Res. Lett.}, volume={13}, year={2006}, issue={1}, pages={143--159}, } \bib{Tay}{book}{ author={Taylor, M. E.}, title={Partial Differential Equations I: Basic Theory}, publisher={Springer}, date={2010}, } \bib{WZ}{article}{ author={Wei, W.}, author={Zhang, Z.}, title={$L^p$ resolvent estimates for variable coefficient elliptic systems on Lipschitz domains}, journal={Anal. Appl. (Singap.)}, volume={13}, year={2015}, issue={6}, pages={591--609}, } \end{biblist} \end{bibdiv} \end{document}
\section{Introduction} The liquid-ordered ($l_o$) phase of membranes in the presence of cholesterol was brought to the attention of the life science community in 1997 when Simons and Ikonen \cite{Simons:1997} proposed the existence of so-called rafts in biological membranes. Rafts were thought to be small, molecularly organized units, providing local structure in fluid biological membranes and hence furnishing platforms for specific biological functions \cite{Simons:1997,Simons:2000,Engelmann:2005,Niemela:2007,Pike:2009,Lingwood:2010,Eggeling:2009,Goot:2001,Lenne:2009,Simons:2010}. These rafts were supposed to be enriched in cholesterol making them more ordered, thicker and, thus, appropriate anchoring places for certain acylated and hydrophobically-matched integral membrane proteins. The high levels of cholesterol in these rafts led to the proposal that rafts are local manifestations of the $l_o$ phase, although in most cases the nature of the lipid ordering and the phase state were not established in cells, nor in most model membrane studies~\cite{Mouritsen:2010,Simons:2010,Rheinstadter:2013}. Rafts are generally interpreted as some kind of super-particles floating around in an otherwise structureless liquid membrane. However, early work in the physical chemistry of lipid bilayers pointed to the possibility of dynamic heterogeneity \cite{Dibble:1996,Mouritsen:1993,Mouritsen:1994,Mouritsen:1997} in thermodynamic one-phase regions of binary systems. The sources of dynamic heterogeneity are cooperative molecular interactions and thermal fluctuations that lead to density and compositional fluctuations in space and time. A number of ternary phase diagrams have been determined for systems involving cholesterol and two different lipid species. 
Usually these systems contain a lipid with a high melting point, such as a long-chain saturated phospholipid or a sphingolipid, and a lipid with a low melting point, such as an unsaturated phospholipid \cite{Marsh:2010}, resulting in the observation of micrometer-sized, thermodynamically stable domains \cite{Niemela:2007,Risselada:2008,Berkowitz:2009,Bennett:2013,Heberle:2013}. Much less work has been done on binary cholesterol/lipid mixtures, which, although seemingly simpler, have proven to be more difficult to study. Evidence for a heterogeneous structure of the $l_o$ phase, similar to a microemulsion, with ordered lipid nanodomains in equilibrium with a disordered membrane, has recently come from both theory and experiment: the computational work by Meinhardt, Vink and Schmid~\cite{Meinhardt:2013} and by Sodt {\em et al.}~\cite{Sodt:2014}, and the experimental neutron diffraction studies by Armstrong {\em et al.}~\cite{ArmstrongEBJ:2012,Armstrong:2013,Armstrong:2014}, all conducted on binary DPPC/cholesterol and DMPC/cholesterol systems.
Three structures were observed: (1) A fluid-like structure with strongly bound pairs of cholesterol molecules as manifestation of the liquid-disordered ($l_d$) phase; (2) A highly ordered lipid/cholesterol phase where the lipid/cholesterol complexes condense in a monoclinic structure, in accordance with the umbrella model; and (3) triclinic cholesterol plaques, i.e., cholesterol bilayers coexisting with the lamellar lipid membranes. The simulations use a simple coarse-grained lipid model \cite{SchmidDLW:2007} which reproduces the main phases of DPPC bilayers including the nanostructured ripple phase $P_{\beta'}$ \cite{SchmidDLW:2007} and has similar elastic properties in the fluid phase \cite{West:2009}. In this model, lipids are represented by short linear chains of beads, with a `head bead' and several `tail beads' (Fig.~\ref{fig:simulationsnapshots}~a)), which are surrounded by a structureless solvent. The model was recently extended to binary lipid/cholesterol mixtures. The cholesterol molecules are modelled shorter and stiffer than DPPC, and they have an affinity to phospholipid molecules, reflecting the observation that sterols in bilayers tend to be solubilized by lipids \cite{Lindblom:2009}. In our previous work, we have reported on the behavior of mixed bilayers with small cholesterol content~\cite{Meinhardt:2013}. Locally, phase{-}separation was observed between a $l_o$ and a $l_d$ phase. On large scales, however, the system assumes a two-dimensional microemulsion-type state, where nanometer-sized cholesterol-rich domains are embedded in a $l_d$ environment. These domains are stabilized by a coupling between monolayer curvature and local ordering~\cite{Meinhardt:2013}, suggesting that raft formation is closely related to the formation of ripples in one-component membranes. In the following, we will discuss the behavior of our model membranes at larger cholesterol concentrations and discuss the implications for experiments. 
\begin{figure} \centering \includegraphics[width=.87\columnwidth,angle=0]{fig1.png} \caption[]{a) Schematic representation of DPPC and cholesterol molecules used in the simulations. b) Snapshot of the simulation at $\mu$=8.5~$k_BT$, resulting in a cholesterol concentration of $\approx$17~mol\%. c) Snapshot of the simulation at $\mu$=7.8~$k_BT$ resulting in a cholesterol concentration of $\approx$60~mol\%.} \label{fig:simulationsnapshots} \end{figure} The simulations were done at constant pressure, constant temperature, and constant zero surface tension in a semi-grandcanonical ensemble where lipids and cholesterol molecules can switch their identity. The cholesterol content is thus driven by a chemical potential parameter $\mu$. Simulation results are given in units of $\sigma\approx 6$~\AA\ \cite{West:2009} and the thermal energy $k_B T$. Typical equilibrated simulation snapshots (side view and top view) are shown in Figs.~\ref{fig:simulationsnapshots}~b) and c). At low cholesterol concentration ($\mu = 8.5~k_BT$), one observes small rafts as discussed earlier. At higher cholesterol concentration (lower $\mu$), the cholesterol-rich rafts grow and gradually fill up the system, but they still remain separated by narrow cholesterol-poor `trenches'. The side view shows that these trenches have the structure of line defects where opposing monolayers are connected. Such line defects are also structural elements of the ripple phase in one-component bilayers \cite{Vries:2005,Lenz:2007}. \begin{figure} \centering \includegraphics[width=1.0\columnwidth,angle=0]{fig2.png} \caption[]{a) Total cholesterol concentration and cholesterol concentration inside rafts for different chemical potential $\mu$. Inset shows a histogram of local cholesterol densities, taken using squares of area $25 \sigma^2 \approx 9~\mbox{nm}^2$. b) Radially averaged two-dimensional lateral structure factor of cholesterol head groups for different $\mu$ as indicated. 
The level of molecular order increases with decreasing $\mu$, i.e., increasing cholesterol concentration.} \label{fig:simulationresults} \end{figure} With increasing cholesterol concentration, the structure of the rafts changes qualitatively. This is demonstrated in Fig.~\ref{fig:simulationresults}~a), which shows that the cholesterol concentration inside rafts remains constant (around 25\%) for a range of chemical potentials $\mu > 8.5~k_B T$, but then increases rapidly at $\mu \le 8~k_B T$. Along with this concentration increase, the peaks in the lateral structure factor of cholesterol head groups in Fig.~\ref{fig:simulationresults}~b) become more pronounced, indicating a substantial increase in molecular order. We should note that the coarse-grained model used in the simulations is not suitable for studying details of the molecular arrangement inside the ordered structures. However, one can analyze the transition between states with high and low $\mu$ by analyzing the distribution of local cholesterol densities (Fig.~\ref{fig:simulationresults}~a), inset). At high $\mu$, the histogram has a maximum at cholesterol density $c$ close to zero and decays for higher $c$ with a broad tail that reflects the contribution of the rafts. At low $\mu$, it exhibits a marked maximum at $c \approx 1 \sigma^{-2}$, corresponding to bilayer regions consisting purely of cholesterol. In the intermediate regime, corresponding to the situation shown in Fig.~\ref{fig:simulationsnapshots}~c), the histogram of cholesterol densities features two broad peaks around $c {\approx} 0.4 \sigma^{-2}$ and $c{\approx} 0.7 \sigma^{-2}$. In this regime, almost pure cholesterol plaques coexist with regions having cholesterol compositions that are close to those of rafts in cholesterol-poor membranes (high $\mu$ limit in Fig.~\ref{fig:simulationresults}~a)). The experimental observation of the $l_o$ phase in a cholesterol lipid binary mixture was initially reported by Vist and Davis \cite{Vist:1990}. 
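As an aside on the simulation method: the semi-grandcanonical sampling described above, in which lipids and cholesterol molecules switch identity under the chemical potential parameter $\mu$, can be illustrated by a minimal Metropolis sketch. The code below is not the coarse-grained force field of the actual model; it uses hypothetical non-interacting sites with $\Delta E=0$, for which the stationary cholesterol fraction is simply a logistic function of $\mu/k_BT$.

```python
import math
import random

def swap_probability(delta_E, delta_mu, kT=1.0):
    """Metropolis acceptance probability for a lipid <-> cholesterol
    identity switch: the semi-grandcanonical weight is
    exp(-(delta_E - delta_mu)/kT), capped at one."""
    return min(1.0, math.exp(-(delta_E - delta_mu) / kT))

def sample_composition(mu, n_sites=10000, n_sweeps=60, seed=1):
    """Toy demo on non-interacting sites (delta_E = 0): identity
    switches alone drive the cholesterol fraction towards the
    stationary value 1/(1 + exp(-mu/kT))."""
    rng = random.Random(seed)
    is_chol = [False] * n_sites
    for _ in range(n_sweeps):
        for i in range(n_sites):
            # switching to cholesterol gains mu, switching back loses it
            dmu = -mu if is_chol[i] else mu
            if rng.random() < swap_probability(0.0, dmu):
                is_chol[i] = not is_chol[i]
    return sum(is_chol) / n_sites
```

In the actual simulations $\Delta E$ contains the bead--bead interactions of the model, which is why the measured composition curve in Fig.~\ref{fig:simulationresults}~a) is far from this non-interacting limit.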
The quantitative determination of binary lipid-cholesterol phase diagrams has remained elusive. In phospholipid membranes, most studies report the $l_o$ phase at cholesterol concentrations of more than 30~mol\% \cite{Marsh:2010}. The formation of cholesterol plaques, phase-separated cholesterol bilayers coexisting with the membrane, was reported to occur at $\approx$37.5~mol\% cholesterol in model lipid membranes \cite{Barrett:2013}. That leaves a relatively small range of cholesterol concentrations in the experiment (between about 30 and 37.5~mol\%) where the $l_o$ phase can be studied. Phase separation in experiments may, moreover, be driven by boundary conditions that are not present in computer simulations. The simulations in Fig.~\ref{fig:simulationresults} can, therefore, access a much larger range of cholesterol concentrations; by studying concentrations slightly below and above the experimentally accessible range, the corresponding structures can be emphasized in the computer model. We used neutron diffraction to measure the lateral cholesterol structure in DPPC bilayers containing 32.5~mol\% cholesterol at $T$=50$^{\circ}$C and a D$_2$O relative humidity of $\approx$100\%, ensuring full hydration of the membranes. Deuterium-labeled cholesterol (d7) was used such that the experiment was sensitive to the arrangements of the cholesterol molecules. Schematics of the two molecules are shown in Fig.~\ref{Fig:NeutronInPlane}~a). \begin{SCfigure*} \centering \includegraphics[width=0.7\textwidth,angle=0]{fig3.png} \caption{a) Schematics of DPPC and (deuterated) cholesterol molecules.
b) Sketch of the scattering geometry. $q_{||}$ denotes the in-plane component of the scattering vector. c) Diffraction measured at $\lambda$=2.37~\AA\ showing broad, fluid-like peaks. d) Data measured at $\lambda$=1.44 and 1.48~\AA. Several pronounced Bragg peaks are observed in addition to the broad peaks in c). e) Cartoon of the different molecular structures: Pairs of cholesterol molecules in the liquid-disordered regions of the membrane in equilibrium with highly ordered cholesterol structures such as the umbrella structure f) and cholesterol plaques g). An aluminum Bragg peak due to the windows of the humidity chamber and the sample holder is present at $q_{||}$ = 2.68~\AA$^{-1}$. Aluminum forms a face-centered cubic lattice with lattice parameter $a$=4.04941~\AA\ \cite{Witt:1967}. \label{Fig:NeutronInPlane}} \end{SCfigure*} Two setups were used: a conventional high energy and momentum resolution setup using a neutron wavelength of $\lambda$=2.37~\AA\ and a low energy and momentum resolution setup with smaller wavelengths of $\lambda$=1.44 and 1.48~\AA. The latter setup was reported to integrate efficiently over small structures, providing the high spatial resolution needed to detect small structures and weak signals \cite{Armstrong:2012a,Armstrong:2013,Rheinstadter:2013}. The two setups could be readily switched during the experiment by changing the incoming neutron wavelength, $\lambda$, without altering the state of the membrane sample. Data taken using the conventional setup are shown in Fig.~\ref{Fig:NeutronInPlane}~c) and display a diffraction pattern with broad peaks, typical of a fluid-like structure. Peaks $T_1$, $T_2$ and $T_3$ in Fig.~\ref{Fig:NeutronInPlane}~c) correspond to the hexagonal arrangement of the lipid tails with a unit cell of $a_{\text{lipid}-l_d}=b_{\text{lipid}-l_d}$=5.58~\AA\ and $\gamma$=120$^{\circ}$, in agreement with Armstrong {\em et al.} \cite{Armstrong:2013}.
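The radially averaged in-plane structure factor, shown for the simulations in Fig.~\ref{fig:simulationresults}~b) and probed experimentally through scans along $q_{||}$, can be estimated directly from particle coordinates as $S(q)=|\sum_j e^{i\vec{q}\cdot\vec{r}_j}|^2/N$, averaged over the direction of $\vec{q}$. The sketch below is illustrative only (it is not the analysis code used for this work); it evaluates $S$ on the wavevectors compatible with a periodic box and bins the result radially.

```python
import numpy as np

def structure_factor_radial(positions, box, q_max=3.0, n_bins=60):
    """Radially averaged 2D structure factor S(q) = |sum_j exp(i q.r_j)|^2 / N.

    positions : (N, 2) array of in-plane coordinates
    box       : (Lx, Ly) periodic box dimensions
    Only wavevectors q = 2*pi*(m/Lx, k/Ly) compatible with the periodic
    box are evaluated; the result is binned over the direction of q.
    """
    positions = np.asarray(positions, dtype=float)
    n = len(positions)
    lx, ly = box
    mmax = int(q_max * lx / (2 * np.pi)) + 1
    kmax = int(q_max * ly / (2 * np.pi)) + 1
    qnorms, svals = [], []
    for m in range(-mmax, mmax + 1):
        for k in range(-kmax, kmax + 1):
            if m == 0 and k == 0:
                continue
            qvec = np.array([2 * np.pi * m / lx, 2 * np.pi * k / ly])
            qnorm = np.hypot(qvec[0], qvec[1])
            if qnorm > q_max:
                continue
            amp = np.exp(1j * (positions @ qvec)).sum()
            qnorms.append(qnorm)
            svals.append(abs(amp) ** 2 / n)
    qnorms, svals = np.array(qnorms), np.array(svals)
    edges = np.linspace(0.0, q_max, n_bins + 1)
    idx = np.digitize(qnorms, edges) - 1
    q_centers = 0.5 * (edges[1:] + edges[:-1])
    s_radial = np.array([svals[idx == b].mean() if np.any(idx == b) else 0.0
                         for b in range(n_bins)])
    return q_centers, s_radial
```

For a perfect two-dimensional lattice this produces sharp Bragg peaks at the reciprocal lattice vectors; applied to head-group coordinates of a fluid simulation snapshot it yields broad maxima of the kind shown in Fig.~\ref{fig:simulationresults}~b).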
By calculating the (coherent) scattering contributions (Table~S3), cholesterol and lipid molecules contribute almost equally to the scattering in the $l_d$ phase such that the corresponding signals are observed simultaneously in Fig.~\ref{Fig:NeutronInPlane}~c). Peak $H$ agrees well with an average nearest neighbor head group-head group distance of $\approx$8.4~\AA. Peak $C$ only occurs in the presence of deuterated cholesterol molecules. It was, therefore, assigned to a nearest neighbor distance of $\approx$4.6~\AA\ ($\pm$0.5~\AA) of cholesterol molecules in the $l_d$ phase, i.e., to pairs of strongly bound cholesterol molecules, as shown in Fig.~\ref{Fig:NeutronInPlane}~e). Details of the fitting procedure are given in the Supplementary Material. \begingroup \squeezetable \begin{table} \centering \begin{ruledtabular} \begin{tabular}{c|c|c|c|c|c|c} \multirow{3}{*}{} & \multirow{2}{*}{Amplitude} & \multirow{2}{*}{Center} & \multirow{2}{*}{$\sigma_G$} & \multirow{3}{*}{$l_d$} & monoclinic & triclinic \\ & & & & & cholesterol & cholesterol\\ & (counts) & (\AA$^{-1}$) & (\AA$^{-1}$) & & $l_o$-type structure & plaque\\ \hline \multirow{5}{*}{Fig.~\ref{Fig:NeutronInPlane}~c)} & 62&0.75&0.17& $H$ &&\\ &117&1.360&0.46& $T_1$ &&\\ &27.5&1.360&0.17& $C$ &&\\ &46.6&2.289&0.05& $T_2$ &&\\ &15.0&2.650&0.10& $T_3$ &&\\\hline \multirow{7}{*}{Fig.~\ref{Fig:NeutronInPlane}~d)}&19.8&0.5&0.01&&&[1 0 0]\\ &34.8&0.55&0.01&&[1 $\bar{1}$ 0]&\\ &34.3&0.74&0.01&&[1 0 0]&[1 1 0]\\ &33.5&0.98&0.01&&&[2 0 0]\\ &110&1.12&0.01&&[2 $\bar{1}$ 0]&\\ &117.8&1.32&0.01&&[1 1 0]&\\ &60.0&1.61&0.01&&&[1 3 0]\\ \end{tabular} \end{ruledtabular} \caption{Peak parameters of the correlation peaks observed in Fig.~\ref{Fig:NeutronInPlane}~c) and d) and the association with the different cholesterol structures, such as $l_d$, $l_o$-type structure and cholesterol plaque.
$H$ and $C$ label the nearest neighbor distances of lipid head groups and cholesterol molecules, respectively; $T_1$, $T_2$, and $T_3$ denote the unit cell of the lipid tails in the $l_d$ regions of the membrane. Peaks were fitted using Gaussian peak profiles and widths are listed as Gaussian widths, $\sigma_G$.} \label{Table:PeakValues} \end{table} \endgroup Several pronounced Bragg peaks are observed at neutron wavelengths of $\lambda$=1.44 and 1.48~\AA\ in Fig.~\ref{Fig:NeutronInPlane}~d) in addition to the broad correlation peaks. Due to the high cholesterol concentration in $l_o$-type structures and plaques and the scattering lengths of DPPC and $d$-cholesterol molecules, the corresponding coherent scattering signal in Fig.~\ref{Fig:NeutronInPlane}~d) is dominated by the deuterated cholesterol molecules. As listed in Table~\ref{Table:PeakValues}, the peak pattern is well described by a superposition of two 2-dimensional structures: a monoclinic unit cell with lattice parameters $a_{\text{chol-lo}}=b_{\text{chol-lo}}$=11~\AA\ and $\gamma$=131$^{\circ}$ and a triclinic unit cell with $a_{\text{chol-plaque}}=b_{\text{chol-plaque}}$=12.8~\AA\ and $\gamma$=95$^{\circ}$ (the values for $\alpha$ and $\beta$ could not be determined from the measurements but were taken from \cite{Rapaport:2001,Barrett:2013} to be $\alpha=91.9^{\circ}$ and $\beta=98.1^{\circ}$). The lipid structure in the $l_o$-type structures in binary DPPC/32.5~mol\% cholesterol bilayers was recently reported by Armstrong {\em et al.} from neutron diffraction using deuterium labelled lipid molecules \cite{Armstrong:2013}. The lipid tails were found in an ordered, gel-like phase organized in a monoclinic unit cell with $a_{\text{lipid-lo}}=b_{\text{lipid-lo}}$=5.2~\AA\ and $\gamma$=130.7$^{\circ}$, as shown in Fig.~\ref{Fig:NeutronInPlane}~f). 
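The real-space distances quoted above follow directly from the fitted peak centers in Table~\ref{Table:PeakValues} via $d = 2\pi/q_{||}$ (a consistency check added here, not part of the original analysis):

```latex
d_C = \frac{2\pi}{1.360~\text{\AA}^{-1}} \approx 4.6~\text{\AA},
\qquad
d_H = \frac{2\pi}{0.75~\text{\AA}^{-1}} \approx 8.4~\text{\AA},
```

in agreement with the cholesterol-cholesterol and head group-head group distances assigned to peaks $C$ and $H$.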
The cholesterol unit cell determined from the diffraction data in Fig.~\ref{Fig:NeutronInPlane}~c) is indicative of a doubling of the lipid tail unit cell for the cholesterol molecules. The corresponding cholesterol structure consists of cholesterol pairs alternating between two different orientations. The $l_d$ and the $l_o$-type structures can be related to the well-known umbrella model \cite{Huang:1999}, where one lipid molecule is assumed to be capable to `host' two cholesterol molecules, which leads to a maximum cholesterol solubility of 66~mol\% in saturated lipid bilayers. In this scenario the term umbrella model refers to two cholesterol molecules closely interacting with one lipid molecule. Cholesterol plaques, i.e., cholesterol bilayers coexisting with the lamellar membrane phase were reported recently by Barrett {\em et al.}~\cite{Barrett:2013} in model membranes containing high amounts of cholesterol, above 40~mol\% for DMPC and 37.5~mol\% for DPPC. The triclinic peaks in Fig.~\ref{Fig:NeutronInPlane}~d) agree well with the structures published and were, therefore, assigned to cholesterol plaques. Hence both coarse-grained molecular simulations and neutron diffraction data suggest the coexistence of a liquid disordered membrane with two types of highly ordered cholesterol structures: One with some lipid content (Fig.~\ref{Fig:NeutronInPlane}~f), corresponding to the first shoulder in the density histogram at $\mu=7.8 k_B T$ (Fig.~\ref{fig:simulationresults}~a), inset), and one almost exclusively made of cholesterol (Fig.~\ref{Fig:NeutronInPlane}~g), corresponding to the second peak at $\mu = 7.8 k_B T$ in Fig.~\ref{fig:simulationresults}~a). The existence of these structures in the experiment should be robust in binary systems and not depend on, for instance, the sample preparation protocol \cite{Elizondo:2012}. The neutron diffraction data present evidence for pairs of strongly bound cholesterol molecules. 
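The 66~mol\% solubility limit quoted for the umbrella model is simply the stoichiometric ratio (a one-line check added here): with at most two cholesterol molecules hosted per lipid,

```latex
x_{\mathrm{chol}}^{\max} = \frac{2}{2+1} \approx 66.7~\text{mol\%}.
```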
We note that the scattering experiment was not sensitive to {\em single} cholesterol molecules, however, the formation of cholesterol dimers with a well defined nearest neighbor distance leads to a corresponding peak in the data in Fig.~\ref{Fig:NeutronInPlane}~c) and d). An attractive force between cholesterol molecules in a POPC bilayer and the formation of cholesterol dimers was reported from MD simulations \cite{Andoh:2012}. Such a force is likely related to the formation of lipid/cholesterol complexes \cite{McConnell:2003} and the umbrella model. However, it is not straightforward to estimate the percentage of dimers from the experiments. A dynamical equilibrium between dimers and monomers is a likely scenario~\cite{Dai:2010}. The dynamic domains observed in this study are not biological rafts, which are thought to be more complex, multi-component structures in biological membranes. In the past, domains have been observed in simple model systems, but only those designed to be `raft-forming' mixtures. In these cases the domains that form are stable equilibrium structures, and are not likely related to the rafts that exist in real cells~\cite{Rheinstadter:2013}. The small and fluctuating domains observed in binary systems may be more closely related to what rafts are thought to be~\cite{Simons:2010}, and are potentially the nuclei that lead to the formation of rafts in biological membranes. The characteristic overall length scale for nanodomains in the simulations is around 20$\sigma$, corresponding to 10-20~nanometers. Both simulations and experiments indicate that there are two types of cholesterol-rich patches coexisting with cholesterol-poor liquid-disordered regions, i.e., ordered $l_o$-type regions containing lipids and cholesterol and cholesterol plaques. The transition between these two is gradual in the coarse-grained simulations. 
In real membranes, they have different local structure (monoclinic in $l_o$-type regions, triclinic in plaque regions), which may stabilize distinct domains. \acknowledgements This work was supported by the German Science Foundation within the collaborative research center SFB-625. Simulations were carried out at the John von Neumann Institute for Computing (NIC) J\"ulich and the Mogon Cluster at Mainz University. Experiments were funded by the Natural Sciences and Engineering Research Council (NSERC) of Canada, the National Research Council (NRC), the Canada Foundation for Innovation (CFI), and the Ontario Ministry of Economic Development and Innovation. L.T.~is the recipient of a Canada Graduate Scholarship, M.C.R.~is the recipient of an Early Researcher Award from the Province of Ontario.
Dr Debby Banham MA, PhD
Special Supervisor, Anglo-Saxon, Norse and Celtic
Affiliated Lecturer in Palaeography and Anglo-Saxon History, Department of Anglo-Saxon, Norse and Celtic
Email: db116@cam.ac.uk

Debby was a PhD student at Newnham, working on diet in early medieval England. Since then she has been teaching palaeography (manuscript studies), as well as Anglo-Saxon history and Latin, at Birkbeck College, London, where she retired in 2018. She rejoined Newnham as special supervisor in 2007. Publications include Food and Drink in Anglo-Saxon England (Tempus, 2004) and (with Rosamond Faith) Anglo-Saxon Farms and Farming (OUP, 2014).

Research interests: social, cultural and economic history of early medieval England, especially medicine, diet and food production, with a sideline in monastic sign language. Current projects include the Early English Bread Project: making and meaning, and a new edition and translation of the earliest medical collection in English.
package com.cognifide.aet.job.common.comparators.layout;

import static org.hamcrest.MatcherAssert.assertThat;
import static org.hamcrest.Matchers.is;
import static org.mockito.Mockito.when;

import com.cognifide.aet.job.api.comparator.ComparatorProperties;
import com.cognifide.aet.job.common.comparators.layout.utils.ImageComparisonResult;
import com.cognifide.aet.vs.ArtifactsDAO;
import org.junit.Before;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.mockito.Mock;
import org.mockito.runners.MockitoJUnitRunner;

@RunWith(MockitoJUnitRunner.class)
public class LayoutComparatorTest {

  @Mock
  private ComparatorProperties comparatorProperties;

  @Mock
  private ArtifactsDAO artifactsDAO;

  @Mock
  private ImageComparisonResult imageComparisonResult;

  private LayoutComparator layoutComparator;

  @Before
  public void setUp() {
    //given
    this.layoutComparator = new LayoutComparator(this.comparatorProperties, this.artifactsDAO);
  }

  @Test
  public void hasMaskThresholdWithAcceptableDifference_withoutThreshold_expectFalse() {
    //when
    when(imageComparisonResult.getPercentagePixelDifference()).thenReturn(12.567);
    when(imageComparisonResult.getPixelDifferenceCount()).thenReturn(300);

    //then
    assertThat(
        this.layoutComparator.hasMaskThresholdWithAcceptableDifference(imageComparisonResult),
        is(false));
  }

  @Test
  public void hasMaskThresholdWithAcceptableDifference_withThreshold_expectFalse() {
    //when
    when(imageComparisonResult.getPercentagePixelDifference()).thenReturn(12.567);
    when(imageComparisonResult.getPixelDifferenceCount()).thenReturn(300);
    this.layoutComparator.setPixelThreshold(299);
    this.layoutComparator.setPercentageThreshold(null);

    //then
    assertThat(this.layoutComparator.hasMaskThresholdWithAcceptableDifference(imageComparisonResult),
        is(false));

    //when
    this.layoutComparator.setPixelThreshold(null);
    this.layoutComparator.setPercentageThreshold(12.566);

    //then
    assertThat(this.layoutComparator.hasMaskThresholdWithAcceptableDifference(imageComparisonResult),
        is(false));
  }

  @Test
  public void hasMaskThresholdWithAcceptableDifference_withThreshold_expectTrue() {
    //when
    when(imageComparisonResult.getPercentagePixelDifference()).thenReturn(12.567);
    when(imageComparisonResult.getPixelDifferenceCount()).thenReturn(300);
    this.layoutComparator.setPixelThreshold(300);
    this.layoutComparator.setPercentageThreshold(null);

    //then
    assertThat(this.layoutComparator.hasMaskThresholdWithAcceptableDifference(imageComparisonResult),
        is(true));

    //when
    this.layoutComparator.setPixelThreshold(null);
    this.layoutComparator.setPercentageThreshold(12.567);

    //then
    assertThat(this.layoutComparator.hasMaskThresholdWithAcceptableDifference(imageComparisonResult),
        is(true));
  }

  @Test
  public void hasMaskThresholdWithAcceptableDifference_withBothThreshold_expectFalse() {
    //when
    when(imageComparisonResult.getPercentagePixelDifference()).thenReturn(12.567);
    when(imageComparisonResult.getPixelDifferenceCount()).thenReturn(300);
    this.layoutComparator.setPixelThreshold(299);
    this.layoutComparator.setPercentageThreshold(30.0);

    //then
    assertThat(this.layoutComparator.hasMaskThresholdWithAcceptableDifference(imageComparisonResult),
        is(false));

    //when
    this.layoutComparator.setPixelThreshold(301);
    this.layoutComparator.setPercentageThreshold(12.566);

    //then
    assertThat(this.layoutComparator.hasMaskThresholdWithAcceptableDifference(imageComparisonResult),
        is(false));
  }

  @Test
  public void hasMaskThresholdWithAcceptableDifference_withBothThreshold_expectTrue() {
    //when
    when(imageComparisonResult.getPercentagePixelDifference()).thenReturn(12.567);
    when(imageComparisonResult.getPixelDifferenceCount()).thenReturn(300);
    this.layoutComparator.setPixelThreshold(300);
    this.layoutComparator.setPercentageThreshold(12.567);

    //then
    assertThat(this.layoutComparator.hasMaskThresholdWithAcceptableDifference(imageComparisonResult),
        is(true));
  }
}
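The production method under test is not shown here. A minimal sketch consistent with the assertions above (class and method names are mine, not from the AET codebase) treats a null threshold as "not configured", returns false when no threshold is configured at all, and otherwise requires every configured threshold to cover the measured difference:

```java
// Hypothetical sketch of the threshold logic exercised by the tests above.
public class ThresholdSketch {

    // null threshold = "not configured"; acceptable only if at least one
    // threshold is configured and every configured one is satisfied.
    static boolean acceptable(Integer pixelThreshold, Double percentageThreshold,
                              int pixelDiff, double percentageDiff) {
        if (pixelThreshold == null && percentageThreshold == null) {
            return false; // no mask threshold configured at all
        }
        boolean pixelOk = pixelThreshold == null || pixelDiff <= pixelThreshold;
        boolean percentOk = percentageThreshold == null || percentageDiff <= percentageThreshold;
        return pixelOk && percentOk;
    }

    public static void main(String[] args) {
        // mirrors the expectTrue / expectFalse cases in the test class above
        check(acceptable(300, null, 300, 12.567));
        check(!acceptable(299, null, 300, 12.567));
        check(acceptable(null, 12.567, 300, 12.567));
        check(!acceptable(301, 12.566, 300, 12.567));
        check(!acceptable(null, null, 300, 12.567));
    }

    static void check(boolean condition) {
        if (!condition) throw new AssertionError();
    }
}
```

The AND semantics (both configured thresholds must hold) is what the `withBothThreshold_expectFalse` case pins down: pixel count within bounds but percentage exceeded still fails.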
People are often reluctant to take on home remodeling because they assume home builders cannot meet their growing needs for design and quality within budget. Most of the time, a project begins as simple maintenance of the home, but once started it can turn into a complicated, expensive task that consumes a great deal of time. Clear priorities and thorough planning are therefore necessary before beginning any remodeling. A project has two parts. In the first, you simply repair the parts of your home that are already damaged; this part is easy and straightforward, since you already know what needs to be done. The second part is harder: deciding on new additions. These may meet a genuine need, add a touch of luxury, make the house more convenient, or simply accommodate a growing family. Here, space is what counts, and the basic question is how to use the available space properly. A floor plan gives you a complete and accurate picture of that space, with all the parts of your home, such as the kitchen and lounge, laid out. Next, make a list of all the essential work you need done. Cost estimation comes after that: price the materials required, the professional fees for the workers, and miscellaneous costs such as snacks and the electricity and water bills during the work.
If you cannot form a clear idea of the budget yourself, consult a professional, who will make sure you get a realistic picture of the remodeling and its cost. One option is to hire a home remodeling service and simply select a plan with a cost estimate attached. The kind of remodeling you want should also shape the budget: a more detailed renovation, in which you customize different parts of the house and add new ones, requires a large budget. Consulting a professional is always helpful, because they can suggest newer, upscale ideas for customizing your home's design. The need for home remodeling services arises when you simply don't know much about carpentry and home building; as the saying goes, it is best left to the professionals. Keep these things in mind and remodeling will become less complicated for you, and you will fully enjoy giving your home a new look. It isn't always about money: there are things you can do to your home that won't cost much but will enhance its overall appearance. The professional company procontractorservices provides all the information on Nashville kitchen designing.
package controller;

import java.util.List;
import javax.servlet.http.HttpSession;

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Controller;
import org.springframework.ui.Model;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestMethod;

import dao.BoardDao;
import dao.QABoardDao;

@Controller
public class PageController {

    @Autowired
    private BoardDao publicDao;

    @Autowired
    private QABoardDao qaDao;

    // Main page: recent posts from the public and Q&A boards, plus the
    // logged-in user's number from the session (null if not logged in).
    @RequestMapping(value = "/main", method = RequestMethod.GET)
    public String home(Model model, HttpSession session) {
        String usernum = (String) session.getAttribute("usernum");
        List<?> publicBoard = publicDao.getRecentList();
        List<?> qaBoard = qaDao.getRecentList();
        model.addAttribute("publicBoard", publicBoard);
        model.addAttribute("qaBoard", qaBoard);
        model.addAttribute("bottom", "pizzaMain.jsp");
        model.addAttribute("usernum", usernum);
        return "main";
    }

    // Static menu pages: each handler only swaps the JSP fragment that the
    // "main" view includes at the bottom.
    @RequestMapping(value = "/pizzaOne", method = RequestMethod.GET)
    public String pizzaOne(Model model) {
        model.addAttribute("bottom", "pizzaOne.jsp");
        return "main";
    }

    @RequestMapping(value = "/pizzaSet", method = RequestMethod.GET)
    public String pizzaSet(Model model) {
        model.addAttribute("bottom", "pizzaSet.jsp");
        return "main";
    }

    @RequestMapping(value = "/side", method = RequestMethod.GET)
    public String side(Model model) {
        model.addAttribute("bottom", "side.jsp");
        return "main";
    }

    @RequestMapping(value = "/drink", method = RequestMethod.GET)
    public String drink(Model model) {
        model.addAttribute("bottom", "drink.jsp");
        return "main";
    }
}
Q: Installing java program via CD

I have a program (java jar file) that I want to distribute on CDs. My friend told me that there are free/open-source CD installers available that automatically install your program onto the customer's computer. Now I can't seem to find this on Google. So are there any CD installers that you would recommend that I can use (so I don't need to program one myself)?

Outline: My program consists of class files, sound files, source files (I'm open source) and images (packaged into a jar file). I only need the installer to work for Windows computers.

A: I think IzPack does something like that.

A: You can look into Java Web Start, which in Java 6 was enhanced to allow "launch-from-cd-and-install-to-harddrive", which means that it can work as a very simple installer. It requires a JVM already present. You can put the redistributable JRE on the CD too.

A: Launch4J is what I have used as my installer. It is really lightweight and has a nice GUI that makes things simple for the developer (one reason I chose not to use IzPack). It makes things dead simple for both the developer and the user. Your jar file is wrapped in an exe launcher. If an up-to-date JRE is not detected, a bundled JRE is used or the user is prompted to download via java.com/download. Really, I couldn't have asked for anything simpler/better. Although you might get more functionality out of IzPack, if you want something dirt quick that can do everything the everyday developer needs, go for Launch4J. P.S. Their splash screen option is a nice bonus :)

A: After running into numerous end problems, I finished the job with the use of Inno Setup. Very quick and easy to use. Creates an installer similar to the ones you would see in popular programs. Gives you (and the user) the ability to create Desktop Shortcuts, QuickLaunch Icons and Startup folders. Allows you to add license information etc. Very simple and intuitive interface; I didn't have to read any documentation!
A big con: Only makes installers for windows. That met my requirements, but may not work for everyone.
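The "detect an up-to-date JRE" step that these launchers perform can also be done by the application itself at startup. A minimal sketch (my own version parsing, simplified; it handles both the pre-Java-9 `1.x.y` scheme and the newer `11.0.2`-style strings):

```java
// Minimal JRE version check: parse java.version and compare the feature
// release against a required minimum before launching the real application.
public class JreCheck {

    // "1.6.0_45" -> 6 (old scheme); "9", "11.0.2", "17-ea" -> 9, 11, 17
    static int featureVersion(String version) {
        String[] parts = version.split("\\.");
        if (parts[0].equals("1") && parts.length > 1) {
            return Integer.parseInt(parts[1].split("_")[0]);
        }
        return Integer.parseInt(parts[0].split("[-_]")[0]);
    }

    public static void main(String[] args) {
        String v = System.getProperty("java.version");
        if (featureVersion(v) < 6) {
            System.out.println("Please install Java 6 or newer from java.com/download");
        } else {
            System.out.println("JRE OK: " + v);
        }
    }
}
```

This is only a fallback: a native launcher such as Launch4J performs the equivalent check before any JVM is started at all, which is why it can prompt the user even when no Java is installed.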
List of Marvel Comics characters: Q

Contents
Quagmire
1 Quantum
2 Quasar
3 Quasimodo
3.1 Other versions of Quasimodo
4 Quicksand
5 Quiet Bill
6 Quentin Quire

Quantum

Quantum is an alien supervillain in the Marvel Universe. Created by Steve Englehart and Al Milgrom, the character first appeared in West Coast Avengers vol. 2, #12 (September 1986). Within the context of the stories, Quantum is an alien soldier from the planet Dakkam, one of the platoon of superpowered Dakkamite troops known as The Elect. The scientists of his race noted that exposure to earth's sun had given one Dakkamite renegade superpowers - they sought to exploit this by placing a platoon of soldiers inside specially designed 'incubator capsules', which were then located close to the sun. Quantum wakes at the end of this treatment to discover that his powers have manifested - but that the rest of The Elect has already gone. Searching for his comrades, he becomes part of a supervillain team assembled by Graviton to resemble the Unified Field Theory. Halflife represents the weak force, Quantum represents the strong force, while Zzzax represents electromagnetism. Graviton himself represents gravity, and promises Quantum that he would help to locate the missing soldiers. Graviton and his allies are defeated by the West Coast Avengers. Quantum, no longer believing Graviton's promises, abandons the team and goes his own way.[1] Quantum finds another superpowered Dakkamite on earth — the Aquarian, the now-pacifist whose powers originally inspired the plan to enhance Dakkamite soldiers.
Quantum considers the Aquarian, the son of a renegade, to be a traitor to the fatherworld, so Quantum attempts to kill him. However, Quasar intervenes, saving the Aquarian and, using his abilities to distort Quantum's powers, traps him as a trio of intangible duplicates.[2] The whereabouts of his fellow members of the Elect, imprisoned on the Stranger's laboratory world, are later revealed.[3] Quantum reappears as one of the beings who have been subtly drawn to the planet Godthab Omega by the manipulations of Glorian. This planet is later assaulted by the Annihilation Wave, killing many of the inhabitants.[4]

Clay Quartermain

Quasimodo

Quasimodo is a supervillain in the Marvel Universe, a creation of the Mad Thinker. Created by Stan Lee and Jack Kirby, the character first appeared in Fantastic Four Annual #4 (November 1966). Within the context of the stories, Quasimodo is a computer created and abandoned by the Mad Thinker.[5] The computer is discovered by the Silver Surfer who, feeling pity for the computer's desire to be human, grants him a partly organic, semi-humanoid cyborg body. Quasimodo becomes enraged by his feeling of inferiority compared to the Silver Surfer's more perfect body, and battles the Silver Surfer. He is rendered immobile by the Surfer.[6] Eventually regaining his mobility, Quasimodo comes into conflict with Captain Marvel,[7] the Beast,[8] Spider-Man and Hawkeye,[9] the Fantastic Four,[10] the Galadorian Spaceknight Rom,[11] and finally the Vision, who expels Quasimodo's consciousness into space.[12] Quasimodo returns to Earth and sets up shop at a base in Cuba during the Dark Reign storyline. It is here that he was captured by S.H.I.E.L.D. at the behest of Norman Osborn.
Quasimodo enters Osborn's service as an analyst, compiling dossiers on numerous superhumans.[13]

Other versions of Quasimodo

Quasimodo appears in the tie-in comic to the animated series The Avengers: Earth's Mightiest Heroes.[issue # needed]

Quicksand

Quicksand is a supervillain in the Marvel Universe. Created by Tom DeFalco and Ron Frenz, the character first appeared in Thor #392 (June 1988). Within the context of the stories, Quicksand is a scientist of Vietnamese descent, working at a nuclear facility. An accident transforms her body into a sand-like substance (like Sandman). Petty and selfish, she calls herself Quicksand and attacks the nuclear reactor in a rage. Despite being confronted by Thor, she succeeds in causing the reactor to melt down. Thor prevents disaster by using his hammer to transport the entire facility to another dimension, and Quicksand escapes.[14] She is later contacted by Mongoose, on behalf of Count Tagar, who desires a cell sample from Thor in order to create a race of gods. She wants nothing to do with Thor, but is persuaded to battle him once Mongoose presents a device which could temporarily transform her back into human form. Quicksand barely manages to hold her own against Thor, and once Mongoose collects the tissue sample, she escapes once more.[15] Quicksand serves for a time as a member of Superia's Femizons.[16] She is later invited to join the Crimson Cowl's Masters of Evil, and she accepts, hoping to get rich through the Masters' blackmail scheme using global weather control.
The team is defeated and apprehended by the Thunderbolts, and Quicksand is among those remanded to custody.[17] During the Civil War storyline, Quicksand is recruited to join the Thunderbolts.[18] Following the Civil War, Quicksand becomes a member of the Initiative's new team for the state of Delaware, the Women Warriors.[19] Quicksand and the rest of the Women Warriors take part in an assault on Asgard during the Siege storyline.[20]

Quiet Bill

Quiet Bill (William Krimpton) is a mutant in the Marvel Universe, an ally of the X-Man Gambit. First appearing in Gambit vol. 3, #3 (April 1999), the character was created by Fabian Nicieza and Steve Skroce. Within the context of the comics, Quiet Bill is a mutant who can open portals between dimensions. Coming to the notice of the enigmatic New Son, the mutant known as Courier is employed to retrieve Quiet Bill and his friend Huey. The New Son sought to employ Bill's talent to find a dimension to which he could flee.[citation needed] Bill is discovered living in an alley dubbed "Onslaught Alley" by Courier and Gambit. Bill and Huey are trapped between dimensions when the New Son's Crystal Cathedral is destroyed.[citation needed] Quiet Bill loses his powers during the M-Day, and is murdered by Riptide of the Marauders.[21]

Quentin Quire

References
↑ West Coast Avengers Vol. 2, #12 (Sep. 1986)
↑ Quasar #4
↑ Quasar #14-15
↑ Annihilation: Ronan #3 (August 2006)
↑ Fantastic Four Annual #4
↑ Captain Marvel #7
↑ Amazing Adventures #14
↑ Marvel Team-Up #22
↑ Fantastic Four #202
↑ Rom #42-43
↑ Avengers #253
↑ Dark Reign Files #1 (one-shot)
↑ Thor #392-393
↑ Thor #402
↑ Captain America #388-390
↑ Thunderbolts #24-15
↑ Thunderbolts #103-#104 (August–September 2006)
↑ Avengers: The Initiative #26
↑ Mike Carey (w), Scot Eaton (p), Andrew Hennessy (i). "Endangered Species (Part I)" X-Men v2, 200 (August 2007), Marvel Comics
# Differentiating under integral sign — trig counterexample

(math.stackexchange.com, question 52666)

**Q:** I really hate integration by parts, so when faced with $\int_{-\pi}^\pi x^2 \cos n x \, dx$ I tried writing it as $$\int_{-\pi}^\pi x^2 \cos n x \, dx = \frac{d}{dn} \int_{-\pi}^\pi x \sin n x \, dx = \frac{d^2}{dn^2} \int_{-\pi}^\pi \cos n x \, dx$$ I have done something wrong. The integrand is continuously differentiable with respect to $n$ and I thought that was enough. How can I get differentiating under the integral sign to work?

- Comment: The last term should have a negative sign, but other than that I don't see what's wrong. This is not a good application of differentiation under the integral sign, though, since generally people only care about integer $n$; I would instead compute $\int_{-\pi}^{\pi} e^{tx} \cos nx \, dx$. – Qiaochu Yuan, Jul 20 '11
- Comment: Your Laplace transform example is complicated. Instead, I should just evaluate the RHS for arbitrary $n$, even though I just want integer $n$. This was for my multivariate calculus class, which I am teaching :-/ – john mangual, Jul 20 '11

**A:** It did work (except you are missing a negative sign). Remember $n$ is not always an integer, so that $$-\int_{-\pi}^\pi \cos(nx)\,dx=-\frac{2\sin(\pi n)}{n}.$$ Then $$\frac{d^2}{dn^2} \left(-\int_{-\pi}^\pi \cos(nx)\,dx\right) =\frac{d}{dn} \left( -\frac{2\pi\cos (\pi n)}{n}+\frac{2\sin(\pi n)}{n^2}\right) =\frac{2\pi^2\sin(\pi n)}{n}-\frac{4\sin(\pi n)}{n^3}+\frac{4\pi\cos(\pi n)}{n^2}.$$

**A:** If $u$ and $v$ are functions of $x$ and dashes denote differentiation and suffixes integration with respect to $x$, then $$\int uv \, dx = uv_{1} -u'v_{2}+ u''v_{3} - u'''v_{4} + \cdots + (-1)^{n-1}u^{(n-1)}v_{n} + (-1)^{n} \int u^{(n)}\cdot v_{n} \, dx.$$

- Comment: What do the $v_n$ stand for in your formula? – Pedro Tamaroff
- Comment: I read your answer. You clarify $v_n$ stands for integration. – Pedro Tamaroff
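As a sanity check (added here, not part of the original thread): at integer $n \neq 0$, $\sin(\pi n)=0$ and $\cos(\pi n)=(-1)^n$, so the closed form above collapses to the value integration by parts gives,

```latex
\int_{-\pi}^{\pi} x^2 \cos(nx)\,dx
= \frac{2\pi^2\sin(\pi n)}{n}
  - \frac{4\sin(\pi n)}{n^3}
  + \frac{4\pi\cos(\pi n)}{n^2}
\;\longrightarrow\; \frac{4\pi(-1)^n}{n^2}
\qquad (n \in \mathbb{Z},\ n \neq 0),
```

which is exactly the coefficient pattern that appears in the Fourier series of $x^2$ on $[-\pi,\pi]$.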
angular.module('Blocker', ['ngMaterial', 'Main', 'MapDrawer']);
Binary operations: the essence of algebra is to combine two things and get a third. A binary operation $*$ on a set $S$ assigns to every pair of elements $a, b \in S$ another element $a * b \in S$; this condition is called closure. Formally, a binary operation is a function $*: S \times S \to S$, and the notion is meaningless without the set on which the operation is defined. The set need not consist of numbers: a set of clothes $\{$hat, shirt, jacket, pants, $\ldots\}$ or the set of even numbers $\{\ldots, -4, -2, 0, 2, 4, \ldots\}$ can carry binary operations just as well. The standard addition $+$ is a binary operation on $\mathbb{Z}$, and also on the set $S = \mathbb{N} \cup \{0\}$ of all non-negative integers.

An element $e \in S$ is an identity for $*$ if $a * e = e * a = a$ for every $a \in S$. A set $S$ contains at most one identity for a given binary operation. For example, $0$ is the identity for addition on $\mathbb{Z}$, $\mathbb{Q}$, and $\mathbb{R}$, and $1$ is the identity for multiplication.

Suppose $*$ has an identity $e$ and $a \in S$. An element $b \in S$ is an inverse of $a$ if $a * b = b * a = e$; in that case we write $b = a^{-1}$ and call $a$ invertible. If only $a * b = e$ holds, then $b$ is a right inverse of $a$; if only $b * a = e$, a left inverse. The familiar inverse operations of arithmetic illustrate the idea: addition and subtraction undo each other ($2 + 3 = 5$, so $5 - 3 = 2$; $7 - 1 = 6$, so $6 + 1 = 7$), as do multiplication and division ($3 \times 4 = 12$, so $12 \div 4 = 3$). When you start with any value, add a number, and then subtract the same number, the value you started with remains unchanged.

Is an inverse element of a binary operation unique? If $*$ is associative, yes: an element has at most one two-sided inverse. Indeed, if $b$ and $c$ are both inverses of $a$, then
$$c = e * c = (b * a) * c = b * (a * c) = b * e = b.$$
Equivalently, if $t_1$ and $t_2$ are both inverses of $s$, then
$$t_1 = t_1 * e = t_1 * (s * t_2) = (t_1 * s) * t_2 = e * t_2 = t_2.$$
The same argument shows that, under associativity, any left inverse equals any right inverse, so when both exist they coincide; the inverse of an element is unique when it exists. The hypotheses matter here: on $S = \mathbb{R}$, the constant operation $x * y = e$ for all $x, y$ makes every pair of elements combine to $e$, yet $b = c$ does not follow, because $e$ is not an identity for this operation.

If $s_1$ and $s_2$ are invertible, the inverse of $s_1 * s_2$ is $s_2^{-1} * s_1^{-1}$, since
$$(s_1 * s_2) * (s_2^{-1} * s_1^{-1}) = s_1 * \big(s_2 * (s_2^{-1} * s_1^{-1})\big) = s_1 * \big((s_2 * s_2^{-1}) * s_1^{-1}\big) = s_1 * (e * s_1^{-1}) = s_1 * s_1^{-1} = e,$$
and the product in the other order equals $e$ by the same computation.

Examples:

1. $(\mathbb{Z}, +)$: the identity is $0$, and the inverse of $a$ is $-a$, since $(-a) + a = a + (-a) = 0$.

2. Let $S = \mathbb{R}$ with $a * b = ab + a + b$. This is an associative binary operation with two-sided identity $0$. The inverse of $a$, if it exists, is the solution to $ab + a + b = 0$, which is $b = -\frac{a}{a+1}$; when $a = -1$ this inverse does not exist, and indeed $(-1) * b = b * (-1) = -1$ for all $b$.

3. Under multiplication modulo $8$, every element in $S$ has an inverse; in fact, each element of $S$ is its own inverse, as $a * a \equiv 1 \pmod 8$ for all $a \in S$.

4. Consider functions with the binary operation of composition, $f * g = f \circ g$; the two-sided identity is the identity function $i(x) = x$. If $f(x) = e^x$, then $f$ has more than one left inverse: both $g_1(x) = \ln|x|$ (for $x \neq 0$) and
$$g_2(x) = \begin{cases} \ln(x) & \text{if } x > 0 \\ 0 & \text{if } x \le 0 \end{cases}$$
satisfy $g_1(f(x)) = \ln(|e^x|) = x$ and $g_2(f(x)) = \ln(e^x) = x$, because $e^x$ is always positive; $g_1$ and $g_2$ agree on the range of $f$ but differ elsewhere. If instead
$$f(x) = \begin{cases} \tan(x) & \text{if } \sin(x) \neq 0 \\ 0 & \text{if } \sin(x) = 0, \end{cases}$$
then $f$ has more than one right inverse: let $g_1(x) = \arctan(x)$ and $g_2(x) = 2\pi + \arctan(x)$; then $f(g_1(x)) = f(g_2(x)) = x$. The first example is injective but not surjective, and the second is surjective but not injective.

5. Let $\mathbb{R}^\infty$ be the set of sequences $(a_1, a_2, a_3, \ldots)$ of real numbers, and let $t$ be the shift operator $t(a_1, a_2, a_3, \ldots) = (0, a_1, a_2, a_3, \ldots)$. One of its left inverses is the reverse shift operator $u(b_1, b_2, b_3, \ldots) = (b_2, b_3, \ldots)$: applying $u$ after $t$ recovers the original sequence, but $t \circ u$ is not the identity.

6. Let $S = \{a, b, c, d\}$ with the binary operation defined by the table
$$\begin{array}{|c|cccc|}\hline *&a&b&c&d \\ \hline a&a&a&a&a \\ b&c&b&d&b \\ c&d&c&b&c \\ d&a&b&c&d \\ \hline \end{array}$$
The value of $x * y$ is found by looking up the row labeled $x$ and the column labeled $y$. Here $d$ is the two-sided identity, and since $b * c = c * a = d * d = d$, $c$ is a right inverse of $b$, $a$ is a right inverse of $c$, and $d$ is its own inverse.

7. Let $R$ be a ring. Every element of $R$ has a two-sided additive inverse ($R$ is a group under addition), but not every element has a multiplicative inverse: in particular, $0_R$ never has one, because $0 \cdot r = r \cdot 0 = 0$ for all $r \in R$. The set of elements of $R$ with two-sided multiplicative inverses is called $R^*$, the group of units of $R$. If every other element has a multiplicative inverse, then $R$ is called a division ring, and if $R$ is also commutative, then it is called a field.

8. In C, the ! operator performs boolean inversion, so !0 is 1 and !1 is 0; true is represented by 1 and false by 0, and in a comparison any non-zero value is treated as true. The bitwise complement ~1 is 0xfffffffe (that is, $-2$ as a 32-bit integer).
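The worked example $a * b = ab + a + b$, with identity $0$ and inverse $-a/(a+1)$, is easy to check numerically. The following sketch is our own illustration (the helper names `op` and `inverse` are not from the text) and uses exact rational arithmetic:

```python
from fractions import Fraction

def op(a, b):
    """The binary operation a*b = ab + a + b on the rationals."""
    return a * b + a + b

def inverse(a):
    """Inverse of a under op: solves a*b + a + b = 0, giving b = -a/(a+1).
    As noted above, a = -1 has no inverse."""
    if a == -1:
        raise ValueError("a = -1 is not invertible under this operation")
    return -a / (a + 1)

e = Fraction(0)                      # two-sided identity: a*0 + a + 0 = a
s1, s2 = Fraction(2), Fraction(5)

# Inverses really invert
assert op(s1, inverse(s1)) == e == op(inverse(s1), s1)

# (s1 * s2)^{-1} = s2^{-1} * s1^{-1}, as claimed for associative operations
assert inverse(op(s1, s2)) == op(inverse(s2), inverse(s1))
```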
\section{Introduction}
\label{sec:intro}
The Galactic extinction map is among the most fundamental data for astronomy and cosmology, since all extragalactic astronomical observations are inevitably conducted through the Galactic foreground and are thus affected by the Galactic interstellar dust. In particular, light in the optical and ultraviolet bands is dimmed by the absorption and scattering of the Galactic dust. Therefore, we cannot determine any fundamental quantities such as intrinsic luminosities or colors of extragalactic objects without proper correction for the dust extinction. This is why the Galactic extinction correction could be one of the most critical sources of systematics.
The most widely used Galactic extinction map was constructed by \citet[][hereafter SFD]{Schlegel;1998} based on the IRAS/ISSA and COBE/DIRBE far-infrared (FIR) emission maps, which are dominated by thermal dust emission. The construction of the SFD map consists of the following procedures:
\begin{enumerate}
\renewcommand{\labelenumi}{(\roman{enumi})}
\item constructing a dust temperature map from the ratio of the 100 {\ensuremath{\mu \mathrm{m}}} flux to the 240 {\ensuremath{\mu \mathrm{m}}} flux measured by DIRBE, which has $1^{\circ}.1$ FWHM spatial resolution,
\item calibrating the ISSA 100 {\ensuremath{\mu \mathrm{m}}} emission map, which has a resolution of $6^{\prime}.1$ FWHM, according to the DIRBE 100 {\ensuremath{\mu \mathrm{m}}} map,
\item correcting the calibrated ISSA 100 {\ensuremath{\mu \mathrm{m}}} map for dust temperature using the above temperature map,
\item converting the ISSA 100 {\ensuremath{\mu \mathrm{m}}} map to color excess, $E(B-V)$, assuming proportionality between the temperature-corrected 100 {\ensuremath{\mu \mathrm{m}}} flux, $I_{100\ensuremath{\mu \mathrm{m}}}$, and the dust column density:
\begin{equation}
\label{eq:SFD-extinction}
E(B-V) = pI_{100\ensuremath{\mu \mathrm{m}}} X(T),
\end{equation}
where $p$ is a constant determined from {Mg${{\rm I\hspace{-.1em}I}}$ }indices of elliptical galaxies as standard color indicators, and $X(T)$ is the correction for the dust temperature.
\end{enumerate}
The SFD map achieved a significant improvement in precision and resolution over the previous extinction maps constructed from {H\,{\scriptsize I}} 21-${\mathrm{cm}}$ emission \citep{Burstein;1978,Burstein;1982}. Nevertheless, it should be noted that the map is not based on any direct measurement of the dust {\it absorption}, but is derived from its {\it emission}. Indeed, one needs several assumptions to convert the FIR emission map into an extinction map. This is why it is important to test the reliability of the SFD map by comparing it with other independent observations.
In high-extinction regions, such as molecular clouds or near the Galactic plane, many earlier studies examined the SFD map using star counts, NIR galaxy colors, and galaxy number counts \citep{Arce;1999a,Arce;1999b,Chen;1999,Cambresy;2001,Cambresy;2005, Dobashi;2005, Yasuda;2007,Rowles;2009}. They often report that the SFD map over-predicts extinction in the high-extinction regions, possibly because of the poor angular resolution of the dust temperature map \citep{Arce;1999a,Arce;1999b} and/or the existence of cold dust components with high emissivity in the FIR. In contrast, its reliability in low-extinction regions has not been carefully examined until recently. The Sloan Digital Sky Survey \citep[SDSS;][]{York;2000}, with its very accurate photometry, makes it possible to investigate the reliability of the SFD map even in those regions. \citet{Fukugita;2004} tested the region of $E(B-V) < 0.15$ in the SFD map on the basis of number counts of the SDSS DR1 \citep{Abazajian;2003} galaxies, and concluded that the SFD map prediction is consistent with the number counts.
More recently, \citet{Schlafly;2010} measured the dust reddening from the displacement of the bluer edge of the SDSS stellar locus, and found that the SFD map over-predicts dust reddening by $\sim$ 14\% in $E(B-V)$. They also found that the extinction curve of the Galactic dust is better described by the \citet{Fitzpatrick;1999} reddening law than by that of \citet{O'Donnell;1994}. These results were also confirmed by an independent method \citep{Schlafly;2011}. \citet[][hereafter PG]{Peek;2010} measured the dust reddening using passively evolving galaxies as color standards and found that the SFD map under-predicts reddening where the dust temperature is low, but by at most 0.045 mag in $E(B-V)$. They provided a correction map for the SFD with $4^{\circ}.5$ resolution.
A systematic test of the SFD map was also performed by \citet{Yahata;2007}. They computed the surface number densities of the SDSS DR4 \citep{Adelman;2006} photometric galaxies as a function of the extinction, and found that the surface number densities of the SDSS galaxies exhibit a clear positive correlation with the SFD extinction in the low-extinction region, $A_r < 0.1$. They proposed that the observed FIR intensity, $I_{100\ensuremath{\mu \mathrm{m}}}$, is partially contaminated by the emission of galaxies along the line of sight. Since SFD compute the extinction assuming that the flux is entirely due to the Galactic dust, regions containing more galaxies, and therefore exhibiting stronger FIR intensity, are assigned a higher extinction. If this {\it over-estimated} extinction is applied, the corrected surface number density of galaxies becomes even higher than the real one, resulting in the positive correlation with the extinction as observed. \citet{Yahata;2007} performed a simple numerical experiment and showed that even a quite small contamination from the FIR emission of galaxies could qualitatively reproduce the observed anomaly.
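To see the mechanism schematically (this toy parametrization is our own illustration, not the actual model of \citet{Yahata;2007}), suppose that the $N$ galaxies in a given pixel contribute a small amount to the measured FIR intensity, so that the assigned extinction becomes
\begin{equation}
A_{r,{\rm SFD}} \simeq A_{r,{\rm true}} + \alpha N
\end{equation}
with a small coefficient $\alpha > 0$. Pixels containing more galaxies are then assigned systematically larger extinctions, and correcting their galaxy magnitudes with the over-estimated $A_{r,{\rm SFD}}$ raises the number of galaxies above any fixed magnitude limit. The extinction-corrected surface density therefore increases with $A_{r,{\rm SFD}}$ even if the true galaxy distribution is statistically independent of the Galactic dust.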
Indeed, the expected FIR emission was unambiguously detected by the subsequent stacking image analysis of SDSS galaxies \citep{Kashiwagi;2013}. The main purpose of the present paper is to quantitatively reproduce the observed anomaly of the surface number density of SDSS galaxies on the SFD map with an analytic model of the contamination due to their FIR emission.
The rest of the paper is organized as follows: after a brief summary of the SDSS DR7 data \citep{Abazajian;2009} that we use here (\S \ref{sec:DR7}), we repeat the surface number density analysis of galaxies introduced by \citet{Yahata;2007}. Section \ref{sec:simulation} performs mock numerical simulations so as to predict the surface number densities of galaxies by taking account of the effect of their FIR contamination. We also develop an analytic model, and make sure that it reproduces well the result of the mock simulation in \S \ref{sec:analytic}. The detailed description of our analytic model is presented in Appendix \ref{app:analytic-detail}. We perform the fit to the observed anomaly in the SFD map and find that the mean of the $100\ensuremath{\mu \mathrm{m}}$ to $r$-band luminosity ratio, $y=(\nu L)_{100\ensuremath{\mu \mathrm{m}}}/(\nu L)_r$ per SDSS galaxy, is required to be $y_{\rm avg} > 4$. Section \ref{sec:discussion} discusses the effect of the spatial clustering of galaxies, which is neglected both in the mock simulations and in the analytic model. We also compare the optimal value of the $100\ensuremath{\mu \mathrm{m}}$ to $r$-band flux ratio with that independently derived from the stacking image analysis by \citet{Kashiwagi;2013}. A similar analysis for the SFD map corrected according to \citet{Peek;2010} is also briefly mentioned. Finally, \S \ref{sec:conclusions} is devoted to the summary and conclusions of the present paper.
\section{The Sloan Digital Sky Survey DR7} The SDSS DR7 photometric observation covers 11663 $\rm{deg}^2$ of sky area, and collects 357 million objects with photometry in five passbands; $u$, $g$, $r$, $i$, and $z$ \citep[For more details of the photometric data, see][]{Gunn;1998,Gunn;2006,Fukugita;1996, Hogg;2001,Ivezic;2004,Smith;2002,Tucker;2006,Padmanabhan;2008,Pier;2003}. The SDSS photometric data are corrected for the Galactic extinction according to the SFD map \citep{Stoughton;2002}. They adopt the conversion factors from color excess to the dust extinction in each passband: \begin{equation} k_x \equiv \frac{A_{x,{\mathrm{SFD}}}}{{E(B-V)}}, \label{eq:k_x} \end{equation} where $x=u$, $g$, $r$, $i$, and $z$ (Table 6 of SFD). These factors are computed assuming the spectral energy density of an elliptical galaxy, and the reddening law of \citet{O'Donnell;1994} combined with the extinction curve parameter: \begin{equation} R_V\equiv \frac{A_V}{{E(B-V)}}=3.1 . \label{eq:R_V} \end{equation} The spatial distribution of stellar objects in the SDSS catalogue is likely to be correlated with the dust distribution. Therefore the reliable star-galaxy separation is critical for our present purpose of testing the SFD map from the distribution of extragalactic objects. We carefully construct a reliable photometric galaxy sample as follows. \subsection{Sky area selection} We choose the regions of SDSS DR7 survey area labeled ``PRIMARY''. Indeed we found that the ``PRIMARY'' regions in the southern Galactic hemisphere are slightly different from the area where the objects are actually located. We are not able to understand why, and thus decide to use the regions in the northern Galactic hemisphere alone to avoid possible problems. To ensure the quality of good photometric data, we exclude masked regions. The SDSS pipeline defines the five types of masked regions according to the observational conditions. 
We remove the four types of masked regions labeled ``BLEEDING'', ``BRIGHT$\_$STAR'', ``TRAIL'', and ``HOLE'' from our analysis. The masked regions labeled ``SEEING'' are not removed, since relatively bad seeing does not seriously affect the photometry of the relatively bright galaxies that we use in the present analysis. The total area of the removed masked regions is about $340~\rm{deg}^2$, which comprises roughly $4.5\%$ of the entire ``PRIMARY'' regions in the northern Galactic hemisphere.
\subsection{Removing false objects}
\label{subsec:flag-selection}
We remove false objects according to photometry processing flags. We first remove fast-moving objects, which are likely Solar System objects. We also discard objects that have bad photometry or were observed under poor conditions. A fraction of objects suffers from deblending problems, {\em i.e.}, the decomposition of images consisting of superimposed objects is unreliable or has failed. We remove such objects as well.
\subsection{Magnitude range of galaxies}
The SDSS catalogue defines the type of an object according to the difference between its {\it cmodel} and PSF magnitudes, where the former magnitude is computed from the composite flux of the linear combination of the best-fit exponential and de Vaucouleurs profiles. Since the reliability of the star-galaxy separation depends on the model magnitude {\it before} extinction correction, we must carefully choose the magnitude ranges of our sample for the analysis. In the $r$-band, the star-galaxy separation is known to be reliable for galaxies brighter than $\sim$21 mag \citep{Yasuda;2001, Stoughton;2002}, while the saturation of stellar images typically occurs for objects brighter than 15 mag in the $r$-band. Therefore, we conservatively choose the magnitude range as $17.5 < m_r < 19.4$, where $m_r$ denotes the observed (extinction-uncorrected) magnitude in the $r$-band. We adopt the same upper/lower limits for the extinction-corrected magnitudes.
Figure \ref{fig:magnitude-distribution} shows the differential number counts of SDSS galaxies as a function of $m_x$ for each bandpass. The faint-end threshold of our $r$-band selected sample, $m_r=19.4$, is $\sim 2$ mag brighter than the turnover of the differential number count. We similarly determine the faint-end limit of the magnitude range for each bandpass as 2 mag brighter than the turnover magnitude. We confirmed that shifting the upper or lower limits by $\pm 1.0$ mag does not significantly change our conclusions below. We summarize the magnitude range and the number of galaxies with and without the photometry flag selection for each bandpass in Table \ref{tab:galaxy-number}. \begin{figure*} \begin{center} \includegraphics[width=0.66\textwidth]{fig1.eps} \end{center} \figcaption{ Differential number counts of the photometric galaxy sample as functions of extinction-uncorrected magnitudes for each band (solid lines). The vertical dashed lines indicate the magnitude ranges used in the analysis. \label{fig:magnitude-distribution}} \end{figure*} \begin{table*} \caption{The magnitude range and the number of SDSS galaxies for each bandpass. The third column shows the number of all SDSS galaxies within the magnitude range. The fourth column shows the number of galaxies after the photometry flag selection described in \S \ref{subsec:flag-selection}, which are used in our measurement in \S \ref{sec:DR7}. 
The numbers of galaxies are counted without extinction correction.} \label{tab:galaxy-number} \begin{center} \begin{tabular}{ccccc} \hline \hline bandpass & magnitude range & \# of galaxies & \# of galaxies & rejection rate \\ & & (w/o flag selection) & (w/ flag selection) & \\ \hline $u$ & $18.3 < m_u < 20.2$ & 1200586 & 633319 & 0.472 \\ $g$ & $18.0 < m_g < 20.4$ & 4891030 & 3428064 & 0.299 \\ $r$ & $17.5 < m_r < 19.4$ & 4347881 & 3205638 & 0.263 \\ $i$ & $17.0 < m_i < 18.9$ & 4450724 & 3140684 & 0.295 \\ $z$ & $16.8 < m_z < 18.3$ & 2984104 & 2136639 & 0.284 \\ \hline \end{tabular} \end{center} \end{table*} \section{Surface number densities of SDSS DR7 photometric galaxies} \label{sec:DR7} \subsection{Methodology} In this section, we extend the previous analysis of \citet{Yahata;2007} and re-examine the anomaly in the surface number density of galaxies using the SDSS DR7 photometric galaxies instead of DR4. The left panel of Figure \ref{fig:survey-region} plots the sky area of the SDSS DR7 that is employed in our analysis, where the color scale indicates the value of the $r$-band extinction provided by SFD, $A_{r,{\mathrm{SFD}}}$. Since most of the increased survey area of DR7 relative to DR4 corresponds to regions with $A_{r,\mathrm{SFD}}<0.1$ mag, we can study the anomaly discovered by \citet{Yahata;2007} in such low-extinction regions with higher statistical significance. \begin{figure*} \begin{center} \includegraphics[height=0.33\textwidth]{fig2a.eps} \hspace{20pt} \includegraphics[height=0.33\textwidth]{fig2b.eps} \end{center} \figcaption{Photometric survey area of the SDSS DR7 in Galactic coordinates ({\it Left}), and the cumulative distribution of the area as a function of $A_{r,\rm{SFD}}$ ({\it Right}). The left panel is color-coded according to the value of $A_{r,\rm{SFD}}$. The thick lines in both panels indicate $A_{r,\rm{SFD}}=0.1$ mag, corresponding to 74\% of the entire survey. 
The thin lines correspond to the bins of the 84 subregions, color-coded in the same way as the left panel. \label{fig:survey-region}} \end{figure*} We first divide the entire sky area of the SDSS DR7 (right panel of Fig.\ref{fig:survey-region}) into 84 subregions according to the value of $A_{r,{\mathrm{SFD}}}$. Each subregion is chosen so as to have approximately the same area ($\sim100\rm{deg}^2$), and consists of spatially separated (disjoint) small patches over the sky. The right panel of Figure \ref{fig:survey-region} shows the cumulative area fraction of the sky as a function of $A_{r,{\mathrm{SFD}}}$. Note that approximately 74\% of the entire sky corresponds to $A_{r,{\mathrm{SFD}}}<0.1$ mag, in which we are primarily interested. Next we count the number of galaxies within the specified range of $r$-band magnitude in each subregion (\S 2.3), and obtain their surface number densities as a function of the extinction. Since the spatial distribution of galaxies is expected to be homogeneous when averaged over a sufficiently large area, the surface number densities of galaxies should be constant, and should not correlate with the extinction. In other words, any systematic trend with respect to $A_{r,{\mathrm{SFD}}}$ should indicate a problem with the SFD map. \subsection{Results} \label{subsec:results} Figure \ref{fig:photometric-galaxy} shows the surface number densities of galaxies, $S_{\rm{gal}}$, in the 84 subregions for the five passbands. The red filled circles indicate $S_{\rm{gal}}$ uncorrected for dust extinction, while the blue filled triangles are the results after extinction correction using the SFD map. Note that the surface number densities of galaxies in different passbands are plotted against their corresponding $r$-band extinction, $A_{r,{\mathrm{SFD}}}$. 
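The subregion construction just described can be sketched as follows. This is a pixelized toy version assuming equal-area pixels; the real subregions are built from disjoint patches of the survey footprint:

```python
import numpy as np

def surface_densities(a_pix, a_gal, pix_area=1.0e-3, n_bins=4):
    """Split equal-area sky pixels into `n_bins` subregions of
    increasing extinction and measure the galaxy surface density
    in each (a toy version of the 84-subregion scheme).

    a_pix    : extinction value of each (equal-area) map pixel
    a_gal    : extinction at each galaxy position
    pix_area : solid angle of one pixel [deg^2]
    """
    a_pix = np.asarray(a_pix, dtype=float)
    a_gal = np.asarray(a_gal, dtype=float)
    # bin edges chosen so that every bin carries the same area
    edges = np.quantile(a_pix, np.linspace(0.0, 1.0, n_bins + 1))
    edges[0] -= 1e-9                      # make the first bin inclusive
    n_gal, _ = np.histogram(a_gal, bins=edges)
    n_pix, _ = np.histogram(a_pix, bins=edges)
    return n_gal / (n_pix * pix_area)     # surface density [deg^-2]
```

For a statistically homogeneous sample the returned densities are constant within noise; any systematic trend with extinction would signal a problem with the map.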
Following \citet{Yahata;2007} again, we estimate the statistical error of the surface number density, $\sigma_S^2$, as follows: \begin{equation} \frac{\sigma_S^2}{S^2} = \frac{1}{N} + \frac{1}{\Omega^2} \int_{\Omega} \int_{\Omega} w(\theta_{12}) d\Omega_1 d\Omega_2, \label{eq:error} \end{equation} where $N$ and $S$ denote the number and the surface number density of the galaxies in the subregion of area $\Omega$, and $w(\theta_{12})$ is the angular correlation function of galaxies with $\theta_{12}$ being the angular separation between two solid angle elements, $d\Omega_1$ and $d\Omega_2$. The first term in equation (\ref{eq:error}) denotes the Poisson noise, while the second term comes from galaxy clustering. For definiteness, we adopt the double power-law model \citep{Scranton;2002, Fukugita;2004} for $w(\theta_{12})$: \begin{equation} w(\theta_{12}) = \cases{ 0.008(\theta_{12}/\rm{deg})^{-0.75} & $(\theta_{12} \leq 1 \rm{deg})$ \cr 0.008(\theta_{12}/\rm{deg})^{-2.1} & $(\theta_{12} > 1 \rm{deg})$ \cr }. \label{eq:angular-2PCF} \end{equation} Strictly speaking, the integration in the second term of equation (\ref{eq:error}) should be performed over the complex and disjoint shape of each subregion. For simplicity, however, we substitute the integration over a circular region whose area is equal to that of the actual subregion. Although this approximation may overestimate the true error, it does not affect our conclusions. For the typical values of $\Omega\sim 100\rm{deg}^2$ and $S\sim480\rm{deg}^{-2}$, we find that the second term is two orders of magnitude larger than the first Poisson-noise term. Figure \ref{fig:photometric-galaxy} suggests that the SFD correction works well in relatively high-extinction regions, {\it i.e.,} $A_{r,{\mathrm{SFD}}}>0.1$; before correction for extinction, the surface number density of galaxies, $S_{\rm{gal}}$, monotonically decreases with $A_{r,{\mathrm{SFD}}}$, as naturally expected. 
It becomes roughly constant within the statistical error after extinction correction. In low-extinction regions ($A_{r,{\mathrm{SFD}}} < 0.1$), however, the uncorrected $S_{\rm{gal}}$ {\it increases} with $A_{r,{\mathrm{SFD}}}$, which is opposite to the behavior expected from the Galactic dust extinction. The anomalous positive correlation between the surface number densities and the extinction is even more enhanced after the extinction correction. Apart from slight quantitative differences, these results are consistent with the trend discovered for the SDSS DR4 by \citet{Yahata;2007}, especially the positive correlations at $A_{r,\rm SFD}<0.1$. \citet{Yahata;2007} argued that the trend is due to the presence of the FIR emission of galaxies, which contaminates the IRAS 100 {\ensuremath{\mu \mathrm{m}}} flux that is conventionally ascribed entirely to the Galactic dust. Indeed their hypothesis is now directly confirmed by the stacking analysis of \citet{Kashiwagi;2013}, who detected the unambiguous signature of FIR emission from SDSS galaxies in the SFD map. Our next task, therefore, is to ask whether the FIR emission of galaxies detected by \citet{Kashiwagi;2013} properly accounts for the anomaly described here. In what follows, we consider the surface number density of the galaxies measured in $r$-band alone, simply because it is the central SDSS passband; the result is equally applicable to the other passbands. \begin{figure*} \begin{center} \includegraphics[width=0.8\textwidth]{fig3.eps} \end{center} \figcaption{Surface number densities of the SDSS DR7 photometric galaxy sample corresponding to Figure \ref{fig:magnitude-distribution}, against $A_{r,\mathrm{SFD}}$. The circles/triangles indicate the surface number densities calculated with extinction-uncorrected/corrected magnitudes, respectively. The statistical errors are calculated from equation (\ref{eq:error}). 
The horizontal axis is the mean of $A_{r,{\mathrm{SFD}}}$ over the galaxies in each subregion. \label{fig:photometric-galaxy}} \end{figure*} \section{Mock numerical simulation to compute the FIR contamination effect of galaxies on the extinction map} \label{sec:simulation} In this section, we present the results of mock numerical simulations that take into account the effect of the FIR emission of mock galaxies in a fairly straightforward manner. First we randomly place mock galaxies over the SDSS DR7 sky area so that they have the same number density and the same $r$-band magnitude distribution as the SDSS DR7 sample. Next, we assign a $100{\ensuremath{\mu \mathrm{m}}}$ flux to each mock galaxy according to the probability distribution function discussed in \S \ref{subsec:IRAS_SDSS}. We add the $100{\ensuremath{\mu \mathrm{m}}}$ fluxes of the mock galaxies to the raw SFD map, which is assumed to be {\it not contaminated} by the FIR emission of mock galaxies, and construct a {\it contaminated} mock extinction map. Finally, we compute the surface number densities of mock galaxies exactly as we did for the real galaxy sample. Further details are described below. \subsection{Empirical correlation between 100$\mu$m and r-band luminosities of PSCz/SDSS galaxies} \label{subsec:IRAS_SDSS} In order to assign 100{\ensuremath{\mu \mathrm{m}}} emission to each mock galaxy with a given $r$-band magnitude, we need an empirical relation between the two luminosities, $L_{100\ensuremath{\mu \mathrm{m}}}$ and $L_{\rm r}$. For that purpose, \citet{Yahata;thesis} created a sample of galaxies detected both in SDSS and in PSCz \citep[IRAS Point Source Catalog Redshift Survey;][]{Saunders;2000}. To be more specific, he searched for SDSS galaxies within 2 arcmin of the position of each PSCz galaxy, and selected the brightest one as the optical counterpart. 
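The counterpart search just described can be sketched as follows; the coordinate arrays and function names are hypothetical, and the actual matching was of course performed on the full catalogues:

```python
import numpy as np

def angsep_deg(ra1, dec1, ra2, dec2):
    """Angular separation [deg] between points on the sphere."""
    r1, d1, r2, d2 = map(np.radians, (ra1, dec1, ra2, dec2))
    cos_s = (np.sin(d1) * np.sin(d2)
             + np.cos(d1) * np.cos(d2) * np.cos(r1 - r2))
    return np.degrees(np.arccos(np.clip(cos_s, -1.0, 1.0)))

def match_counterpart(ra_psc, dec_psc, ra_sdss, dec_sdss, m_r,
                      radius_deg=2.0 / 60.0):
    """Index of the brightest SDSS galaxy within `radius_deg`
    (2 arcmin) of a PSCz source, or None if there is none."""
    sep = angsep_deg(ra_psc, dec_psc,
                     np.asarray(ra_sdss), np.asarray(dec_sdss))
    inside = np.where(sep < radius_deg)[0]
    if inside.size == 0:
        return None
    # brightest counterpart = smallest magnitude
    return int(inside[np.argmin(np.asarray(m_r)[inside])])
```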
Approximately 95\% of the PSCz galaxies within the SDSS survey region have SDSS counterparts, and the resulting sample consists of 3304 galaxies in total. Note, however, that the sample is biased towards FIR-luminous galaxies, since the SDSS optical magnitude limit is significantly deeper than the flux limit of the PSCz galaxies. \begin{figure*} \begin{center} \includegraphics[height=0.38\textwidth]{fig4a.eps} \hspace{5mm} \includegraphics[height=0.38\textwidth]{fig4b.eps} \end{center} \figcaption{ {\em Left panel}: relation between $\nu_{100\mu \rm{m}} L_{100\mu\rm{m}}$ and $\nu_r L_r$ for the PSCz/SDSS overlapped galaxies. {\em Right panel}: same as the left panel, but for the mock galaxies generated based on the $r$-band luminosity function (equation \ref{eq:LFopt}), the log-normal PDF of $y$ adopting the parameters in equation (\ref{eq:y-value-entire}), and the flux cut $f_{\rm 100\ensuremath{\mu \mathrm{m}}}<1.0{\rm Jy}$. \label{fig:nuL-distribution}} \end{figure*} The left panel of Figure \ref{fig:nuL-distribution} shows the relation between $\nu_{100\mu\rm{m}}L_{100\mu\rm{m}}$ (PSCz) and $\nu_r L_r$ (SDSS) for the PSCz/SDSS overlapped sample. For the K-correction, we use the ``K-corrections calculator'' service \citep{Chilingarian;2010} for $r$-band, and extrapolate the FIR flux at 100$\ensuremath{\mu \mathrm{m}}$ from second-order polynomials using the 25 and 60 {\ensuremath{\mu \mathrm{m}}} fluxes \citep{Takeuchi;2003}. \begin{figure*} \begin{center} \includegraphics[height=0.4\textwidth]{fig5.eps} \end{center} \figcaption{The probability distribution function of $L_{100\mu \rm{m}}/L_r$; the PSCz/SDSS overlapped sample (histogram), the best-fit log-normal function (black solid curve), flux-limited mock galaxies (red dashed histogram), and the best-fit log-normal function estimated for the entire SDSS galaxies (blue dot-dashed curve). 
\label{fig:histoLL}} \end{figure*} The resulting scatter plot indicates that $L_{100\ensuremath{\mu \mathrm{m}}}$ and $L_r$ are approximately proportional, albeit with considerable scatter. So we compute the probability distribution function (PDF) of the luminosity ratio, \begin{equation} y \equiv \frac{\nu_{100\mu\rm{m}}L_{100\mu \rm{m}}}{\nu_r L_r}, \end{equation} for the sample (solid histogram in Figure \ref{fig:histoLL}), and find that the PDF is reasonably well described by a log-normal distribution: \begin{equation} P_{\mathrm{ratio}}(y)dy = \frac{1}{y \ln 10 \sqrt{2\pi \sigma^2}} \exp \left[ -\frac{(\log_{10}y-\mu)^2}{2\sigma^2}\right] dy, \label{eq:log-normal} \end{equation} where $\mu = 0.393$ and $\sigma =0.428$ are the mean and dispersion of $\log_{10} y$ (solid curve in Figure \ref{fig:histoLL}). Since the PSCz/SDSS overlapped sample is biased in the sense that its selection favors FIR-luminous galaxies, the above log-normal distribution is not necessarily applicable to the entire SDSS galaxies. Therefore we assume that the FIR-optical luminosity ratio of the {\it entire} SDSS galaxies also follows a log-normal distribution, and estimate the values of $\mu$ and $\sigma$ for the entire sample by taking into account the PSCz detection limit. Although the flux limit of PSCz is defined through $f_{60{\ensuremath{\mu \mathrm{m}}}}>0.6\rm{Jy}$, we roughly estimate the corresponding effective flux limit at 100{\ensuremath{\mu \mathrm{m}}} to be $f_{100{\ensuremath{\mu \mathrm{m}}}}>1.0\rm{Jy}$ from the distribution of $f_{100{\ensuremath{\mu \mathrm{m}}}}$ for the PSCz/SDSS galaxies (left panel of Figure \ref{fig:nuL-distribution}). 
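Sampling from the log-normal PDF of equation (\ref{eq:log-normal}) is straightforward, since $\log_{10}y$ is simply Gaussian; the following sketch draws ratios $y$ with the best-fit $\mu$ and $\sigma$ of the PSCz/SDSS overlapped sample and recovers the input parameters:

```python
import numpy as np

def sample_y(mu, sigma, size, seed=None):
    """Draw luminosity ratios y = nu_100um L_100um / (nu_r L_r)
    whose log10 is Gaussian with mean `mu` and dispersion `sigma`,
    i.e. the log-normal PDF P_ratio(y) of the text."""
    rng = np.random.default_rng(seed)
    return 10.0 ** rng.normal(mu, sigma, size)

# parameters measured for the PSCz/SDSS overlapped sample
y = sample_y(0.393, 0.428, 200_000, seed=42)
```

With $2\times10^5$ draws, the sample mean and dispersion of $\log_{10}y$ reproduce the input $(\mu,\sigma)$ to better than a percent.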
Armed with these assumptions, the number of the galaxies that are detected by this flux cut and have the luminosity between $L_r\sim L_r+dL_r$ and $L_{100{\ensuremath{\mu \mathrm{m}}}}\sim L_{100{\ensuremath{\mu \mathrm{m}}}}+dL_{100{\ensuremath{\mu \mathrm{m}}}}$ is calculated as, \begin{eqnarray} N^{\rm{obs}}&(&L_r, L_{100{\ensuremath{\mu \mathrm{m}}}})dL_r dL_{100{\ensuremath{\mu \mathrm{m}}}} \cr &=& \frac{\Omega_s}{4\pi} \bigg[ \int^{\infty}_0 dz \frac{dV(<z)}{dz} \Theta(L_{100{\ensuremath{\mu \mathrm{m}}}},z) \cr &&\times \Phi(L_r) P(L_{100{\ensuremath{\mu \mathrm{m}}}}|L_r;\mu,\sigma) \bigg] dL_r dL_{100{\ensuremath{\mu \mathrm{m}}}}, \label{eq:Nobs} \end{eqnarray} where $\Omega_s$ is the solid angle of the PSCz/SDSS overlapped survey area, and $V(<z)$ denotes the co-moving volume up to redshift $z$. The step function $\Theta(L_{100{\ensuremath{\mu \mathrm{m}}}},z)$ describes the flux cut of PSCz: \begin{equation} \Theta(L_{100{\ensuremath{\mu \mathrm{m}}}},z) = \cases{ 1 & $(L_{100{\ensuremath{\mu \mathrm{m}}}}/4\pi d^2_L(z)>1.0{\rm Jy}$) \cr 0 & $(\rm{else})$ \cr }, \label{eq:ThetaL100} \end{equation} where $d_L(z)$ is the luminosity distance at redshift $z$. We adopt the double-Schechter luminosity function in $r$-band measured from the SDSS DR2 data \citep{Blanton;2005} for $\Phi(L_r)$: \begin{eqnarray} \Phi (L_r) dL_r &=& \frac{dL_r}{L_{r,\ast}} \exp \left( -\frac{L_r}{L_{r,\ast}}\right) \cr &\times& \left[ \phi_{\ast,1} \left(\frac{L_r}{L_{r,\ast}}\right)^{\alpha_1} + \phi_{\ast,2}\left(\frac{L_r}{L_{r,\ast}}\right)^{\alpha_2}\right]. 
\label{eq:LFopt} \end{eqnarray} The conditional probability density function of $L_{100{\ensuremath{\mu \mathrm{m}}}}$ for given $L_r$ is assumed to be log-normal: \begin{eqnarray} &&P(L_{100{\ensuremath{\mu \mathrm{m}}}}|L_r;\mu,\sigma)dL_{100{\ensuremath{\mu \mathrm{m}}}} =\frac{1}{\ln 10\sqrt{2\pi \sigma^2}} \cr &\times& \exp\left(-\frac{[\log_{10} (\nu_{100\ensuremath{\mu \mathrm{m}}}L_{100{\ensuremath{\mu \mathrm{m}}}}/\nu_r L_r)-\mu]^2}{2\sigma^2} \right) \frac{dL_{100{\ensuremath{\mu \mathrm{m}}}}}{L_{100{\ensuremath{\mu \mathrm{m}}}}} \cr &=& yP_{\rm ratio}(y;\mu,\sigma)\frac{dL_{100\ensuremath{\mu \mathrm{m}}}}{L_{100\ensuremath{\mu \mathrm{m}}}}. \label{eq:conditional} \end{eqnarray} We use equation (\ref{eq:Nobs}) to find the best-fit $\mu$ and $\sigma$ in equation (\ref{eq:conditional}) for the {\it entire} SDSS galaxies that reproduce the observed distribution of the PSCz/SDSS overlapped sample. The resulting values are $\mu=-0.662$ and $\sigma=0.559$, as plotted by the blue dot-dashed line in Figure \ref{fig:histoLL}. This result indicates that the mean value of $y$ for the PSCz/SDSS overlapped sample is biased by an order of magnitude relative to that of the entire galaxy population; see equations (\ref{eq:y-value-pscz}) and (\ref{eq:y-value-entire}). Adopting the best-fit log-normal distribution, the luminosity function at $100{\ensuremath{\mu \mathrm{m}}}$ is calculated as \begin{equation} \label{eq:LF_FIR} \Phi(L_{100{\ensuremath{\mu \mathrm{m}}}}) = \int^{\infty}_0 dL_r\Phi(L_r) P(L_{100{\ensuremath{\mu \mathrm{m}}}}|L_r;\mu,\sigma). \end{equation} As plotted in Figure \ref{fig:LF}, the above best-fit model indeed agrees well with the luminosity function independently measured from the PSCz data \citep{Serjeant;2005}. \begin{figure*} \begin{center} \includegraphics[height=0.32\textwidth]{fig6.eps} \end{center} \figcaption{Luminosity function (LF) of galaxies at $100\ensuremath{\mu \mathrm{m}}$ and $r$-band. 
The solid line is the 100$\ensuremath{\mu \mathrm{m}}$ LF directly measured from the PSCz data \citep{Serjeant;2005}, while the dashed line shows our estimate of the 100$\ensuremath{\mu \mathrm{m}}$ LF based on equation (\ref{eq:LF_FIR}) with the best-fit $\mu$, $\sigma$ and the $r$-band LF \citep[blue dotted line]{Blanton;2005}. \label{fig:LF}} \end{figure*} To check whether the above FIR log-normal PDF combined with the FIR flux cut reproduces the left panel of Figure \ref{fig:nuL-distribution}, we generate mock galaxies and assign $z$, $L_r$, and $L_{100{\ensuremath{\mu \mathrm{m}}}}$ following the redshift distribution $dV(<z)$, and equations (\ref{eq:LFopt}) and (\ref{eq:conditional}). Then we exclude those mock galaxies with $f_{100{\ensuremath{\mu \mathrm{m}}}}<1.0\rm{Jy}$ to mimic the flux cut. The right panel of Figure \ref{fig:nuL-distribution} and the dashed histogram in Figure \ref{fig:histoLL} show the resulting luminosity distribution and the PDF of $y$ for those mock galaxies. Although not perfect, the mock galaxies reproduce the observed distribution reasonably well. We suspect that the discrepancy between the observed data and the mock simulation is mainly due to the limitation of our log-normal approximation, which neglects the dependence of the ratio $L_{100{\ensuremath{\mu \mathrm{m}}}}/L_{60{\ensuremath{\mu \mathrm{m}}}}$ on $L_{100{\ensuremath{\mu \mathrm{m}}}}$. For simplicity, however, we adopt the best-fit log-normal distribution as the fiducial model of the 100 {\ensuremath{\mu \mathrm{m}}} flux of the SDSS galaxies in what follows. In doing so, we parametrize the distribution by $y_{\rm avg}$ and $y_{\rm rms}$ instead of $\mu$ and $\sigma$: \begin{eqnarray} y_{\rm avg} &=& e^{\mu\ln 10+ (\sigma \ln 10)^2/2}, \\ y_{\rm rms} &=& e^{\mu \ln 10 +(\sigma \ln 10)^2/2} \sqrt{e^{(\sigma \ln 10)^2}-1} , \end{eqnarray} since the anomaly is basically determined by $y_{\rm avg}$, as will be shown in Figure \ref{fig:various-parameters} below. 
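The conversion from $(\mu,\sigma)$ to $(y_{\rm avg},y_{\rm rms})$ in the two equations above can be checked numerically; the small residuals with respect to the quoted values are consistent with rounding of $\mu$ and $\sigma$:

```python
import numpy as np

LN10 = np.log(10.0)

def lognormal_moments(mu, sigma):
    """Convert (mu, sigma) of log10(y) into the linear mean and rms
    of y for a log-normal distribution (the two equations in the text)."""
    m = mu * LN10
    s = sigma * LN10
    y_avg = np.exp(m + 0.5 * s ** 2)
    y_rms = y_avg * np.sqrt(np.exp(s ** 2) - 1.0)
    return y_avg, y_rms

# PSCz/SDSS overlapped sample: close to the quoted (4.015, 5.143)
print(lognormal_moments(0.393, 0.428))
# entire SDSS sample: close to the quoted (0.499, 1.026)
print(lognormal_moments(-0.662, 0.559))
```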
For definiteness, the PSCz/SDSS overlapped sample is characterized by \begin{equation} \mu=0.393, \quad \sigma=0.428, \quad y_{\rm avg}=4.015, \quad y_{\rm rms}=5.143, \label{eq:y-value-pscz} \end{equation} while the entire SDSS sample is estimated to have \begin{equation} \mu=-0.662, \quad \sigma=0.559, \quad y_{\rm avg}=0.499, \quad y_{\rm rms}=1.026. \label{eq:y-value-entire} \end{equation} \subsection{Simulations \label{subsec:poisson}} Now we are in a position to present our mock simulations that exhibit the effect of the FIR contamination of galaxies. In this subsection, we neglect the spatial clustering of galaxies and consider the case of Poisson distributed mock galaxies. The effect of the spatial clustering of galaxies will be discussed separately in \S \ref{subsec:N-body}. Our mock simulations are performed as follows. \begin{enumerate} \item We distribute random particles as mock galaxies over the SDSS DR7 survey area. The number of the mock galaxies is adjusted so as to approximately match that of the SDSS photometric galaxies. \item We assign an intrinsic apparent magnitude in $r$-band to each mock galaxy so that the resulting magnitude distribution reproduces that of the SDSS galaxies (Figure \ref{fig:magnitude-distribution}). \item We assign a $100\mu\mathrm{m}$ flux to each mock galaxy adopting the log-normal PDF for the $100\ensuremath{\mu \mathrm{m}}$-to-$r$-band flux ratio, $y$. The PDF is characterized by $y_{\rm avg}$ and $y_{\rm rms}$. \item We convolve the $100{\ensuremath{\mu \mathrm{m}}}$ fluxes of the mock galaxies with a $\rm{FWHM} = 5'.2$ Gaussian filter, so as to mimic the SFD resolution, $\rm{FWHM} = 6'.1$ (see also Appendix \ref{app:psf}). Those mock galaxies with 100 {\ensuremath{\mu \mathrm{m}}} flux larger than $1.0\rm{Jy}$ are excluded, since SFD individually subtracted the 100{\ensuremath{\mu \mathrm{m}}} emission of such bright galaxies. 
We include only the contribution of the mock galaxies with $17.5<m_{\rm r}<19.4$ so as to be consistent with our analysis in \S \ref{subsec:results}. We note, however, that in reality the FIR contamination would likely be contributed also by galaxies outside this magnitude range (not only SDSS galaxies but also galaxies that do not satisfy the SDSS selection criteria). Therefore the current mock simulation should be interpreted as quantifying the extent to which the SDSS galaxies in that magnitude range alone account for the observed anomaly in their surface number density. \item We superimpose the 100{\ensuremath{\mu \mathrm{m}}} intensity of the mock galaxies on a true extinction map and construct a contaminated extinction map after subtracting the background ({\it i.e.,} mean) level of the mock galaxy emission. In what follows, the extinction contaminated by the mock galaxies is denoted as $A_r^{\prime}$. \item Finally, we calculate $S_{\rm{mock}}$, the surface number densities of mock galaxies whose corrected/uncorrected magnitudes lie between 17.5 and 19.4 mag, by repeating the same procedure discussed in \S\ref{sec:DR7}, but using $A_r^{\prime}$ instead. \end{enumerate} Note that our mock analysis uses the SFD map as the true extinction map, not contaminated by the FIR emission of {\it mock} galaxies. Of course, the SFD map is contaminated by FIR emission from {\it real} galaxies, and thus cannot be regarded as a {\em true} extinction map for them. Nevertheless the contamination of real galaxies should not be correlated at all with the mock galaxies. This is why the SFD map can be used as the true extinction map in the current simulation. The {\it observed} magnitude of each mock galaxy, {\it i.e.,} the magnitude affected by the Galactic dust absorption alone, is calculated from the true extinction map, in the present case the SFD map, but the extinction correction is done using $A^{\prime}_r$. 
Note that the difference between the true map and the contaminated map affects the value of extinction in regions where mock galaxies are located. Therefore, the surface number densities of mock galaxies {\it before} the extinction correction are also influenced by the FIR contamination. Figure \ref{fig:fiducial-parameters} shows the surface number densities of mock galaxies as a function of $A_r^{\prime}$. Here we adopt $y_{\rm avg}=0.499$ and $y_{\rm rms}=1.026$, i.e., equation (\ref{eq:y-value-entire}), which is estimated for the entire SDSS galaxy sample. The quoted error bars in the panel reflect the Poisson noise alone. The results exhibit a similar, but significantly weaker, correlation with $A_{r,{\rm SFD}}$ at $A_{r,{\rm SFD}}<0.1$ compared to the observed one (Fig.\ref{fig:photometric-galaxy}), especially for the extinction-uncorrected surface densities. \begin{figure*} \begin{center} \includegraphics[width=0.33\textwidth]{fig7.eps} \end{center} \figcaption{The surface number densities of the randomly distributed mock galaxies with assigned magnitudes of $17.5 < m_r < 19.4$. The symbols are the same as in Figure \ref{fig:photometric-galaxy}. The values of $y_{\rm avg}$ and $y_{\rm rms}$ estimated for the entire SDSS galaxies are adopted, instead of those for the PSCz/SDSS overlapped sample. The error bars reflect the Poisson noise alone. \label{fig:fiducial-parameters}} \end{figure*} Figure \ref{fig:distribution} helps us understand the origin of the anomaly intuitively. (In this plot, we have adopted $y_{\rm avg}=10$ and $y_{\rm rms}=5$ just to clearly visualize the trends discussed in the following.) The dashed line indicates the differential distribution of the sky area as a function of $A_{r,\mathrm{SFD}}$, $\Omega(A_{r,\mathrm{SFD}})$, which corresponds to the derivative of the right panel of Figure \ref{fig:survey-region}. The black solid line shows the same distribution, but as a function of $A_r^{\prime}$. 
The resulting $\Omega^{\prime}(A^{\prime}_r)$ slightly differs from $\Omega(A_{r,\mathrm{SFD}})$ due to the FIR contamination of mock galaxies. The red and blue solid lines in Figure \ref{fig:distribution} show the differential number counts of galaxies, $N^{\prime}_{\rm gal,uncorr}$ and $N^{\prime}_{\rm gal,corr}$, as a function of $A_r^{\prime}$, calculated from magnitudes uncorrected/corrected for extinction with $A'_r$. The shapes of $N^{\prime}_{\rm gal,uncorr}$ and $N^{\prime}_{\rm gal, corr}$ are slightly shifted towards the right relative to $\Omega^{\prime}(A_r^{\prime})$, because pixels with more galaxies suffer from larger contamination and thus have larger values of $A_r^{\prime}$. Although the amount of this shift is quite small on average, the differences between $\Omega^{\prime}$ and the differential number counts at the same $A_r^{\prime}$ become larger in low-extinction regions, because $\Omega^{\prime}$ is a rapidly increasing function of $A_r^{\prime}$ there. Therefore the surface number densities, $N^{\prime}_{\rm gal,uncorr}$ or $N^{\prime}_{\rm gal,corr}$ divided by $\Omega^{\prime}$, change drastically especially in low-extinction regions. In other words, the correlation between the surface number densities and $A_r^{\prime}$ is significantly enhanced due to the nature of the SDSS sky area and the SFD map. This also implies that the shape of the anomaly in $S_{\rm gal}$ is basically determined by the functional form of $\Omega(<A)$. \begin{figure*} \begin{center} \includegraphics[height=0.4\textheight]{fig8.eps} \end{center} \figcaption{The distribution of sky area and mock galaxies. The dashed line is the distribution of sky area as a function of the {\em true} extinction, $A$, and the solid black line is calculated as a function of the contaminated extinction, $A+\Delta A$. 
The red (blue) line indicates the distribution of the number of galaxies as a function of the contaminated extinction, $A+\Delta A$, computed with magnitudes uncorrected (corrected) for extinction using the contaminated map. The distributions of the number of galaxies are divided by the average surface number density; therefore, the surface number density equals the average at the points where the distributions of sky area and galaxy number cross. We have adopted $y_{\rm avg}=10$ and $y_{\rm rms}=5$ for clear visualization of the differences between the lines. \label{fig:distribution}} \end{figure*} We also investigate how this result is affected by the $100\ensuremath{\mu \mathrm{m}}$ emission of galaxies outside the magnitude range. We incorporate the $100{\ensuremath{\mu \mathrm{m}}}$ flux of mock galaxies within a wider magnitude range ($15.0 < m_r < 21.0$), but the result is almost indistinguishable. This is mainly because the additional contamination is not directly correlated with the surface number densities that we measure, partly because we neglect the spatial clustering of galaxies. Therefore it acts only as statistical noise in the extinction map, and does not contribute to the systematic correlation. Finally we examine the dependence of the surface number densities on the parameters $y_{\rm avg}$ and $y_{\mathrm{rms}}$ of the log-normal PDF of $y$ (Fig. \ref{fig:various-parameters}). The results indicate stronger correlations for larger $y_{\rm avg}$, but turn out to be relatively insensitive to $y_{\rm{rms}}$. This is why we choose $y_{\rm avg}$ and $y_{\rm rms}$, instead of $\mu$ and $\sigma$, to parametrize the log-normal PDF. A closer look reveals that larger $y_{\rm rms}$ yields a slightly weaker anomaly, since a larger fraction of the mock galaxies are brighter than the IRAS/PSCz flux limit and do not contribute to the FIR contamination. This effect of the flux limit becomes critical for very large $y_{\rm avg}$ and $y_{\rm rms}$, as we will see in \S \ref{subsec:fitting}. 
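A heavily simplified, pixelized version of the Poisson mock (steps 1--6 above, without the survey geometry and PSF convolution; all numerical values, including the flux-to-extinction conversion and the exponential shape of the ``true'' map, are purely illustrative) can be sketched as:

```python
import numpy as np

def poisson_mock(n_pix=100_000, n_gal=50_000, y_avg=10.0, y_rms=5.0,
                 flux_to_ext=1.0e-4, seed=0):
    """Toy Poisson mock: randomly placed galaxies add log-normal
    FIR emission to a 'true' extinction map, and the contaminated
    extinction A' = A + dA is returned after subtracting the mean
    (background) level of the mock-galaxy emission."""
    rng = np.random.default_rng(seed)
    a_true = rng.exponential(0.05, n_pix)        # toy 'true' extinction map
    pix = rng.integers(0, n_pix, n_gal)          # random galaxy positions
    # log-normal flux with the requested linear mean and rms
    sigma = np.sqrt(np.log(1.0 + (y_rms / y_avg) ** 2))
    mu = np.log(y_avg) - 0.5 * sigma ** 2
    flux = rng.lognormal(mu, sigma, n_gal)
    d_a = np.bincount(pix, weights=flux, minlength=n_pix) * flux_to_ext
    a_cont = a_true + d_a - d_a.mean()           # background-subtracted
    return a_true, a_cont, pix

a_true, a_cont, pix = poisson_mock()
```

Even in this toy setup, pixels hosting mock galaxies acquire systematically larger $A^{\prime}$ than their true extinction, which is the origin of the positive correlation discussed above.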
As seen above, the mock result adopting equation (\ref{eq:y-value-entire}) estimated for the {\it entire} SDSS galaxies (Fig. \ref{fig:fiducial-parameters}) disagrees with the observed anomaly (Fig. \ref{fig:photometric-galaxy}). This result may appear to imply that the hypothesis of the galaxy FIR contamination fails to explain the observed anomaly. This is, however, not the case, because we have neglected the spatial clustering of galaxies. The previous parameters for the entire SDSS sample are estimated from the contribution of each single galaxy itself, but in the presence of galaxy clustering, the FIR emission associated with a given galaxy can be significantly enhanced by its neighboring galaxies. In fact, the stacking analysis of the SFD map revealed that the FIR emission of neighboring galaxies dominates over that of the central galaxy even by an order of magnitude \citep{Kashiwagi;2013}. Therefore, we should adopt values of $y_{\rm avg}$ and $y_{\rm rms}$ that represent the total contribution of each single galaxy and its clustered neighbors, in order to reproduce the observed anomaly with our Poisson mock simulation. In principle, we could probe such FIR fluxes from the comparison between mock simulations and observations, but the simulations are very time-consuming. Thus we develop an analytic model that reproduces the mock results in the next section. \begin{figure*} \begin{center} \includegraphics[width=0.6\textwidth]{fig9.eps} \end{center} \figcaption{The results of the mock simulations with the Poisson distributed sample for various parameters of the log-normal PDF of $y$. The symbols indicate the results of the simulation for the mock Poisson sample, the same as in Figure \ref{fig:fiducial-parameters}. The error bars reflect the Poisson noise alone. The cyan and pink lines indicate the analytic model predictions from equations (\ref{eq:surface-number-density-after}) and (\ref{eq:surface-number-density-before}) in \S \ref{sec:analytic}. 
The lines and symbols are the same as in Figure \ref{fig:fiducial-parameters}. The goodness of agreement between the Poisson mock simulation and the analytic model is evaluated by the reduced $\chi^2$ for the extinction-uncorrected/corrected cases, where only the Poisson noise is considered. For all panels, the same average surface number density, $\bar{S}=480~\mathrm{deg}^{-2}$, is assumed and shown as gray dashed lines. \label{fig:various-parameters}} \end{figure*} \section{Analytic model of the FIR contamination} \label{sec:analytic} In this section, we develop an analytic model that describes the anomaly in the surface number densities of galaxies due to their FIR emission. The reliability of the analytic model is checked against the results of the numerical simulations presented in the previous section. We present a brief outline in the next subsection, and the details are described in Appendix \ref{app:analytic-detail}. \subsection{Outline} \label{subsec:outline} Let $A$ denote the {\em true} Galactic extinction, not contaminated by the galaxy emission. We denote the sky area whose value of the {\em true} extinction is between $A$ and $A+dA$ by $\Omega (A)dA$, and the number of galaxies that are located in the area $\Omega (A)dA$ by $N_{\mathrm{gal}}(A)dA$. Since there is no spatial correlation between galaxies and the Galactic dust, the corresponding surface number density of the galaxies as a function of $A$, \begin{equation} S(A) \equiv \frac{N_{\mathrm{gal}}(A)}{\Omega(A)}, \label{eq:intrinsic-distribution-function} \end{equation} should be independent of $A$ and constant within the statistical error. If the FIR emission from galaxies contaminates the {\em true} extinction, however, the above quantities should depend on the contaminated extinction, $A^{\prime}$; we denote them by $\Omega^{\prime}(A^{\prime})$ and $N^{\prime}_{\mathrm{gal}}(A^{\prime})$, respectively. 
Thus the {\it observed} surface number densities, $S^{\prime}(A^{\prime})$, should be \begin{equation} S^{\prime}(A^{\prime}) = \frac{N_{\mathrm{gal}}^{\prime}(A^{\prime})} {\Omega^{\prime}(A^{\prime})}. \label{eq:contaminated-surface-number-density} \end{equation} The essence of our analytic model is how to compute the expected $\Omega^{\prime}(A^{\prime})$ and $N_{\mathrm{gal}}^{\prime}(A^{\prime})$ in the presence of the FIR contamination of galaxies, which are distorted from the given {\em true} $\Omega(A)$ and $N_{\rm{gal}}(A)$. Due to the finite angular resolution of the SFD map, the FIR emission of multiple galaxies contaminates the extinction at a given position. Thus we need to sum up the FIR emission contributions of those galaxies located within the angular resolution scale: \begin{equation} A^{\prime} = A + \Delta A, \label{eq:A-prime} \end{equation} where the additional extinction, $\Delta A$, is computed by summing up the contributions of the $N$ galaxies ($i=1\sim N$) located in the pixel: \begin{equation} \Delta A = \sum_{i=1}^N \Delta A_i . \label{eq:DeltaA} \end{equation} In order to perform the summation analytically, we need the joint probability distribution function, $P_{\mathrm{joint}}(\Delta A,N)$, corresponding to the situation where there are $N$ galaxies in a pixel of the dust map and the total contribution of those galaxies is $\Delta A$. In Appendix \ref{app:analytic-detail}, we present a prescription to compute $P_{\mathrm{joint}}(\Delta A,N)$, and provide the integral expressions for $\Omega^{\prime}(A^{\prime})$ and $N_{\mathrm{gal}}^{\prime}(A^{\prime})$. \subsection{Application of the analytic model} \label{subsec:apply} The analytic expressions for $\Omega^{\prime}(A^{\prime})$, $N^{\prime}_{\rm{gal,corr}}(A^{\prime})$ and $N^{\prime}_{\rm{gal,uncorr}}(A^{\prime})$ are given in equations (\ref{eq:Omega^prime}), (\ref{eq:number-after}) and (\ref{eq:number-before}) in Appendix \ref{app:analytic-detail}. 
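Before evaluating those expressions, the per-pixel contamination picture of equations (\ref{eq:A-prime}) and (\ref{eq:DeltaA}) — a Poisson number of galaxies per pixel, each adding a log-normally distributed $\Delta A_i$ — can be illustrated with a short sketch. This is toy Python code, not the code used in this work, and all numerical values are assumed:

```python
import numpy as np

# Toy sketch of A' = A + sum_i dA_i: each pixel hosts a Poisson number
# of galaxies, and each galaxy adds a log-normally distributed dA_i.
# All parameter values below are assumed for illustration only.
rng = np.random.default_rng(0)

def contaminate(A_true, n_bar, mu, sigma):
    """Return the contaminated extinction A' for each pixel."""
    A_prime = np.empty_like(A_true)
    for k, A in enumerate(A_true):
        n = rng.poisson(n_bar)                  # galaxies in this pixel
        dA = rng.lognormal(mu, sigma, size=n)   # per-galaxy contributions
        A_prime[k] = A + dA.sum()
    return A_prime

A_true = rng.uniform(0.02, 0.10, size=10000)    # toy "true" extinction [mag]
A_prime = contaminate(A_true, n_bar=3.0, mu=np.log(1e-4), sigma=1.0)
print(bool((A_prime >= A_true).all()))          # contamination only adds
```

The same elementary picture underlies the mock simulations of the previous section; the analytic model simply performs this summation via $P_{\mathrm{joint}}(\Delta A,N)$ instead of by Monte Carlo.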
Thus one can compute the surface number densities for the $i$-th subregion of the extinction between $A_i^{\prime}$ and $A_{i+1}^{\prime}$ as \begin{eqnarray} S^{\prime}_{\rm{corr},i} &=& \frac{\int_{A_i^{\prime}}^{A_{i+1}^{\prime}} N^{\prime}_{\rm{gal,corr}}(A^{\prime})dA^{\prime}} {\int^{A_{i+1}^{\prime}}_{A_i^{\prime}} \Omega^{\prime}(A^{\prime})dA^{\prime}}, \label{eq:surface-number-density-after}\\ S^{\prime}_{\rm{uncorr},i} &=& \frac{\int_{A_i^{\prime}}^{A_{i+1}^{\prime}} N^{\prime}_{\rm{gal, uncorr}}(A^{\prime})dA^{\prime}} {\int^{A_{i+1}^{\prime}}_{A_i^{\prime}} \Omega^{\prime}(A^{\prime})dA^{\prime}}, \label{eq:surface-number-density-before} \end{eqnarray} where $S^{\prime}_{\rm{corr}}$ and $S^{\prime}_{\rm{uncorr}}$ are the extinction-corrected and uncorrected surface number densities, respectively. The solid lines in Figure \ref{fig:various-parameters} show the surface number densities calculated from equations (\ref{eq:surface-number-density-after}) and (\ref{eq:surface-number-density-before}) adopting the nine parameter sets of $y_{\rm avg}$ and $y_{\rm rms}$. The horizontal axis, the average extinction in each subregion, is calculated as \begin{eqnarray} A^{\prime}_{\rm{corr},i} &=& \frac{\int_{A_i^{\prime}}^{A_{i+1}^{\prime}} A^{\prime}N^{\prime}_{\rm{gal,corr}}(A^{\prime})dA^{\prime}} {\int^{A_{i+1}^{\prime}}_{A_i^{\prime}} N^{\prime}_{\rm{gal,corr}}(A^{\prime})dA^{\prime}}, \\ A^{\prime}_{\rm{uncorr},i} &=& \frac{\int_{A_i^{\prime}}^{A_{i+1}^{\prime}} A^{\prime}N^{\prime}_{\rm{gal,uncorr}}(A^{\prime})dA^{\prime}} {\int^{A_{i+1}^{\prime}}_{A_i^{\prime}} N^{\prime}_{\rm{gal,uncorr}}(A^{\prime})dA^{\prime}}. \label{eq:average-extinction} \end{eqnarray} Figure \ref{fig:various-parameters} clearly indicates that the analytic predictions and the simulation results are in good agreement. Strictly speaking, the agreement is not perfect in the sense that the reduced $\chi^2$ is as large as $\sim$ 3.5 for the worst cases, when only the Poisson noise is considered. 
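Numerically, equations (\ref{eq:surface-number-density-after}) and (\ref{eq:surface-number-density-before}) are simple ratios of binned integrals. The following toy sketch (placeholder distributions, not the model of this paper) verifies that galaxy counts proportional to the sky-area distribution give back a flat $S$ in every bin, as expected in the absence of contamination:

```python
import numpy as np

# Binned estimator S'_i = int N'(A') dA' / int Omega'(A') dA'.
# Omega(A) below is an arbitrary toy sky-area distribution; N_gal is
# chosen proportional to it, so every bin must return S = 480 deg^-2.
A = np.linspace(0.01, 0.2, 2001)
Omega = np.exp(-A / 0.05)            # toy area distribution [deg^2 per mag]
N_gal = 480.0 * Omega                # constant surface density, no anomaly

edges = np.linspace(0.01, 0.2, 11)
S_bin = []
for lo, hi in zip(edges[:-1], edges[1:]):
    m = (A >= lo) & (A <= hi)
    S_bin.append(np.trapz(N_gal[m], A[m]) / np.trapz(Omega[m], A[m]))

print(bool(np.allclose(S_bin, 480.0)))  # flat by construction
```

Any $A$-dependence of the binned ratio therefore traces the distortion of $N^{\prime}_{\rm gal}$ relative to $\Omega^{\prime}$ induced by the contamination.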
The statistical errors for the observed SDSS surface number densities (Figure \ref{fig:photometric-galaxy}), however, include the variance due to spatial clustering and are larger than the Poisson noise by an order of magnitude. Thus the discrepancy between the mock simulation and the analytic model is negligible for the parameter-fit analysis of the observational result in the following section. \section{Comparison of FIR contamination with the observed anomaly}\label{sec:comparison} Given the success of the analytic model described above, we compare the model prediction with the observed SFD anomaly. Our discussion in this section is organized as follows. (1) We attempt to find the optimal values of $y_{\rm avg}$ and $y_{\rm rms}$ by fitting the analytic model prediction to the observed anomaly. It turns out that the observed anomaly is reproduced fairly well for a relatively wide range of $y_{\rm avg}$ and $y_{\rm rms}$ as long as $y_{\rm avg}$ is larger than $\sim 4$. (2) This value of $y_{\rm avg}$ should be compared with the empirical, and thus model-independent, result $y_{\rm avg} \approx 3.8$ obtained from the stacking analysis \citep{Kashiwagi;2013}. The rough agreement of the two independent estimates of the average FIR to $r$-band flux ratio is interpreted as supporting evidence for our FIR explanation of the observed SFD anomaly. (3) Finally, we attempt to reproduce the FIR flux of SDSS galaxies required above within the framework of our simplified modeling of the FIR-to-optical relation. The estimated FIR flux explains the result (2) qualitatively, but not quantitatively. We suspect that this is due to the limitation of our FIR assignment model for galaxies, and not a basic flaw of the FIR explanation for the SFD anomaly. 
Namely, given that the stacking analysis already indicates roughly the required value of $y_{\rm avg}$, we should refine the FIR assignment model for SDSS galaxies, rather than rule out the FIR explanation itself. \subsection{Estimating the FIR emission of galaxies from the observed anomaly \label{subsec:fitting}} Given the success of the analytic model described above, we attempt to find the best-fit parameters, $y_{\rm avg}$ and $y_{\mathrm{rms}}$, to the observed anomaly by minimizing \begin{equation} \label{eq:chi-square} \chi^2(y_{\rm avg},y_{\mathrm{rms}},\bar{N}) = \sum_i \frac{(S^{\rm{obs}}_{\rm{uncorr},i} -S^{\prime}_{\rm{uncorr},i})^2}{\sigma_{\rm{obs},i}^2}, \end{equation} where $S^{\rm{obs}}_{\rm{uncorr},i}$ is the extinction-{\it uncorrected} surface number density in the $i$-th subregion of extinction, $\sigma_{\mathrm{obs},i}$ is its statistical error, and $S^{\prime}_{\rm{uncorr},i}=S^{\prime}_{\rm{uncorr},i} (y_{\rm avg},y_{\mathrm{rms}},\bar{N})$ is the analytic model prediction given by equation (\ref{eq:surface-number-density-before}). In the present fit, we use the extinction-uncorrected surface number densities, but the result is almost the same even if we use $S_{\rm{corr}}$ instead. In addition to $y_{\rm avg}$ and $y_{\rm rms}$, we include another free parameter, the intrinsic average number of galaxies in a pixel, $\bar{N}$, which is also unknown since the extinction correction is not necessarily reliable. It turns out that $\bar{N}$ is in the range of $480$ to $500 \rm{[deg^{-2}]}$, and the results below are not sensitive to this value. In reality, however, the resulting constraints are not so strong, as shown in the top-left panel of Figure \ref{fig:figure10}. This is partly due to the fact that we simply compute $\sigma_{\mathrm{obs},i}$ from the variance of each extinction bin, which does not represent the proper error. 
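In practice, the minimization of equation (\ref{eq:chi-square}) is a scan over a parameter grid. A minimal one-parameter sketch — with a deliberately simplified linear toy model standing in for equation (\ref{eq:surface-number-density-before}) — looks as follows:

```python
import numpy as np

# Grid-scan chi^2 fit: generate toy "observed" densities from a known
# parameter, then recover it by minimizing the weighted squared
# residuals.  The linear model below is a stand-in, not the paper's.
rng = np.random.default_rng(1)

def model(A, y_avg):
    return 480.0 - 50.0 * y_avg * A          # assumed toy dependence

A_bins = np.linspace(0.02, 0.12, 8)          # bin-averaged extinctions
truth = 4.0
S_obs = model(A_bins, truth) + rng.normal(0.0, 1.0, A_bins.size)
sigma = np.ones_like(A_bins)                 # toy error bars

grid = np.linspace(0.0, 10.0, 201)
chi2 = np.array([np.sum((S_obs - model(A_bins, y)) ** 2 / sigma ** 2)
                 for y in grid])
best = grid[np.argmin(chi2)]
print(abs(best - truth) < 1.0)               # recovered near the truth
```

The actual fit scans the three-dimensional grid $(y_{\rm avg}, y_{\rm rms}, \bar{N})$ in the same fashion.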
Thus our analysis here should be interpreted as a qualitative attempt to find a possible parameter space that explains the anomaly in terms of the FIR contamination; it would be quite difficult to make a more quantitative analysis, given several crude approximations in our theoretical modeling and the poor angular resolution and uncertain dust temperature correction in the SFD map. Bearing this remark in mind, let us consider the constraints on the $y_{\rm avg}$--$y_{\rm{rms}}$ plane from the observed anomaly shown in the top-left panel of Figure \ref{fig:figure10}. Fairly acceptable fits are obtained over the bluish region. Just for illustration, we select two widely separated points A and B with $(y_{\rm avg}, y_{\rm rms})=(30, 8000)$ and $(3.8, 4.0)$, respectively, and plot the corresponding analytical predictions in the other three panels. Even though their $y_{\rm avg}$ differs by an order of magnitude, the two sets of parameters account for the observed anomaly reasonably and equally well. \begin{figure*} \begin{center} \includegraphics[width=0.348\textwidth]{fig10a.eps} \hspace{5mm} \includegraphics[width=0.355\textwidth]{fig10d.eps} \bigskip \includegraphics[width=0.36\textwidth]{fig10b.eps} \hspace{2.5mm} \includegraphics[width=0.36\textwidth]{fig10c.eps} \end{center} \figcaption{Fit to the observed anomaly using the analytical model. {\em top left panel}; constraints on $y_{\rm avg}$ and $y_{\mathrm{rms}}$ through the chi-squared analysis with equation (\ref{eq:chi-square}). The black dashed curves correspond to the $\chi^2/{\rm d.o.f}=1$ and $\chi^2/{\rm d.o.f}=0.5$ constraints. The orange (A) and magenta (B) crosses are representative values that best explain the observed anomaly. The black dotted line and cross (C) indicate the value of $y_{\rm avg}$ estimated by the stacking analysis \citep{Kashiwagi;2013}. The blue cross shows the best-fit parameters for a single galaxy of the entire SDSS sample estimated in \S\ref{subsec:IRAS_SDSS}. 
The cyan dot-dashed line and cross (D) also indicate the value of $y_{\rm avg}$ estimated for the entire SDSS sample, but including the contribution of neighbor galaxies (\S\ref{subsec:clustering}). {\em top right panel}; the analytic model predictions plotted over the observational data. The solid lines indicate the analytic predictions from equations (\ref{eq:surface-number-density-after}) and (\ref{eq:surface-number-density-before}), adopting the values of $(y_{\rm avg}, y_{\rm rms})$ shown as the crosses in {\em top left}. The symbols are the observational results for the SDSS galaxies in $r$-band, the same as Figure \ref{fig:photometric-galaxy}. The plots for $S_{\rm gal}$ corrected with $A_{r,{\rm SFD}}$ are shifted by $+20{\rm deg}^{-2}$ just for clarity. {\em bottom left}; the same as {\em top right}, but showing $S_{\rm gal}$ uncorrected for extinction, with the horizontal axis log-scaled. {\em bottom right}; the same as {\em bottom left}, but for $S_{\rm gal}$ corrected with $A_{r,{\rm SFD}}$. \label{fig:figure10}} \end{figure*} \subsection{Comparison with the stacking image analysis} \label{subsec:comparison-stacking} We have shown that the anomaly in the surface number densities of SDSS galaxies on the SFD map is well reproduced by assuming that their 100$\ensuremath{\mu \mathrm{m}}$ to $r$-band flux ratio is $\sim 3.8$ on average, where the 100$\ensuremath{\mu \mathrm{m}}$ flux includes the contribution of neighbor galaxies. On the other hand, the flux ratio of a single galaxy is estimated as $\sim 0.5$ (see \S \ref{subsec:IRAS_SDSS}). Indeed, these values should be compared with the result of the stacking image analysis by \citet{Kashiwagi;2013}. 
They stacked the SDSS galaxies on the SFD map and found that a galaxy of $r$-band magnitude $m_r$ contributes to the extinction on average by \begin{equation} \label{eq:dA-mr-gal-single} \Delta A_r^{\rm s}(m_r) = 0.087 \times 10^{0.41(18-m_r)}~{\rm [mmag]}, \end{equation} by itself (single term), and \begin{equation} \label{eq:dA-mr-gal-total} \Delta A_r^{\rm tot}(m_r) = 0.64 \times 10^{0.17(18-m_r)}~{\rm [mmag]}, \end{equation} including the contribution from neighbor galaxies, corresponding to the clustering term in \citet{Kashiwagi;2013}. The above extinction due to the $100\ensuremath{\mu \mathrm{m}}$ emission from galaxies is translated into the $100\ensuremath{\mu \mathrm{m}}$ to $r$-band flux ratio as \begin{equation} \label{eq:dA-y-avg} y= \frac{2\pi \sigma^2}{f_r \nu_r / \nu_{\rm 100\ensuremath{\mu \mathrm{m}}}} \frac{\Delta A_r}{k_r p}, \end{equation} where $\sigma$ is the Gaussian PSF width and $f_r$ is the $r$-band flux. Thus, integrated over the differential number counts, equations (\ref{eq:dA-mr-gal-single}) and (\ref{eq:dA-mr-gal-total}) suggest that \begin{eqnarray} \label{eq:y-avg-single} \bar{y}_{\rm avg}^{\rm s} = \frac{\int dm_r\frac{dN}{dm_r}y_{\rm avg}^{\rm s}(m_r)} {\int dm_r\frac{dN}{dm_r}} = 0.239, \end{eqnarray} and \begin{eqnarray} \label{eq:y-avg-clustering} \bar{y}_{\rm avg}^{\rm tot} = \frac{\int dm_r ({dN}/{dm_r}) y_{\rm avg}^{\rm c}(m_r)} {\int dm_r ({dN}/{dm_r})} = 2.77, \end{eqnarray} respectively. These values are based on the direct measurement of the FIR contamination, and are thus independent of the modeling of the 100$\ensuremath{\mu \mathrm{m}}$ to optical relation. We also emphasize that they should automatically include possible contributions from those galaxies not identified by SDSS. Therefore the sum of the two terms can be reliably interpreted as the expected contribution of the SDSS galaxies to $y_{\rm avg}$ including neighbor galaxies, which is plotted in Figure \ref{fig:figure10}. 
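The magnitude averages in equations (\ref{eq:y-avg-single}) and (\ref{eq:y-avg-clustering}) are count-weighted integrals over $dN/dm_r$. The sketch below illustrates the weighting with an assumed Euclidean count slope — not the measured SDSS counts, so the resulting number is purely illustrative:

```python
import numpy as np

# Count-weighted average y_bar = int (dN/dm) y(m) dm / int (dN/dm) dm.
# Both the count slope and the amplitude of y(m) are assumptions made
# only to show the mechanics of the weighting.
m = np.linspace(17.5, 19.4, 400)
dN_dm = 10.0 ** (0.6 * (m - 18.0))            # assumed Euclidean slope
y_m = 0.087 * 10.0 ** (0.41 * (18.0 - m))     # assumed magnitude scaling

y_bar = np.trapz(dN_dm * y_m, m) / np.trapz(dN_dm, m)
print(bool(y_m.min() < y_bar < y_m.max()))    # weighted mean is bracketed
```

Because the counts rise steeply toward faint magnitudes, the average is dominated by galaxies near the faint limit of the sample.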
While we do not know the corresponding $y_{\rm rms}$, we have already found that the dependence of the anomaly on $y_{\rm rms}$ is rather weak, at least in our analytic model. Thus the empirical value of $y_{\rm avg}$ from the stacking analysis roughly explains the observed anomaly, as plotted in the three panels of Figure \ref{fig:figure10}. We interpret this agreement as supporting evidence for the FIR model of the SFD anomaly, given the fact that we assume a very simple relation between 100$\ensuremath{\mu \mathrm{m}}$ and optical luminosities, neglecting the galaxy morphology dependence that certainly leads to differences in FIR flux. \subsection{Estimates of clustering contribution of SDSS galaxies} \label{subsec:clustering} We attempted to independently estimate $y_{\rm avg}$, including the additional contribution of neighbor galaxies, using the SDSS galaxy distribution over the SFD map, instead of the stacking result by \citet{Kashiwagi;2013} discussed in \S \ref{subsec:comparison-stacking}. We first randomly assign the FIR flux of SDSS galaxies assuming $(y_{\rm avg}, y_{\rm rms})=(0.5, 1.0)$ {\it for each SDSS galaxy itself, neglecting the clustering term}. Second, we sum up the FIR fluxes of galaxies convolved with the PSF of the SFD map (a Gaussian width of $3'.1$) centered at each galaxy. Finally, we compute $y_{\rm avg}$ and $y_{\rm rms}$ using the summed FIR fluxes after subtracting the average background flux. Note that the resulting values of $y_{\rm avg}$ and $y_{\rm rms}$ should be different from the above input values because of the contribution of the clustering term. We find $y_{\rm avg} \approx 2$, but $y_{\rm rms}$ is not well determined because it turned out to be very sensitive to the choice of the background flux. This result indicates that the FIR flux of the SDSS galaxies explains only about half of the value required to reproduce the observed anomaly well, $y_{\rm avg}=3.8$. 
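The summation step of this estimate — adding the PSF-weighted FIR fluxes of neighbors to each galaxy's own flux — can be sketched as follows. The positions and fluxes here are random toys; the only number taken from the text is the Gaussian PSF width of $3'.1$:

```python
import numpy as np

# Sum each galaxy's FIR flux with that of its neighbors, weighted by a
# Gaussian PSF of width sigma = 3'.1.  Positions and fluxes are toys.
rng = np.random.default_rng(2)
sigma = 3.1 / 60.0                          # PSF width [deg]

pos = rng.uniform(0.0, 1.0, size=(200, 2))  # galaxy positions [deg]
flux = rng.lognormal(0.0, 1.0, size=200)    # per-galaxy FIR flux (arb.)

def summed_flux(i):
    d2 = np.sum((pos - pos[i]) ** 2, axis=1)
    w = np.exp(-d2 / (2.0 * sigma ** 2))    # PSF weight (w = 1 for self)
    return np.sum(w * flux)

tot = np.array([summed_flux(i) for i in range(len(flux))])
print(bool((tot >= flux).all()))            # neighbors can only add flux
```

With clustered (rather than uniform) positions, close neighbors receive weights near unity and the summed flux is enhanced accordingly — the clustering term.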
Indeed, employing $y_{\rm avg} \approx 2$, our model still reproduces the anomaly qualitatively, but the predicted feature is substantially weaker than the observed one. The assigned FIR flux in this model, however, is based on the single-galaxy contribution estimated in \S \ref{subsec:IRAS_SDSS} ($y_{\rm avg}=0.5$), and thus would be sensitive to the FIR assignment model. Given the fact that the empirical value from the stacking analysis, which is independent of such models, is fairly successful in reproducing the anomaly, we suspect that the factor-of-two difference originates from the limitation of our crude modeling of the FIR flux, rather than from a basic flaw of the FIR explanation of the anomaly. \section{Discussion}\label{sec:discussion} \subsection{Effects of spatial clustering of galaxies} \label{subsec:N-body} Both the mock simulations and the analytic model discussed in the previous section completely ignore the spatial clustering of galaxies. We, therefore, examine the clustering effect on the anomaly in this subsection. The most straightforward method is to replace the Poisson-distributed mock galaxies by dark matter particles from a cosmological N-body simulation. For that purpose, we use a realization in the standard $\Lambda\rm{CDM}$ cosmology with $\sigma_8=0.76$ performed by \citet{Nishimichi;2009}. We repeat similar mock observations as discussed in \S \ref{subsec:poisson}, except that we assign an $r$-band luminosity to each mock galaxy instead of its apparent magnitude. To be specific, (i) we randomly assign $r$-band luminosities to all N-body dark matter particles according to the luminosity function of equation (\ref{eq:LFopt}), (ii) convert their luminosities to apparent $r$-band magnitudes observed from a fixed observer position, and (iii) randomly select a fraction of the mock galaxies to match the observed SDSS $dN/dm_r$ (Figure \ref{fig:magnitude-distribution}). 
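Steps (i)–(iii) can be sketched as follows; the Gaussian absolute-magnitude distribution and the uniform distance draw are crude stand-ins for the luminosity function of equation (\ref{eq:LFopt}) and the actual simulation geometry:

```python
import numpy as np

# (i) assign absolute magnitudes, (ii) convert to apparent magnitudes
# via the distance modulus, (iii) cut to the survey magnitude range.
# The distributions used here are assumed placeholders.
rng = np.random.default_rng(3)

M = rng.normal(-20.4, 1.0, size=100_000)      # toy absolute magnitudes
d = rng.uniform(50.0, 500.0, size=100_000)    # toy distances [Mpc]
m_app = M + 5.0 * np.log10(d * 1e6 / 10.0)    # distance modulus

sel = (m_app > 17.5) & (m_app < 19.4)         # magnitude range of the sample
print(bool(0 < sel.sum() < m_app.size))       # a nontrivial subsample remains
```

The final random thinning of the selected subsample then matches the mock $dN/dm_r$ to the observed counts.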
We repeat the same fitting analysis as in Figure \ref{fig:figure10}, except that the data are now replaced by the mock result on the basis of the cosmological N-body simulation with $y_{\rm avg}=3.8$ and $y_{\rm rms}=4.75$. The mock observation including the galaxy clustering effect shows a stronger anomaly than the Poisson mock simulation with the identical $y_{\rm avg}$ and $y_{\rm rms}$. The analytic model that neglects the spatial clustering still reproduces the simulated anomaly very well, but the best-fit $y_{\rm avg}$ overestimates the real value employed in the simulation by a factor of $\sim$2. Thus the clustering effect can be absorbed effectively by re-interpreting the best-fit values of $y_{\rm avg}$ appropriately. The clustering effect estimated here is largely consistent with the clustering-term contribution estimated directly from the SDSS galaxies (\S \ref{subsec:clustering}). In order to quantitatively understand the relation between this bias and the strength of the galaxy spatial clustering, we have to incorporate the effect of spatial clustering in our analytic model. For that purpose, we measure the PDF of the number of N-body mock particles in a pixel and replace the Poisson distribution in equation (\ref{eq:P_joint}) with the measured one. The analytic model prediction, however, hardly changes under such a modification. Thus more sophisticated improvements seem to be needed to account for the spatial clustering effect, which is beyond the scope of this paper. \subsection{Limitation of the correction for the FIR emission of galaxies} We attempt to correct the SFD map by subtracting the average FIR contamination of SDSS galaxies. 
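Such a subtraction amounts to removing a PSF-shaped template at every galaxy position. A toy sketch on a small pixel grid — with hypothetical positions and magnitudes, and the amplitude scaling borrowed from the stacking fit of equation (\ref{eq:dA-mr-gal-total}) — is:

```python
import numpy as np

# Subtract a Gaussian (PSF-shaped) contamination template at each
# galaxy position from a toy extinction map.  Grid, positions and
# magnitudes are hypothetical; amplitudes are in magnitudes.
sigma_pix = 2.0                                  # PSF width [pixels]
npix = 64
yy, xx = np.mgrid[0:npix, 0:npix]
A_map = np.full((npix, npix), 0.05)              # toy SFD extinction [mag]

galaxies = [(20.0, 30.0, 18.0), (40.5, 12.3, 17.6)]   # (x, y, m_r)
A_corr = A_map.copy()
for gx, gy, m_r in galaxies:
    amp = 0.64e-3 * 10.0 ** (0.17 * (18.0 - m_r))     # [mag]
    r2 = (xx - gx) ** 2 + (yy - gy) ** 2
    A_corr -= amp * np.exp(-r2 / (2.0 * sigma_pix ** 2))

print(bool(A_corr.min() < A_map.min() and A_corr.max() <= 0.05))
```

Because this removes only the *average* contamination per magnitude, scatter in the FIR-to-optical ratio of individual galaxies survives the correction.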
The corrected extinction at an angular position $\mathbf\theta$ in the Galactic map is computed as \begin{equation} \label{eq:dA-mr-pix} A_{r,{\rm corrected}}({\mathbf\theta}) = A_{r,\rm SFD}({\mathbf\theta}) - \sum_{j} \Delta A({\mathbf\theta_j - \mathbf\theta};m_r^j), \end{equation} where ${\mathbf\theta_j}$ is the position of the $j$-th galaxy with its $r$-band magnitude of $m_r^j$. We employ four different values for $\Delta A$, given the uncertainty in the interpretation of the best-fit value of $y_{\rm avg}$ discussed before. As shown in Figure \ref{fig:sgal-after-contamination-correction}, however, the above correction does not remove the anomaly very well. This result may imply that the dependence of FIR properties on galaxy population, which is neglected in our modeling, is essential for an accurate correction of the FIR contamination. As future work, such a morphology dependence of the FIR luminosities of SDSS galaxies will be investigated by stacking analysis, especially using recent high-resolution diffuse FIR measurements by AKARI \citep{Murakami;2007}, WISE \citep{Wright;2010}, etc. \begin{figure*} \begin{center} \includegraphics[width=0.35\textwidth]{fig11a.eps} \hspace{20pt} \includegraphics[width=0.35\textwidth]{fig11b.eps} \bigskip \includegraphics[width=0.35\textwidth]{fig11c.eps} \hspace{20pt} \includegraphics[width=0.35\textwidth]{fig11d.eps} \end{center} \figcaption{Surface number densities of the SDSS galaxies with $17.5<m_{\rm r}<19.4$ after subtracting their average FIR emission contamination, where $y_{\rm avg} = 0.3, 1.0, 2.0, 3.8$ are adopted for the estimation of the FIR emission of the SDSS galaxies. \label{fig:sgal-after-contamination-correction}} \end{figure*} \subsection{Testing the Peek and Graves correction map} \label{subsec:PG} In \S \ref{subsec:comparison-stacking}, we found that the observed anomaly of the SDSS galaxies is roughly explained by the contamination of galaxy FIR emission. 
Nevertheless, the observed and predicted surface number densities (Fig \ref{fig:figure10}) do not match perfectly, which might be attributed to other possible systematics in the SFD map. In order to check for such possible systematic effects, we use the improved extinction map by \citet[hereafter PG]{Peek;2010}. They found that the SFD map {\it under-predicts} the extinction by up to $\sim 0.1$ mag in $r$-band, using passively evolving galaxies as standard color indicators. Their method is complementary to our galaxy number count analysis in the sense that they directly measure the reddening by the Galactic dust. Since the resolution of the PG correction map to SFD is $4^{\circ}.5$, the FIR fluctuations due to the emission of galaxies are not expected to be removed. The PG correction map, however, may have removed systematics other than the FIR contamination, which are not considered in our analytic model at all. To see if their correction affects the number count analysis and the anomaly in the original SFD map, we repeat the same analysis described in \S 6 using the PG map. Basically, we find a very similar correlation between $S_{\rm gal}$ and $A_{r,\rm{PG}}$, suggesting that the PG map still suffers from the FIR contamination of galaxies, as expected. We note, however, that our analytic model prediction exhibits slightly better agreement for the PG map than for the SFD map. This may indicate that possible systematic errors in the SFD map other than the FIR contamination are at least partially removed in the PG map. \section{Summary and conclusions} \label{sec:conclusions} In the present paper, we have revisited the origin of the anomaly in the surface number density of SDSS galaxies with respect to the Galactic extinction, originally pointed out by \citet{Yahata;2007}. We first computed the anomaly using the SDSS DR7 photometric catalogs, and then developed both numerical and analytic models to explain the anomaly. 
We take into account the contamination by galaxies of the IRAS $100{\ensuremath{\mu \mathrm{m}}}$ flux, which was assumed to come entirely from the Galactic dust. Our main findings are summarized as follows. \begin{itemize} \item Both the numerical simulations and the analytic model reproduce the observed anomaly quite well. Thus we quantitatively confirmed the validity of the hypothesis that the observed anomaly in the SFD Galactic extinction map is mainly due to the FIR emission from galaxies, originally proposed by \citet{Yahata;2007}. \item The comparison of the analytic model and the observed anomaly mainly constrains the average $100\ensuremath{\mu \mathrm{m}}$ to optical flux ratio of SDSS galaxies. The resulting value is in reasonable agreement with that obtained from the stacking image analysis of the SDSS galaxies by \citet{Kashiwagi;2013}. \item We also independently estimated the FIR contribution of a single SDSS galaxy based on the IRAS/SDSS overlapped catalogue data, assuming a simple relation between FIR and optical luminosities. Summing up such FIR fluxes according to the SDSS galaxy distribution, however, we find that this contribution explains only roughly half of that required to reproduce the observed anomaly. This result may be due to the limitation of our modeling of the FIR-to-optical relation. \end{itemize} While our current analytic model still needs to be improved, the fact that the empirically determined value of $y_{\rm avg}$ nicely reproduces the observed anomaly indicates that the FIR emission of SDSS galaxies is the major origin of the anomaly. In particular, we note that subtracting the average FIR contamination of the SDSS galaxies from the SFD extinction map does not properly remove the observed anomaly. This may imply that it is essential to consider the dependence of FIR emission on galaxy morphology and/or the effect of galaxy clustering, both of which we have neglected in the current analytical model. 
Since morphology and spatial clustering of galaxies are correlated in a complicated fashion, it is not easy to identify a good correction strategy. We are currently working along this direction with the AKARI all-sky map data at $60, 90, 140, 160\ensuremath{\mu \mathrm{m}}$. The stacking image analysis of SDSS galaxies with the higher-angular-resolution maps in multiple frequency bands would enable us to estimate the FIR emission of galaxies as a function of their properties, including their color and morphology (T.Okabe et al. 2015, in preparation). The FIR contamination that explains the anomalous behavior in the surface number density of the SDSS galaxies is purely statistical and tiny, on the order of $(0.1\sim1)$ mmag of extinction in $r$-band, which is much less serious than naively expected from the anomaly. Nevertheless, the galaxy FIR emission is correlated with the large-scale structure of the universe. Thus it may systematically bias cosmological analyses. The present methodology is in principle applicable to check the reliability, and even to improve the accuracy, of future Galactic extinction maps that should play a key role in all astronomical observations, in particular for the purpose of precision cosmology. \acknowledgements We thank Brice M{\'e}nard, Tsunehito Kohyama, Yasunori Hibi, and Hiroshi Shibai for useful discussions. T.K. and Y.S. are grateful for the hospitality of the Department of Astrophysical Sciences, Princeton University, where most of the present work was performed. We also thank an anonymous referee for several constructive comments and in particular for suggesting that we compute the expected FIR fluxes using the SDSS galaxy distribution, as discussed in \S 6.3. T.K. is supported by a Global COE Program "the Physical Sciences Frontier", MEXT, Japan. T.N. is supported by a Grant-in-Aid for the JSPS fellows. Y.S. 
gratefully acknowledges the support from the Global Collaborative Research Fund ``Worldwide Investigation of Other Worlds'' grant, the Global Scholars Program of Princeton University, and the Grant-in-Aid for Scientific Research by JSPS (No. 24340035). A.T. acknowledges the support from a Grant-in-Aid for Scientific Research by JSPS (No. 24540257). Funding for the SDSS and SDSS-II has been provided by the Alfred P. Sloan Foundation, the Participating Institutions, the National Science Foundation, the U.S. Department of Energy, the National Aeronautics and Space Administration, the Japanese Monbukagakusho, the Max Planck Society, and the Higher Education Funding Council for England. The SDSS Web Site is http://www.sdss.org/. The SDSS is managed by the Astrophysical Research Consortium for the Participating Institutions. The Participating Institutions are the American Museum of Natural History, Astrophysical Institute Potsdam, University of Basel, University of Cambridge, Case Western Reserve University, University of Chicago, Drexel University, Fermilab, the Institute for Advanced Study, the Japan Participation Group, Johns Hopkins University, the Joint Institute for Nuclear Astrophysics, the Kavli Institute for Particle Astrophysics and Cosmology, the Korean Scientist Group, the Chinese Academy of Sciences (LAMOST), Los Alamos National Laboratory, the Max-Planck-Institute for Astronomy (MPIA), the Max-Planck-Institute for Astrophysics (MPA), New Mexico State University, Ohio State University, University of Pittsburgh, University of Portsmouth, Princeton University, the United States Naval Observatory, and the University of Washington.
Instructor Dale F0O.10 Table, Mass of Dynamics Cart Rubber Bumper Thick Spring Thin Spring O.687 O.341 -0.591 -0.296 .631 Vi (mS 0.703 0.352 Vr (ms)-6.340 -O.17 0.691 O.346 -0492 -0. 246 O592 6.634N.S (434s 1510S mv mVf 0.522 0.535 N.S Ap larea S\u096f...\n##### Qutzstion 11Queston TTOntspoints\"We cross two pure lines of plants, find the following result: Orange one with yellow petals and one with red 182 Yellow The F1 are all Jull hypothesis of Red 77 (Total orange 320). What When the F1 is selfed to give an FZ,we incomplete dominance (expected ratio Orangexellovred would be the expected number of 2:1:1)\" Orang? assumingMoving to the next question prevents changes to this answer_Question 11\nQutzstion 11 Queston TTOnts points \"We cross two pure lines of plants, find the following result: Orange one with yellow petals and one with red 182 Yellow The F1 are all Jull hypothesis of Red 77 (Total orange 320). What When the F1 is selfed to give an FZ,we incomplete dominance (expected rat...\n##### 16. [-\/3 Points]DETAILSMARSVECTORCALC6 2.6.008Find the planes tangent to the following surfaces at the indicated points_x2 + 2y2 9xz = 12, at the point ( 1, 2, 3)y2 _ x2 = 8, at the point (1, 3, 7)XyZ 1, at the point (1, 1, 1)\n16. [-\/3 Points] DETAILS MARSVECTORCALC6 2.6.008 Find the planes tangent to the following surfaces at the indicated points_ x2 + 2y2 9xz = 12, at the point ( 1, 2, 3) y2 _ x2 = 8, at the point (1, 3, 7) XyZ 1, at the point (1, 1, 1)...\n##### A chemist dissolves 188.mg of pure nitric acid in enough waterto make up 280.mL of solution. Calculate the pH of the solution.Round your answer to 3 significant decimal places\nA chemist dissolves 188.mg of pure nitric acid in enough water to make up 280.mL of solution. Calculate the pH of the solution. 
Round your answer to 3 significant decimal places...\n##### [-13 Find Find Polnts] the H gravitationa the perod of Its mls DETAILS force Vi revolution acting aspecdt 1 about Earth height above Earth equal MY NOTES Earth'5 mean radius PRACTICE ANOTHER\n[-13 Find Find Polnts] the H gravitationa the perod of Its mls DETAILS force Vi revolution acting aspecdt 1 about Earth height above Earth equal MY NOTES Earth'5 mean radius PRACTICE ANOTHER...\n##### 10.2.2 The following data are from study by Linus Pauling (1971) The significance of the evidence about ascorbic acid and the common cold \" Proceedings of the National Academy of Sciences; Vol: 2678), concerned with examining the relationship between taking vitamin C and the incidence of colds_ Of 279 participants in the study; 140 received placebo (sugar pill) and 139 received vitaminNo Cold Cold Placebo 109 Vitamin \u00e2\u201a\u00ac 122Assess the null hypothesis that there is no relationship between ta\n10.2.2 The following data are from study by Linus Pauling (1971) The significance of the evidence about ascorbic acid and the common cold \" Proceedings of the National Academy of Sciences; Vol: 2678), concerned with examining the relationship between taking vitamin C and the incidence of colds_...\n##### 9) Ifyou react 123.1 g of CuClz with an excess of K3PO4, how much KCl can you form? 68.26 grams Typo on exam, this is the correct value b. 183.1 grams 136.5 grams d. 246.2 grams\n9) Ifyou react 123.1 g of CuClz with an excess of K3PO4, how much KCl can you form? 68.26 grams Typo on exam, this is the correct value b. 183.1 grams 136.5 grams d. 
246.2 grams...","date":"2022-05-23 23:19:43","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 2, \"mathjax_display_tex\": 0, \"mathjax_asciimath\": 1, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.43852099776268005, \"perplexity\": 3893.227659112963}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2022-21\/segments\/1652662562106.58\/warc\/CC-MAIN-20220523224456-20220524014456-00644.warc.gz\"}"}
null
null
\section{Introduction} The mathematical modelling of intra-cellular biological processes has been using nonlinear ordinary differential equations since the early ages of mathematical biophysics in the 1940s and 50s \cite{rashevsky1960mathematical}. A standard modelling choice for cellular circuitry is to use chemical reactions with mass action law kinetics, leading to polynomial differential equations. Rational function kinetics (for instance the Michaelis-Menten kinetics) can generally be decomposed into several mass action steps. An important property of biological systems is their multistationarity, which means having multiple stable steady states. Multistationarity is instrumental to cellular memory and cell differentiation during development or regeneration of multicellular organisms and is also used by micro-organisms in survival strategies. It is thus important to determine the parameter values for which a biochemical model is multistationary. With mass action reactions, testing for multiple steady states boils down to counting real positive solutions of algebraic systems. \newpage The models benchmarked in this paper concern intracellular signaling pathways. These pathways transmit information about the cell environment by inducing cascades of protein modifications (phosphorylation) all the way from the plasma membrane via the cytosol to genes in the cell nucleus. Multistationarity of signaling usually occurs as a result of activation of upstream signaling proteins by downstream components \cite{BhallaLyengar99}. A different mechanism for producing multistationarity in signaling pathways was proposed by Kholodenko \cite{Markevich2004}. In this mechanism the cause of multistationarity is the presence of multiple phosphorylation/dephosphorylation cycles that share enzymes. A simple two-step phosphorylation/dephosphorylation cycle is capable of ultrasensitivity, a form of all-or-nothing response with no multiple steady states (Goldbeter--Koshland mechanism).
In multiple phosphorylation/dephosphorylation cycles, enzyme sharing provides competitive interactions and positive feedback that ultimately lead to multistationarity \cite{Markevich2004,legewie2007competing}. Our study is complementary to works applying numerical methods to ordinary differential equation models used for biology applications. Gross et al. \cite{gross2016numerical} used polynomial homotopy continuation methods for global parameter estimation of mass action models. Bifurcations and multistationarity of signaling cascades were studied with numerical methods based on the Jacobian matrix \cite{zumsande2010bifurcations}. Other symbolic approaches to multistationarity either propose necessary conditions or work for particular networks \cite{Conradi2008,ConradiMincheva,JoshiShiu2015,PerezMillan2015}. Our work here follows \cite{Bradford2017}, where it was demonstrated that determination of multistationarity of an 11-dimensional model of a mitogen-activated protein kinase (MAPK) cascade can be achieved by currently available symbolic methods when numeric values are known for all but potentially one parameter. We show that the symbolic methods used in \cite{Bradford2017}, viz.~real triangularization and cylindrical algebraic decomposition, and also polynomial homotopy continuation methods, benefit tremendously from a graph-theoretical symbolic preprocessing method. This method has been sketched by Grigoriev et al. \cite{Grigoriev2015} and has been used for a ``hand computation,'' but had not been implemented before. For our experiments we use the model already investigated in \cite{Bradford2017} and a higher dimensional model of the MAPK cascade. \section{The Systems for the Case Studies} \label{secMAPK:system} For our investigations we use models of the MAPK cascade that can be found in the Biomodels database\footnote{\url{http://www.ebi.ac.uk/biomodels-main/}} as numbers 26 and 28 \cite{Li2010a}.
We refer to those models as Biomod-26\xspace and Biomod-28\xspace, respectively. \subsection{Biomod-26\xspace} \label{SEC:Sys26Def} Biomod-26\xspace, which we have studied also in \cite{Bradford2017}, is given by the following set of differential equations. We have renamed the species names as $x_1, \ldots, x_{11}$ and the rate constants as $k_1, \ldots, k_{16}$ to facilitate reading: \begin{eqnarray} \difft{x}_1 & = & k_{2} x_{6} + k_{15} x_{11} - k_{1} x_{1} x_{4} - k_{16} x_{1} x_{5} \nonumber \\ \difft{x}_2 & = & k_{3} x_{6} + k_{5} x_{7} + k_{10} x_{9} + k_{13} x_{10} - x_{2} x_{5} (k_{11} + k_{12}) - k_{4} x_{2} x_{4}\nonumber \\ \difft{x}_3 & = & k_{6} x_{7} + k_{8} x_{8} - k_{7} x_{3} x_{5}\nonumber \\ \difft{x}_4 & = & x_{6} (k_{2} + k_{3}) + x_{7} (k_{5} + k_{6}) - k_{1} x_{1} x_{4} - k_{4} x_{2} x_{4}\nonumber \\ \difft{x}_5 & = & k_{8} x_{8} + k_{10} x_{9} + k_{13} x_{10} + k_{15} x_{11} - x_{2} x_{5} (k_{11} + k_{12}) - k_{7} x_{3} x_{5} - k_{16} x_{1} x_{5}\nonumber \\ \difft{x}_6 & = & k_{1} x_{1} x_{4} - x_{6} (k_{2} + k_{3})\nonumber \\ \difft{x}_7 & = & k_{4} x_{2} x_{4} - x_{7} (k_{5} + k_{6})\nonumber \\ \difft{x}_8 & = & k_{7} x_{3} x_{5} - x_{8} (k_{8} + k_{9})\nonumber \\ \difft{x}_9 & = & k_{9} x_{8} - k_{10} x_{9} + k_{11} x_{2} x_{5}\nonumber \\ \difft{x}_{10} & = & k_{12} x_{2} x_{5} - x_{10} (k_{13} + k_{14})\nonumber \\ \difft{x}_{11} & = & k_{14} x_{10} - k_{15} x_{11} + k_{16} x_{1} x_{5} \label{EQ:thesystem26} \end{eqnarray} The Biomodels database also gives us meaningful values for the rate constants, which we generally substitute into the corresponding systems for our purposes here: \begin{align} k_{1} &= 0.02,& k_{2} &= 1,& k_{3} &= 0.01,& k_{4} &= 0.032,\nonumber\\ k_{5} &= 1,& k_{6} &= 15,& k_{7} &= 0.045,& k_{8} &= 1,\nonumber\\ k_{9} &= 0.092,& k_{10} &= 1,& k_{11} &= 0.01,& k_{12} &= 0.01,\nonumber\\ k_{13} &= 1,& k_{14} &= 0.5,& k_{15} &= 0.086,& k_{16} &= 0.0011.\label{EQ:rcestimates26} \end{align} Using the left-null space 
of the stoichiometric matrix under positive conditions as a conservation constraint \cite{Famili2003} we obtain three linear conservation laws: \begin{eqnarray} x_{5} + x_{8} + x_{9} + x_{10} + x_{11} &=& k_{17}, \nonumber \\ x_{4} + x_{6} + x_{7} &=& k_{18},\nonumber \\ x_{1} + x_{2} + x_{3} + x_{6} + x_{7} + x_{8} + x_{9} + x_{10} + x_{11} &=& k_{19}, \label{EQ:claws26} \end{eqnarray} where $k_{17}$, $k_{18}$, $k_{19}$ are new constants computed from the initial data. Those constants are the parameters that we are interested in here. The steady state problem for the MAPK cascade can now be formulated as a real algebraic problem as follows. We replace the left hand sides of all equations in (\ref{EQ:thesystem26}) with $0$ and substitute the values from (\ref{EQ:rcestimates26}). This together with (\ref{EQ:claws26}) yields a system of parametric polynomial equations with polynomials in $\mathbb{Z}[k_{17},k_{18},k_{19}][x_1,\dots,x_{11}]$. Since all entities in our model are strictly positive, we add to our system positivity conditions $k_{17}>0$, $k_{18}>0$, $k_{19}>0$ and $x_1>0$, \dots,~$x_{11}>0$. In terms of first-order logic the conjunction over our equations and inequalities yields a quantifier-free Tarski formula. \subsection{Biomod-28\xspace} The system with number 28 in the Biomodels database is given by the following set of differential equations. 
Again, we have renamed the species names into $x_1, \ldots, x_{16}$ and the rate constants into $k_1, \ldots, k_{27}$ to facilitate reading: \begin{eqnarray*} \difft{x}_{1} & = & k_2 x_9 + k_8 x_{10} + k_{21} x_{15} + k_{26} x_{16} - k_1 x_1 x_5 - k_7 x_1 x_5 - k_{22} x_1 x_6 - k_{27} x_1 x_6 \nonumber \\ \difft{x}_{2} &= & k_3 x_9 + k_5 x_7 + k_{24} x_{12} - k_4 x_2 x_5 - k_{23} x_2 x_6 \nonumber \\ \difft{x}_{3} & = & k_9 x_{10} + k_{11} x_8 + k_{16} x_{13} + k_{19} x_{14} - k_{10} x_3 x_5 - k_{17} x_3 x_6 - k_{18} x_3 x_6 \nonumber \\ \difft{x}_{4} &=& k_6 x_7 + k_{12} x_8 + k_{14} x_{11} - k_{13} x_4 x_6 \nonumber \\ \difft{x}_{5} &= & k_2 x_9 + k_3 x_9 + k_5 x_7 + k_6 x_7 + k_8 x_{10} + k_9 x_{10} + k_{11} x_8 + k_{12} x_8 -\nonumber \\ & & \quad k_1 x_1 x_5 - k_4 x_2 x_5 - k_7 x_1 x_5 - k_{10} x_3 x_5 \nonumber \\ \difft{x}_{6} & = & k_{14} x_{11} + k_{16} x_{13} + k_{19} x_{14} + k_{21} x_{15} + k_{24} x_{12} + k_{26} x_{16} - \nonumber \\ & & \quad k_{13} x_4 x_6 - k_{17} x_3 x_6 - k_{18} x_3 x_6 - k_{22} x_1 x_6 - k_{23} x_2 x_6 - k_{27} x_1 x_6 \nonumber \\ \difft{x}_{7} & = & k_4 x_2 x_5 - k_6 x_7 - k_5 x_7 \nonumber \\ \difft{x}_{8} &= & k_{10} x_3 x_5 - k_{12} x_8 - k_{11} x_8 \nonumber \\ \difft{x}_{9} &= & k_1 x_1 x_5 - k_3 x_9 - k_2 x_9 \nonumber \\ \difft{x}_{10} &= &k_7 x_1 x_5 - k_9 x_{10} - k_8 x_{10} \nonumber \\ \difft{x}_{11} &=& k_{13} x_4 x_6 - k_{15} x_{11} - k_{14} x_{11} \nonumber \\ \difft{x}_{12}& = & k_{23} x_2 x_6 - k_{25} x_{12} - k_{24} x_{12} \nonumber \\ \difft{x}_{13} &= &k_{15} x_{11} - k_{16} x_{13} + k_{17} x_3 x_6 \nonumber \\ \difft{x}_{14}& = & k_{18} x_3 x_6 - k_{20} x_{14} - k_{19} x_{14} \nonumber \\ \difft{x}_{15}& = &k_{20} x_{14} - k_{21} x_{15} + k_{22} x_1 x_6 \nonumber \\ \difft{x}_{16}& = &k_{25} x_{12} - k_{26} x_{16} + k_{27} x_1 x_6 \label{EQ:thesystem28} \end{eqnarray*} The estimates of the rate constants given in the Biomodels database are: \begin{align*} k_{1} &= 0.005,& k_{2} &= 1,& k_{3} &= 1.08,& k_{4} 
&= 0.025,\nonumber\\ k_{5} &= 1,& k_{6} &= 0.007,& k_{7} &= 0.05,& k_{8} &= 1,\nonumber\\ k_{9} &= 0.008,& k_{10} &= 0.005,& k_{11} &= 1,& k_{12} &= 0.45,\nonumber\\ k_{13} &= 0.045,& k_{14} &= 1,& k_{15} &= 0.092,& k_{16} &= 1,&\nonumber\\ k_{17} &= 0.01,& k_{18} &= 0.01,& k_{19} &= 1,& k_{20} &= 0.5,&\nonumber\\ k_{21} &= 0.086,& k_{22} &= 0.0011,& k_{23} &= 0.01,& k_{24} &= 1,&\nonumber\\ k_{25} &= 0.47,& k_{26} &= 0.14,& k_{27} &= 0.0018. \label{EQ:rcestimates28} \end{align*} Again, using the left-null space of the stoichiometric matrix under positive conditions as a conservation constraint \cite{Famili2003} we obtain the following \begin{eqnarray*} x_6 + x_{11} + x_{12} + x_{13}+ x_{14} + x_{15} + x_{16} &=& k_{28}, \nonumber \\ x_5 + x_7 + x_8 + x_9 + x_{10} &=& k_{29} ,\nonumber \\ x_1 + x_2 + x_3 + x_4 + x_7 + x_8 + x_9 + x_{10} + x_{11} + {} & &\nonumber \\ \quad x_{12} + x_{13} + x_{14} + x_{15} + x_{16} &=& k_{30}, \label{EQ:claws28} \end{eqnarray*} where $k_{28}$, $k_{29}$, $k_{30}$ are new constants computed from the initial data. We formulate the real algebraic problem as described at the end of Sect.~\ref{SEC:Sys26Def}. In particular, note that we need positivity conditions for all variables and parameters. \section{Graph-Theoretical Symbolic Preprocessing} The complexity, primarily in terms of dimension, of polynomial systems obtained with steady-state approximations of biological models plus conservation laws is comparatively high for the application of symbolic methods. It is therefore highly relevant for the success of such methods to identify and exploit particular structural properties of the input. Our models have remarkably low total degrees with many linear monomials after some substitutions for rate constants. This suggests to preprocess with essentially Gaussian elimination in the sense of solving single suitable equations with respect to some variable and substituting the corresponding solution into the system. 
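To make such an elimination step concrete, here is a minimal SymPy sketch (an illustration only; the computations in this paper use Redlog) of eliminating the linear variable $x_6$ from the steady-state equation $\difft{x}_6 = 0$ of Biomod-26\xspace, with the numeric rate constants from (\ref{EQ:rcestimates26}):

```python
import sympy as sp

# Illustrative single elimination step on Biomod-26 (not the Redlog code).
x1, x4, x6 = sp.symbols("x1 x4 x6", positive=True)
k1, k2, k3 = sp.Rational(2, 100), sp.Integer(1), sp.Rational(1, 100)

# Steady state of x6: 0 = k1*x1*x4 - (k2 + k3)*x6 is linear in x6.
x6_sol = sp.solve(sp.Eq(k1 * x1 * x4 - (k2 + k3) * x6, 0), x6)[0]
print(x6_sol)                       # 2*x1*x4/101

# Substituting into the x6-terms of the first equation keeps x1 linear:
term = sp.simplify(k2 * x6_sol - k1 * x1 * x4)
print(term)                         # -x1*x4/5050
```

Because the positivity assumptions guarantee that the divided coefficient $k_2 + k_3$ cannot vanish, no case distinction arises in this step.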
Generalizing this idea to situations where linear variables have parametric coefficients in the other variables requires, in general, a parametric variant of Gaussian elimination, which replaces the input system with a finite case distinction with respect to the vanishing of certain coefficients, with one reduced system for each case. With Biomod-26\xspace and Biomod-28\xspace considered here it turns out that the positivity assumptions on the variables are strong enough to effectively guarantee the non-vanishing of all relevant coefficients, so that case distinctions are never necessary. On the other hand, those positivity conditions establish an apparent obstacle, because we are formally not dealing with a parametric system of linear equations but with a parametric linear programming problem. However, the theory of real quantifier elimination by virtual substitution tells us that the inequality constraints may play a passive role here: they must be considered when substituting Gauss solutions from the equations, but can otherwise be ignored \cite{LoosWeispfenning:93a,Kosta:16a}. Parametric Gaussian elimination can increase the degrees of variables in the parametric coefficients, in particular destroying their linearity and thus their suitability for further reductions. As an example, consider the steady-state approximation of the system in (\ref{EQ:thesystem26}), i.e., all left hand sides replaced with $0$: solve the last equation for $x_5$ and substitute the result into the first equation. The natural question of an optimal strategy to Gauss-eliminate a maximal number of variables has been answered positively only recently~\cite{Grigoriev2015}: draw a graph whose vertices are the variables and whose edges indicate multiplication between variables within some monomial. Then one can Gauss-eliminate a \emph{maximum independent set}, which is the complement of a \emph{minimum vertex cover}.
Fig.~\ref{fig:vc} shows that graph for Biomod-26\xspace, where $\{x_4,x_5\}$ is a minimum vertex cover, and all other variables can be linearly eliminated. Similarly, for Biomod-28\xspace we find $\{x_5,x_6\}$ as a minimum vertex cover. Recall that minimum vertex cover is one of Karp's 21 classical NP-complete problems \cite{Karp:72}. However, the instances considered here, and those to be expected from other biological models, are so small that the use of existing approximation algorithms \cite{Grandoni2008} appears unnecessary. We have used real quantifier elimination, which did not consume measurable CPU time; alternatively one could use integer linear programming or SAT-solving. \begin{figure}[t] \centering \begin{tikzpicture}[scale=.7,auto=left,every node/.style={circle,fill=blue!20,minimum size = 2.5em}] \node (n3) at (1,20) {$x_3$}; \node (n5) at (3,20) {$x_5$}; \node (n1) at (5,21) {$x_1$}; \node (n2) at (5,19) {$x_2$}; \node (n4) at (7,20) {$x_4$}; \node (n6) at (9,21) {$x_6$}; \node (n7) at (11,21) {$x_7$}; \node (n8) at (13,21) {$x_8$}; \node (n9) at (9,19) {$x_9$}; \node (n10) at (11,19) {$x_{10}$}; \node (n11) at (13,19) {$x_{11}$}; \foreach \from/\to in {n3/n5,n1/n5,n1/n4,n2/n4,n2/n5} \draw (\from) -- (\to); \end{tikzpicture} \caption{The graph for Biomod-26\xspace is loosely connected. Its minimum vertex cover $\{x_4,x_5\}$ is small. All other variables form a maximum independent set, which can be eliminated with linear methods.\label{fig:vc}} \end{figure} It is a most remarkable fact that a significant number of biological models in the databases have that property of loosely connected variables. This phenomenon resembles the well-known \emph{community structure} of propositional satisfiability problems, which has been identified as one of the key structural reasons for the impressive success of state-of-the-art CDCL-based SAT solvers \cite{girvan2002community}.
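The vertex cover computation itself is tiny at this scale. The following Python sketch (a brute-force illustration; as stated above, the paper used real quantifier elimination for this step) computes a minimum vertex cover of the graph of Fig.~\ref{fig:vc}, where node $i$ stands for $x_i$:

```python
from itertools import combinations

# Edges of the variable graph for Biomod-26: node i stands for x_i, and an
# edge (i, j) records that some monomial contains the product x_i*x_j.
edges = [(1, 4), (1, 5), (2, 4), (2, 5), (3, 5)]
nodes = list(range(1, 12))  # x_1, ..., x_11

def min_vertex_cover(nodes, edges):
    # Exhaustive search by increasing cover size: fine for such small
    # instances, although the problem is NP-complete in general.
    for k in range(len(nodes) + 1):
        for cand in combinations(nodes, k):
            cover = set(cand)
            if all(u in cover or v in cover for u, v in edges):
                return cover
    return set(nodes)

cover = min_vertex_cover(nodes, edges)
independent = set(nodes) - cover  # variables that can be Gauss-eliminated
print(sorted(cover))        # [4, 5]
print(len(independent))     # 9
```

The complement of the cover is the maximum independent set of nine variables that the preprocessing eliminates linearly.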
We conclude this section with the reduced systems as computed with our implementation in Redlog~\cite{DolzmannSturm:97a}. For Biomod-26\xspace we obtain $x_{5} >0$, $x_{4} > 0$, $k_{19} > 0$, $k_{18}> 0$, $k_{17} > 0$ and {\small\begin{eqnarray*} 1062444 k_{18} x_{4}^{2} x_{5} + 23478000 k_{18} x_{4}^{2} + 1153450 k_{18} x_{4} x_{5}^{2} + 2967000 k_{18} x_{4} x_{5} &&\\ {} + 638825 k_{18} x_{5}^{3} + 49944500 k_{18} x_{5}^{2} - 5934 k_{19} x_{4}^{2} x_{5} - 989000 k_{19} x_{4} x_{5}^{2}&&\\ {} - 1062444 x_{4}^{3} x_{5} - 23478000 x_{4}^{3} - 1153450 x_{4}^{2} x_{5}^{2}- 2967000 x_{4}^{2} x_{5}&&\\ {} - 638825 x_{4} x_{5}^{3} - 49944500 x_{4} x_{5}^{2} &=& 0,\\ 1062444 k_{17} x_{4}^{2} x_{5} + 23478000 k_{17} x_{4}^{2} + 1153450 k_{17}x_{4} x_{5}^{2} + 2967000 k_{17} x_{4} x_{5}&&\\ {}+ 638825 k_{17} x_{5}^{3} + 49944500 k_{17} x_{5}^{2} - 1056510 k_{19} x_{4}^{2} x_{5} - 164450 k_{19} x_{4} x_{5}^{2}&&\\ {}- 638825 k_{19} x_{5}^{3} - 1062444 x_{4}^{2} x_{5}^{2} - 23478000 x_{4}^{2} x_{5} - 1153450 x_{4} x_{5}^{3}&&\\ {} - 2967000 x_{4} x_{5}^{2} - 638825 x_{5}^{4} - 49944500 x_{5}^{3} &=& 0. 
\end{eqnarray*}} For Biomod-28\xspace we obtain $x_{6} >0$, $x_{5} > 0$, $k_{30} > 0$, $k_{29}> 0$, $k_{28} > 0$ and {\small\begin{eqnarray*} 3796549898085 k_{29} x_{5}^{3} x_{6} + 71063292573000 k_{29} x_{5}^{3} + 106615407090630 k_{29} x_{5}^{2} x_{6}^{2}&&\\ {}+ 479383905861000 k_{29} x_{5}^{2} x_{6} + 299076127852260 k_{29} x_{5} x_{6}^{3}&&\\ {}+ 3505609439955600 k_{29} x_{5} x_{6}^{2} + 91244417457024 k_{29} x_{6}^{4}&&\\ {}+ 3557586742819200 k_{29} x_{6}^{3} - 598701732300 k_{30} x_{5}^{3} x_{6}&&\\ {} - 83232870778950 k_{30} x_{5}^{2} x_{6}^{2} - 185019487578700 k_{30} x_{5}x_{6}^{3}&&\\ - 3796549898085 x_{5}^{4} x_{6} - 71063292573000 x_{5}^{4} - 106615407090630 x_{5}^{3} x_{6}^{2}&&\\ {} - 479383905861000 x_{5}^{3} x_{6} - 299076127852260 x_{5}^{2} x_{6}^{3} - 3505609439955600 x_{5}^{2} x_{6}^{2}&&\\ {}- 91244417457024 x_{5} x_{6}^{4} - 3557586742819200 x_{5} x_{6}^{3} &=& 0, \\ 3796549898085 k_{28} x_{5}^{3} x_{6} + 71063292573000 k_{28} x_{5}^{3} + 106615407090630 k_{28} x_{5}^{2} x_{6}^{2}&&\\ {}+ 479383905861000 k_{28} x_{5}^{2} x_{6} + 299076127852260 k_{28} x_{5} x_{6}^{3}&&\\ {}+ 3505609439955600 k_{28} x_{5} x_{6}^{2} + 91244417457024 k_{28} x_{6}^{4}&&\\ {}+ 3557586742819200 k_{28} x_{6}^{3} - 3197848165785 k_{30} x_{5}^{3} x_{6}&&\\ {} - 23382536311680 k_{30} x_{5}^{2} x_{6}^{2} - 114056640273560 k_{30} x_{5} x_{6}^{3}&&\\ {}- 91244417457024 k_{30} x_{6}^{4} - 3796549898085 x_{5}^{3} x_{6}^{2} - 71063292573000 x_{5}^{3} x_{6}&&\\ {}- 106615407090630 x_{5}^{2} x_{6}^{3} - 479383905861000 x_{5}^{2} x_{6}^{2} - 299076127852260 x_{5} x_{6}^{4}&&\\ {} - 3505609439955600 x_{5} x_{6}^{3} - 91244417457024 x_{6}^{5} - 3557586742819200 x_{6}^{4} &=& 0. \end{eqnarray*}}% Notice that no complex positivity constraints come into existence with these examples. All corresponding substitution results are entailed by the other constraints, which is implicitly discovered by using the standard simplifier from \cite{DolzmannSturm:97c} during preprocessing. 
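For a fixed numeric choice of the conservation constants, the positive solutions of such a reduced system can also be counted by elementary elimination. The following SymPy sketch (an illustration only, not the Bertini or Maple toolchains used in this paper) does this for the reduced Biomod-26\xspace system displayed above, eliminating $x_4$ by a resultant; note that the resultant may contain extraneous roots, so it only yields candidate $x_5$ coordinates of positive steady states:

```python
import sympy as sp

x4, x5, k17, k18, k19 = sp.symbols("x4 x5 k17 k18 k19", positive=True)

# The two reduced equations for Biomod-26, transcribed from the text.
p1 = (1062444*k18*x4**2*x5 + 23478000*k18*x4**2 + 1153450*k18*x4*x5**2
      + 2967000*k18*x4*x5 + 638825*k18*x5**3 + 49944500*k18*x5**2
      - 5934*k19*x4**2*x5 - 989000*k19*x4*x5**2
      - 1062444*x4**3*x5 - 23478000*x4**3 - 1153450*x4**2*x5**2
      - 2967000*x4**2*x5 - 638825*x4*x5**3 - 49944500*x4*x5**2)
p2 = (1062444*k17*x4**2*x5 + 23478000*k17*x4**2 + 1153450*k17*x4*x5**2
      + 2967000*k17*x4*x5 + 638825*k17*x5**3 + 49944500*k17*x5**2
      - 1056510*k19*x4**2*x5 - 164450*k19*x4*x5**2 - 638825*k19*x5**3
      - 1062444*x4**2*x5**2 - 23478000*x4**2*x5 - 1153450*x4*x5**3
      - 2967000*x4*x5**2 - 638825*x5**4 - 49944500*x5**3)

def candidate_x5_roots(v17, v18, v19):
    """Positive real roots of the resultant res_{x4}(p1, p2) at a sample
    point; these contain the x5 coordinates of all positive steady states."""
    q1 = p1.subs({k17: v17, k18: v18, k19: v19})
    q2 = p2.subs({k17: v17, k18: v18, k19: v19})
    res = sp.resultant(q1, q2, x4)
    return [r for r in sp.real_roots(sp.Poly(res, x5)) if r > 0]

# A sample point inside the parameter grid studied in the next section.
roots = candidate_x5_roots(100, 50, 500)
```

Since there is at least one positive steady state at each sampled parameter point, the root list is never empty.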
\section{Determination of Multiple Steady States} \label{SEC:Grid} We aim to identify via grid sampling regions of parameter space where multistationarity occurs. Our focus is on the identification of regions with multiple positive real solutions for the parameters introduced with the conservation laws. We will encounter one or three such solutions and allow ourselves for biological reasons to assume monostability or bistability, respectively. Furthermore, a change in the number of solutions between one and three is indicative of a saddle-node bifurcation between a monostable and a bistable case. A mathematically rigorous treatment of stability would, possibly symbolically, analyze the eigenvalues of the Jacobian of the respective polynomial vector field. We consider two different approaches: first a polynomial homotopy continuation method implemented in Bertini, and second a combination of symbolic computation methods implemented in Maple. We compare the approaches with respect to performance and quality of results for both the reduced and the unreduced systems. \subsection{Numerical Approach} \label{SEC:Bertini} We use the homotopy solver Bertini \cite{BHSW06} in its standard configuration to compute complex roots. We parse the output of Bertini using Python and determine numerically which of the complex roots are real and positive, using a threshold of $10^{-6}$ for positivity. Computations are done in Python with Bertini embedded. For system Biomod-26\xspace we produced the two plots in Fig.~\ref{FIG:Bertini-Sys26-Original} using the original system and the two in Fig.~\ref{FIG:Bertini-Sys26-Reduced} using the reduced system. The sampling range for $k_{19}$ was from 200 to 1000 by 50. In the left plots the sampling range for $k_{17}$ is from 80 to 200 by 10 with $k_{18}$ fixed at 50. In the right plots the sampling range for $k_{18}$ is from 5 to 75 by 5 with $k_{17}$ fixed at 100.
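The root classification just described amounts to a simple filter over Bertini's complex roots. The following Python sketch is illustrative only, with a made-up list of roots; parsing Bertini's actual output is more involved:

```python
TOL = 1e-6  # threshold for numerical realness and positivity, as above

def count_positive_real(roots):
    """Count complex roots that are numerically real and positive."""
    return sum(1 for z in roots
               if abs(z.imag) < TOL and z.real > TOL)

# Made-up sample: three positive fixed points, one negative real root,
# and one genuinely complex root.
sample = [2.0 + 1e-9j, 0.5 + 0j, 7.3 - 1e-8j, -1.0 + 0j, 1.2 + 0.8j]
print(count_positive_real(sample))  # 3
```

A count of one is interpreted as monostability and a count of three as bistability; any other count is flagged as a numerical error.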
We see two regions forming according to the number of fixed points: yellow discs indicate one fixed point and blue boxes three. The diamonds indicate numerical errors where zero (red) or two (green) fixed points were identified. We analyse these further in Sect.~\ref{SEC:comp}. For Biomod-28\xspace we produced the two plots in Fig.~\ref{FIG:Bertini-Sys28-Original} using the original system. The sampling range for $k_{30}$ was from 100 to 1600 by 100. In the left plots the sampling range for $k_{28}$ is from 40 to 160 by 10 with $k_{29}$ fixed at 180. In the right plots the sampling range for $k_{29}$ is from 120 to 240 by 10 with $k_{28}$ fixed at 100. The colours and shapes indicate the number of fixed points as before. For the reduced system, Bertini (wrongly) could not find any roots (not even complex ones) for any of the parameter settings. The situation did not change when going from adaptive precision to a very high fixed precision. However, we have not attempted more sophisticated techniques like providing user homotopies. We analyse these results further in Sect.~\ref{SEC:comp}. \subsection{Symbolic Approach} \label{SEC:Maple} Our next approach will still use grid sampling, but each sample point will undergo a symbolic computation. The result will still be an approximate identification of the region (since the sampling will be finite), but the results at those sample points will be guaranteed free of numerical errors. The computations follow the strategy introduced in \cite[Section 2.1.2]{Bradford2017}. This combines tools from the Regular Chains Library\footnote{\url{http://www.regularchains.org/}}, available for use in Maple. Regular chains are triangular decompositions of systems of polynomial equations (triangular in terms of the variables in each polynomial). Highly efficient methods for working in complex space have been developed based on these (see \cite{Wang2000} for a survey).
We make use of recent work by Chen et al.~\cite{CDMMXX13} which adapts these tools to the real analogue: semi-algebraic systems. They describe algorithms to decompose any real polynomial system into finitely many regular semi-algebraic systems: both directly and by computation of components by dimension. The latter (the so-called \emph{lazy} variant) was key to solving the 1-parameter MAPK problem in \cite{Bradford2017}. However, for the zero-dimensional computations of this paper there is only one solution component and so no savings from lazy computations. For a given system and sample point we apply the real triangularization (RT) to the quantifier-free formula (as described at the end of Sect.~\ref{SEC:Sys26Def}: a quantifier-free conjunction of equations and inequalities) evaluated with the parameter estimates and sample point values. This produces a simplified system in several senses. First, as guaranteed by the algorithm, the output is triangular according to a variable ordering. So there is a univariate component, then a bivariate component introducing one more variable, and so on. Second, for all the MAPK models we have studied so far, all but the final (univariate) one of these equations have been linear in their main variables. This allows for easy back substitution. Third, most of the positivity conditions are implied by the output rather than being an explicit part of it, in which case a simpler sub-system can be solved and back substitution performed instantly. \subsubsection{Biomod-26\xspace} For the original version of Biomod-26\xspace the output of RT was a component consisting of 11 equations and a single inequality. The equations were in ascending main variable according to the provided ordering (same as the labelling). All but the final equation are linear in their main variables, with the final equation being univariate of degree 6 in $x_1$.
The output of the triangularization requires that this variable be positive, $x_1>0$, with the positivity of the other variables implied by solutions to the system. So to proceed we must find the positive real roots of the degree 6 univariate polynomial in $x_1$: counting these gives the number of real positive solutions of the parent system. We do this using the root isolation tools in the Regular Chains Library. This whole process was performed iteratively for the same sampling regime as Bertini used to produce Fig.~\ref{FIG:Maple-Sys26}. We repeated the process on the reduced version of the system. The triangularization again reduced the problem to univariate real root isolation, this time with only one back substitution step needed. As is to be expected from a fully symbolic computation, the output is identical and so again represented by Fig.~\ref{FIG:Maple-Sys26}. However, the computation was significantly quicker with this reduced system. More details are given in the comparison in Sect.~\ref{SEC:comp}. \subsubsection{Biomod-28\xspace} The same process was conducted on Biomod-28\xspace. As with Biomod-26\xspace the system was triangular with all but the final equation linear in its main variable; this time the final equation is degree 8. However, unlike Biomod-26\xspace, two positivity conditions were returned in the output, meaning we must solve a bivariate problem before we can back substitute to the full system. Rather than just perform univariate real root isolation we must build a Cylindrical Algebraic Decomposition (CAD) (see, e.g., \cite{BDEMW16} and the references within) sign invariant for the final two equations and interrogate its cells to find those where the equations are satisfied and the variables positive. Counting these, we always find 1 or 3 cells, with the latter indicating bistability. This is similar to the approach used in \cite{Bradford2017}, although in that case the 2D CAD was for one variable and one parameter.
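The per-sample-point decision thus reduces to counting positive real roots of a univariate polynomial. A minimal sketch of this grid sampling step, using a toy parametric cubic in place of the actual degree 6 and 8 MAPK polynomials:

```python
import sympy as sp

x, k = sp.symbols('x k')

# Toy stand-in for the final univariate equation of the triangular set:
# a cubic whose number of positive real roots depends on the parameter k.
p = x**3 - 3*x + k

def positive_root_count(kval):
    poly = sp.Poly(p.subs(k, kval), x)
    return sum(1 for r in sp.real_roots(poly) if r.evalf() > 0)

# Grid sampling over the parameter, recording the count at each point
counts = {kval: positive_root_count(kval) for kval in [-3, -1, 0, 1, 3]}
print(counts)  # {-3: 1, -1: 1, 0: 1, 1: 2, 3: 0}
```

Plotting such counts over the parameter grid is exactly what produces region pictures like Fig.~\ref{FIG:Maple-Sys26}, with the count value determining the colour and shape of each sample point.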
We used the implementation of CAD in the Regular Chains Library \cite{CMXY09,BCDEMW14} with the results producing the plots in Fig.~\ref{FIG:Maple-Sys28}. For the reduced system we proceeded similarly. A 2D CAD still needed to be produced after triangularization and so in this case there was no reduction in the number of equations to study with CAD via back substitution. However, it was still beneficial to pre-process CAD with real triangularization: the average time per sample point with pre-processing (and including time taken to pre-process) was 0.485 seconds while without it was 3.577 seconds. \subsection{Comparison} \label{SEC:comp} \begin{figure}[p] \setlength{\abovecaptionskip}{5pt} \setlength{\belowcaptionskip}{10pt plus 5pt} \centering \includegraphics[width=0.44\textwidth]{Sys26-Bertini-Org-k17k19.png} \includegraphics[width=0.44\textwidth]{Sys26-Bertini-Org-k18k19.png} \caption{Bertini grid sampling on the original version of Biomod-26\xspace (see Sect.~\ref{SEC:Bertini})\label{FIG:Bertini-Sys26-Original}} \end{figure} \begin{figure}[p] \setlength{\abovecaptionskip}{5pt} \setlength{\belowcaptionskip}{10pt plus 5pt} \centering \includegraphics[width=0.44\textwidth]{Sys26-Bertini-Red-k17k19} \includegraphics[width=0.44\textwidth]{Sys26-Bertini-Red-k18k19} \caption{Bertini grid sampling on the reduced version of Biomod-26\xspace (see Sect.~\ref{SEC:Bertini})\label{FIG:Bertini-Sys26-Reduced}} \end{figure} \begin{figure}[p] \setlength{\abovecaptionskip}{5pt} \setlength{\belowcaptionskip}{0pt} \centering \includegraphics[width=0.44\textwidth]{Sys26-Maple-k17k19} \includegraphics[width=0.44\textwidth]{Sys26-Maple-k18k19} \caption{Maple grid sampling on Biomod-26\xspace (see Sect.~\ref{SEC:Maple})\label{FIG:Maple-Sys26}} \end{figure} \begin{figure}[p] \setlength{\abovecaptionskip}{5pt} \setlength{\belowcaptionskip}{10pt plus 5pt} \centering \includegraphics[width=0.44\textwidth]{Sys28-Bertini-Org-k28k30}
\includegraphics[width=0.44\textwidth]{Sys28-Bertini-Org-k29k30} \caption{Bertini grid sampling on the original version of Biomod-28\xspace (see Sect.~\ref{SEC:Bertini})\label{FIG:Bertini-Sys28-Original}} \end{figure} \begin{figure}[p] \setlength{\abovecaptionskip}{5pt} \setlength{\belowcaptionskip}{10pt plus 5pt} \centering \includegraphics[width=0.44\textwidth]{Sys28-Maple-k28k30} \includegraphics[width=0.44\textwidth]{Sys28-Maple-k29k30} \caption{Maple grid sampling on Biomod-28\xspace (see Sect.~\ref{SEC:Maple})\label{FIG:Maple-Sys28}} \end{figure} \begin{figure}[p] \setlength{\abovecaptionskip}{5pt} \setlength{\belowcaptionskip}{0pt} \centering \includegraphics[width=0.44\textwidth]{Sys28-Detailed-k28k30} \includegraphics[width=0.44\textwidth]{Sys28-Detailed-k29k30} \caption{As Fig.~\ref{FIG:Maple-Sys28} but with a higher sampling rate\label{FIG:Sys28Detailed}} \end{figure} Figure~\ref{FIG:Bertini-Sys26-Original}, Fig.~\ref{FIG:Bertini-Sys26-Reduced}, and Fig.~\ref{FIG:Maple-Sys26} all refer to Biomod-26\xspace. The latter, produced using the symbolic techniques in Maple, is guaranteed free of numerical error. We see that computing with the reduced system rather than the original system allowed Bertini to avoid such errors: the rogue red and green diamonds in Fig.~\ref{FIG:Bertini-Sys26-Original}. However, in the case of Biomod-28\xspace the reduction led to catastrophic effects for Bertini: built-in heuristics quickly (and wrongly) concluded that there are no zero dimensional solutions for the system, and no solutions could be found either when switching to a positive dimensional run. Bertini computations (v1.5.1) were carried out on a Linux 64 bit Desktop PC with Intel i7. Maple computations (v2016 with April 2017 Regular Chains) were carried out on a Windows 7 64 bit Desktop PC with Intel i5. For Biomod-26\xspace the pairs of plots together contain 476 sample points. Table~\ref{TAB:SysTime} shows timing data.
We see that both Bertini and Maple benefited from the reduced system: Bertini took a third of the original time while the speedup for Maple was even greater: a tenth of the original. Also, perhaps surprisingly, the symbolic methods were quicker than the numerical ones here. For Biomod-28\xspace the speed-up enjoyed by the symbolic methods was even greater (almost 100 fold). However, for the original version of this system Bertini was significantly faster than Maple. The symbolic methods used are well known for their doubly exponential computational complexity (in the number of variables), so it is not surprising that the balance of the comparison shifts towards the numerical methods as the system size increases. We also see some statistical data for the Maple timings: the standard deviation is fairly modest, but each row contains outliers many multiples of the mean value, and so the median is always a little less than the mean. \begin{table}[t] \addtolength{\tabcolsep}{0.3em} \centering \caption{Timing data (in seconds) of the grid samplings described in Sect.~\ref{SEC:Grid}. Numerical computation is using Bertini; Symbolic computation is using Maple Regular Chains\label{TAB:SysTime}} \begin{tabular}{lr@{\qquad}rrrr} & \multicolumn{1}{c@{\qquad}}{\textbf{Numerical}} & \multicolumn{4}{c}{\textbf{Symbolic}}\\ & Mean & Mean & Median & StdDev & Maximum \\ 026 -- Original & 2.4~ & 0.568 & 0.530 & 0.107 & 0.905 \\ 026 -- Reduced & 0.85 & 0.053 & 0.047 & 0.036 & 0.343 \\ 028 -- Original & 16.57 & 42.430 & 40.529 & 8.632 & 84.116 \\ 028 -- Reduced & $\bot$ & 0.485 & 0.468 & 0.119 & 0.796 \end{tabular} \end{table} \subsection{Going Further} \label{SEC:3d} Of course, we could increase the sampling density to get an improved idea of the bistability region, as in Fig.~\ref{FIG:Sys26Detailed} and Fig.~\ref{FIG:Sys28Detailed}. However, a greater understanding comes with 3D sampling.
We have performed this using the symbolic approach described above, at a cost that grows linearly with the number of sample points. This was completed for Biomod-26\xspace: the region in question is bounded on both sides in the $k_{17}$ and $k_{18}$ directions but extends infinitely above in $k_{19}$. With the $k_{19}$ range capped at 1000, the region is contained by extending $k_{17}$ to 800 and $k_{18}$ to 600. For obtaining exact bounds (in one parameter) see \cite{Bradford2017}. Sampling in steps of 20 for $k_{17}$ and $k_{18}$ and 50 for $k_{19}$ produced a Maple point plot of 20\,400 points in 18 minutes. Figure \ref{FIG:3dPointPlot} shows 2D captures of the 3D bistable points and Fig.~\ref{FIG:3dConvexHull} the convex hull of these, produced using the convex package\footnote{\url{http://www.math.uwo.ca/~mfranz/convex/}}. We note the lens shape seen in the orientation in the left plots is comparable with the image in the original paper of Markevich et al.~\cite[Fig.~S7]{Markevich2004}.
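The hull computation itself was done with Maple's convex package; the same idea can be sketched in a few lines for a 2D slice of the bistable sample points using Andrew's monotone chain algorithm (an illustrative stand-in, not the code used here):

```python
def convex_hull(points):
    """Andrew's monotone chain: convex hull of 2D points, counter-clockwise."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts

    def cross(o, a, b):
        return (a[0]-o[0])*(b[1]-o[1]) - (a[1]-o[1])*(b[0]-o[0])

    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

# Hull of a toy 2D slice of bistable sample points: only the 4 corners remain.
grid = [(i, j) for i in range(5) for j in range(5)]
hull = convex_hull(grid)
print(hull)  # [(0, 0), (4, 0), (4, 4), (0, 4)]
```

Running this slice by slice (or a 3D hull algorithm directly) gives exactly the kind of outer approximation of the bistable region shown in Fig.~\ref{FIG:3dConvexHull}.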
\begin{figure}[p] \centering \includegraphics[width=0.44\textwidth]{Sys26-Detailed-k17k19.png} \includegraphics[width=0.44\textwidth]{Sys26-Detailed-k18k19.png} \caption{As Fig.~\ref{FIG:Maple-Sys26} but with a higher sampling rate\label{FIG:Sys26Detailed}} \end{figure} \begin{figure}[p] \centering \includegraphics[height=0.25\textheight, width=0.45\textwidth]{Maple-Sys26-3dPointPlot4} \includegraphics[height=0.25\textheight, width=0.45\textwidth]{Maple-Sys26-3dPointPlot3} \caption{3D Maple Point Plot produced grid sampling on Biomod-26\xspace (see Sect.~\ref{SEC:3d})\label{FIG:3dPointPlot}} \end{figure} \begin{figure}[p] \centering \includegraphics[height=0.25\textheight, width=0.45\textwidth]{Maple-Sys26-3dConvexHull1} \includegraphics[height=0.25\textheight, width=0.45\textwidth]{Maple-Sys26-3dConvexHull3} \caption{Convex Hull of the bistable points in Fig.~\ref{FIG:3dPointPlot}\label{FIG:3dConvexHull}} \end{figure} \section{Conclusion and Future Work} We described a new graph theoretical symbolic preprocessing method to reduce problems from the MAPK network. We experimented with two systems and found the reduction offered computation savings to both numerical and symbolic approaches for the determination of multistationarity regions of parameter space. In addition, the reduction avoided instability from rounding errors in the numerical approach \pagebreak to one system, but uncovered major problems in that approach for the other. An interesting side result is that, at least for the smaller system, the symbolic approach can compete with and even outperform the numerical one, demonstrating how far such methods have progressed in recent years. In future work we intend to combine the results of the present paper and our recent publication \cite{Bradford2017} to generate symbolic descriptions of the bistability region beyond the 1-parameter case. 
Another possible route to achieve this is to consider the effect of the various degrees of freedom within the algorithms used. For example, we have a free choice of variable ordering: Biomod-26\xspace has 11 variables corresponding to 39\,916\,800 possible orderings while Biomod-28\xspace has 16 variables corresponding to more than $10^{13}$ orderings. Heuristics exist to help with this choice \cite{DSS:04a} and machine learning may be applicable \cite{HEWDPB14}. Also, since MAPK problems contain many equational constraints, an approach as described in \cite{EBD15} may be applicable when higher dimensional CADs are needed. \section{Acknowledgements} D.~Grigoriev is grateful for the grant RSF 16-11-10075. H.~Errami, O.~Radulescu, and A.~Weber thank the French-German Procope-DAAD program for partial support of this research. M.~England and T.~Sturm are grateful to EU H2020-FETOPEN-2015-CSA 712689 SC\textsuperscript{2}. \vspace*{0.1in} \textbf{Research Data Statement:} Data supporting the research in this paper is available from \href{http://doi.org/10.5281/zenodo.807678}{doi:10.5281/zenodo.807678}. \bibliographystyle{splncs_srt}
\section{Introduction} \label{sec:Introduction} Model predictive control (MPC) is an advanced control strategy which is very prevalent in the current literature due to its inherent capability of providing constraint satisfaction while ensuring asymptotic stability of the target equilibrium point. In MPC, the control law is derived from an optimization problem in which a prediction model is used to predict the future evolution of the system over a prediction horizon \cite{Camacho_S_2013}. In order to provide asymptotic stability of the closed loop system, two ingredients are typically added to the MPC formulation: the \textit{terminal cost}, which penalizes a certain measure of discrepancy between the reference and the \textit{terminal state} (i.e. the predicted state at the end of the prediction horizon); and the \textit{terminal set}, which is computed as a positive invariant set of the closed loop system for the given reference \cite{Rawlings_MPC_2017}. Stability is ensured by imposing the terminal state to lie within the terminal set by the addition of a \textit{terminal constraint} to the MPC formulation. The use of a terminal set and terminal constraint leads to two downsides when the reference to be tracked can change online. The first issue is that the terminal set must be recomputed for the reference every time it changes. If there are a known-before-hand, finite number of references, then a terminal set can be computed offline for each one of them. Otherwise, it must be computed online each time the reference changes, which is typically very computationally demanding. The second issue is that the feasibility of the MPC problem can be lost in the event of a reference change, i.e. there may not be a feasible solution of the MPC optimization problem for the current state and the new reference. This issue is related to the domain of attraction of the MPC controller, i.e. 
the set of states for which the closed loop system is asymptotically stabilizable, since the feasibility is lost when the initial state is out of the domain of attraction of the MPC controller for the new reference. The terminal constraint is the main contributor to this issue when the prediction horizon is not large enough. To see this, note that the predicted state must be able to reach the terminal set within the prediction horizon window and that systems are typically subject to input constraints. These issues are of particular relevance when dealing with the online implementation of MPC in embedded systems. The severely limited computational and memory resources of these systems make them unsuitable for large prediction horizons and for the computation of positive invariant sets online. Possible solutions to mitigate this are to use \textit{explicit} MPC \cite{Bemporad_explicit_2019, Zeilinger_TAC_2011} or to avoid the computation of a positive invariant set by using a singleton as the terminal set as in \cite{Krupa_ECC_18}. However, the former approach does not scale well with the dimension of the system, and the latter may require a prohibitively large value of the prediction horizon in order to provide good closed loop performance and not suffer a loss of feasibility in the event of reference changes. There are plenty of other publications on the implementation of MPC in embedded systems, e.g. \cite{Huyck_MED_2012, Hartley_IETCST_2014, Shukla_SD_2017, Lucia_IETII_2018, Jerez_IETAC_2014} and \cite{Wang_TCST_2010}. However, the issues that arise when dealing with small prediction horizons, the recursive feasibility of the MPC controller, or the issue of the online computation of terminal sets are rarely discussed in detail in this particular field.
Another possible approach would be to use a formulation such as the \textit{MPC for tracking} (MPCT) \cite{Limon_A_2008, Ferramosca_A_2009}, which incorporates a steady state artificial reference into the optimization problem as a decision variable. This formulation offers a significant increase of the domain of attraction w.r.t. standard MPC formulations and only requires the computation of a single terminal set, valid for all references. Additionally, the asymptotic stability and recursive feasibility of the controller is guaranteed, even in the event of a sudden change of the reference \cite{Limon_TAC_2018}. However, as we show in Section \ref{sec:HMPC:performance}, the closed loop performance of the controller can suffer in certain systems if the prediction horizon is too small. In this paper we present an MPC formulation which we call \textit{harmonic based model predictive control for tracking} and label by \textit{HMPC}. This formulation, which was initially introduced in \cite{Krupa_CDC_19}, is of particular interest when dealing with short prediction horizons, as might be the case when working with embedded systems. As shown in the preliminary results \cite{Krupa_CDC_19}, it attains even greater domains of attraction than MPCT or other standard MPC controllers. Additionally, as we show and discuss in this paper by means of a case study using a ball and plate system, the HMPC controller can show a significant performance improvement when the prediction horizon is small. The improvement can be particularly significant for systems with integrator states and/or systems subject to slew rate constraints on their inputs, as is often the case with robotic and mechatronic systems. The idea behind this formulation is to substitute the artificial reference of the MPCT formulation by an \textit{artificial harmonic reference}, i.e. a periodic reference signal that is composed of a sine term, a cosine term and a constant.
The inclusion of this artificial harmonic reference is heavily influenced by the extensions of the MPCT formulation to tracking periodic references \cite{Limon_MPCTP_2016, Kohler_NMPC_18}. However, in this case, the reference to be tracked is a (piecewise) constant set-point. The control law of HMPC is derived from the solution of a second order cone programming problem. This class of convex optimization problem is common in the literature and can be solved by several efficient algorithms \cite{Domahidi_ECOS_2013, Garstka_COSMO_19}. In particular, we use the solver COSMO \cite{Garstka_COSMO_19}. A key property of the HMPC controller is that it retains the recursive feasibility and asymptotic stability features of the MPCT formulation, even in the event of reference changes, as we formally prove in this paper. Moreover, as is also the case with certain versions of the MPCT formulation (in particular the one we highlight in this manuscript), it does not require the computation of a terminal set or a terminal cost. This paper extends the results of \cite{Krupa_CDC_19} by showing the performance advantages of the HMPC controller, formally proving its asymptotic stability and by including the proof of its recursive feasibility. Additionally, we provide some guidelines for the design of one of its main ingredients: the frequency of the artificial harmonic reference. The paper is organized as follows. Section \ref{sec:Problem:Formulation} describes the class of system under consideration and the control objective. The MPCT controller is described in Section \ref{sec:MPCT}. The proposed controller is presented in Section \ref{sec:HMPC}, along with the theorems stating its recursive feasibility and asymptotic stability. A comparison of the closed loop performance of these two controllers is presented in Section \ref{sec:HMPC:performance}. Guidelines for the selection of the frequency of the artificial harmonic reference are shown in Section \ref{sec:selection:w}.
Finally, conclusions are drawn in Section \ref{sec:conclusions}. \subsubsection*{Notation} The relative interior of a set $\cc{X}$ is denoted by $\ri{\cc{X}}$. The set of integer numbers is denoted by $\N$. Given two integers $i$ and $j$ with ${j \geq i}$, $\N_i^j$ denotes the set of integer numbers from $i$ to $j$, i.e. ${\N_i^j \doteq \{i, i+1, \dots, j-1, j\}}$. Given two vectors $x$ and $y$, $x \leq (\geq) \; y$ denotes componentwise inequalities. The set of positive definite matrices of dimension $n$ is given by $\Sp{n}$, whereas $\Dp{n}$ is the set of \textit{diagonal} positive definite matrices of dimension $n$. Given vectors $x_j$ defined for a (finite) index set $j \in \cc{J} \subset \N$, we denote by a bold $\vv{x}$ their Cartesian product. We denote a (non-finite) sequence of vectors $x_j$ indexed by $j \in \N$ by $\{x\}$. Given a vector $x\inR{n}$, we denote its $i$-th component using a parenthesized subindex $x_{(i)}$. The set of non-negative real numbers is denoted by $\R_+$. A function $\alpha: \R_+ \rightarrow \R$ is a $\cc{K}_\infty$-class function if it is continuous, strictly increasing, unbounded above and $\alpha(0) = 0$. Given a symmetric matrix $A$, we denote by $\lambda_\text{max}(A)$ and $\lambda_\text{min}(A)$ its largest and smallest eigenvalues, respectively. Given two vectors $x\inR{n}$ and $y\inR{n}$, their standard inner product is denoted by $\sp{x}{y} \doteq \Sum{i=1}{n} x_{(i)} y_{(i)}$. For a vector $x\inR{n}$ and a matrix $A\in\Sp{n}$, $\|x\| \doteq \sqrt{\sp{x}{x}}$ and $\|x\|_A$ denotes the weighted Euclidean norm $\|x\|_A \doteq \sqrt{\sp{x}{A x}}$. 
\section{Problem formulation} \label{sec:Problem:Formulation} We consider a controllable linear time-invariant system described by the following discrete state space model, \begin{subequations} \label{eq:Model} \begin{align} x_{k+1} &= A x_k + B u_k \label{eq:Model_x}\\ z_k &= C x_k + D u_k, \label{eq:Model_z} \end{align} \end{subequations} where $x_k \inR{n}$, $u_k \inR{m}$ and $z_k \inR{n_z}$ are the state, control input and constrained variables at sample time $k$, respectively. The constrained variables $z_k$ are subject to the following box constraint, \begin{equation} \label{eq:Constraints} z_k \in \cc{Z} \doteq \set{z\inR{n_z}}{z_m \leq z \leq z_M}, \end{equation} where $z_m$ and $z_M$ are the lower and upper bounds. In the following we will use the slight abuse of notation $(x, u) \in \cc{Z}$ to denote $C x + D u \in \cc{Z}$. We are interested in controllers capable of steering the system to the given reference $(x_r, u_r)$ while satisfying the system constraints \eqref{eq:Constraints}. This will only be possible if the reference is an \textit{admissible} steady state of the system, as defined in the following definition. Otherwise, we wish the system to be steered to the ``closest'' admissible steady state to the reference, for some given criterion of closeness. \begin{definition} \label{def:Admissible} An ordered pair $(x_a, u_a) \in \R^n \times \R^m$ is said to be admissible for system (\ref{eq:Model}) subject to (\ref{eq:Constraints}) if \bRlist \item $x_a = A x_a + B u_a$, i.e. it is a steady state of system (\ref{eq:Model}). \item $C x_a + D u_a \in \ri{\cc{Z}}$. \eRlist \end{definition} \begin{remark} \label{rem:Admissible} We note that the imposition of condition \textit{(ii)} in the previous definition, instead of $(x_a, u_a) \in \cc{Z}$, is necessary to avoid the possible controllability loss when the constraints are active at the equilibrium point \cite{Limon_A_2008}. 
In a practical setting, this restriction is typically substituted by defining vectors $\hat{z}_m \doteq z_m + \epsilon$ and $\hat{z}_M \doteq z_M - \epsilon$, where $\epsilon \inR{n_z}$ is some arbitrarily small positive vector, and imposing ${(x_a, u_a) \in \hat{\cc{Z}} \doteq \set{z}{\hat{z}_m \leq z \leq \hat{z}_M} \subset \ri{\cc{Z}}}$ instead. This way, we can work with the closed set $\hat{\cc{Z}}$ instead of the open set $\ri{\cc{Z}}$. \end{remark} \section{Model Predictive Control for tracking} \label{sec:MPCT} This section recalls the MPC \textit{for tracking} (MPCT) formulation \cite{Ferramosca_A_2009}, which is the basis of the controller we propose in Section \ref{sec:HMPC}. In MPCT, an artificial reference is included as an additional decision variable of the optimization problem. This inclusion provides several benefits, such as \textit{(i)} a significant increase of the domain of attraction w.r.t. standard MPC formulations, \textit{(ii)} recursive feasibility, even in the event of reference changes, and \textit{(iii)} the use of a terminal set which is valid for any reference. In what follows, we present an MPCT formulation which uses a singleton as the terminal set and which does not require a terminal cost. 
For a given prediction horizon $N$, the MPCT control law for a given state $x$ and reference $(x_r, u_r)$ is derived from the solution of the following convex optimization problem labeled by $\lT(x; x_r, u_r)$, \begin{subequations} \label{eq:MPCT} \begin{align} \lT(x; x_r, u_r) \doteq &\min\limits_{\vv{x}, \vv{u}, x_a, u_a} \; J(\vv{x}, \vv{u}, x_a, u_a; x_r, u_r) \\ s.t.& \; x_{j+1} = A x_j + B u_j, \; j\in\N_0^{N-1} \\ & \; z_m \leq C x_j + D u_j \leq z_M, \; j\in\N_0^{N-1} \\ & \; x_0 = x \\ & \; x_N = x_a, \label{eq:MPCT:Terminal} \\ & \; x_a = A x_a + B u_a \label{eq:MPCT:Steady:State}\\ & \; \hat{z}_m \leq C x_a + D u_a \leq \hat{z}_M \label{eq:MPCT:z_a} \end{align} \end{subequations} where the decision variables are the predicted states and inputs $\vv{x} = ( x_0, \dots, x_{N-1} )$, $\vv{u} = ( u_0, \dots, u_{N-1} )$ and the artificial reference $(x_a, u_a)$. The cost function $J(\vv{x}, \vv{u}, x_a, u_a; x_r, u_r)$ is composed of two terms: the summation of stage costs \begin{equation*} \label{eq:State:Cost:MPCT} \ell_T(\vv{x}, \vv{u}, x_a, u_a) = \sum\limits_{j=0}^{N-1} \| x_j - x_a \|_{Q}^{2} + \sum\limits_{j=0}^{N-1} \| u_j - u_a \|_{R}^{2}, \end{equation*} which penalizes the distance between the predicted states $x_j$ and inputs $u_j$ and the artificial reference by means of the cost function matrices $Q \in \Sp{n}$ and $R \in \Sp{m}$; and the offset cost \begin{equation} \label{eq:Terminal:Cost:MPCT} V_T(x_a, u_a; x_r, u_r) = \| x_a - x_r \|^{2}_{T_a} + \| u_a - u_r \|^{2}_{S_a}, \end{equation} which penalizes the distance between the artificial reference $(x_a, u_a)$ and $(x_r, u_r)$ by means of the cost function matrices ${T_a \in \Sp{n}}$ and ${S_a \in \Sp{m}}$. In general, the offset cost function can be any convex function \cite{Ferramosca_A_2009}. Since this paper is interested in MPC formulations suitable for their implementation in embedded systems, we take the simple quadratic function \eqref{eq:Terminal:Cost:MPCT}.
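To illustrate the structure of the MPCT problem, the following sketch solves it for an assumed double integrator example with the inequality constraints ignored (i.e. assumed inactive), in which case \eqref{eq:MPCT} reduces to an equality-constrained quadratic program solvable through its KKT system; all numerical values are illustrative.

```python
import numpy as np

# Assumed example: double integrator, N = 5, inequality constraints ignored.
A = np.array([[1., 1.], [0., 1.]]); B = np.array([[0.5], [1.]])
n, m, N = 2, 1, 5
Q, R = np.eye(n), 0.1 * np.eye(m)
Ta, Sa = 10 * np.eye(n), np.eye(m)
x0 = np.array([0., 0.])
xr, ur = np.array([3., 0.]), np.array([0.])

nv = (N + 1) * n + N * m + n + m          # x_0..x_N, u_0..u_{N-1}, x_a, u_a
xi = lambda j: slice(j * n, (j + 1) * n)
ui = lambda j: slice((N + 1) * n + j * m, (N + 1) * n + (j + 1) * m)
ia = slice((N + 1) * n + N * m, (N + 1) * n + N * m + n)   # x_a
ja = slice((N + 1) * n + N * m + n, nv)                    # u_a

def sel(sl):                              # selector matrix for a slice
    E = np.zeros((sl.stop - sl.start, nv))
    E[:, sl] = np.eye(sl.stop - sl.start)
    return E

H, q = np.zeros((nv, nv)), np.zeros(nv)
for j in range(N):                        # stage costs ||x_j - x_a||_Q^2 etc.
    Dx, Du = sel(xi(j)) - sel(ia), sel(ui(j)) - sel(ja)
    H += Dx.T @ Q @ Dx + Du.T @ R @ Du
H += sel(ia).T @ Ta @ sel(ia) + sel(ja).T @ Sa @ sel(ja)   # offset cost
q += sel(ia).T @ Ta @ xr + sel(ja).T @ Sa @ ur

rows, rhs = [sel(xi(0))], [x0]            # x_0 = x
for j in range(N):                        # dynamics
    Gj = np.zeros((n, nv))
    Gj[:, xi(j)] = A; Gj[:, ui(j)] = B; Gj[:, xi(j + 1)] = -np.eye(n)
    rows.append(Gj); rhs.append(np.zeros(n))
rows.append(sel(xi(N)) - sel(ia)); rhs.append(np.zeros(n))         # x_N = x_a
Gs = np.zeros((n, nv)); Gs[:, ia] = A - np.eye(n); Gs[:, ja] = B   # steady x_a
rows.append(Gs); rhs.append(np.zeros(n))
G, g = np.vstack(rows), np.hstack(rhs)

# KKT system of the equality-constrained QP
K = np.block([[H, G.T], [G, np.zeros((G.shape[0], G.shape[0]))]])
v = np.linalg.solve(K, np.hstack([q, g]))[:nv]
print(v[ia], v[ja])   # artificial reference (x_a, u_a): a steady state near x_r
```

The offset cost pulls the artificial reference towards $(x_r, u_r)$ while the stage costs pull it towards the reachable trajectory, which is the mechanism behind the enlarged domain of attraction of MPCT.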
Note that equations \eqref{eq:MPCT:Steady:State} and \eqref{eq:MPCT:z_a}, where $\hat{z}_m$ and $\hat{z}_M$ are obtained as in Remark \ref{rem:Admissible}, guarantee that $(x_a, u_a)$ is an admissible reference (see Def. \ref{def:Admissible}). \begin{remark} \label{rem:x_N} We note that even though $x_N$ is not a decision variable of the MPCT optimization problem (since it can be expressed as $A x_{N-1} + B u_{N-1}$), we have nevertheless included it in \eqref{eq:MPCT} in order to ease the notation. \end{remark} The MPCT formulation guarantees that the closed-loop system asymptotically converges to an admissible steady state of the system so long as the problem is initially feasible, regardless of whether or not the reference $(x_r, u_r)$ is admissible \cite{Limon_A_2008, Ferramosca_A_2009}. In fact, if the reference is not admissible, the system will converge to the admissible steady state that minimizes the offset cost \eqref{eq:Terminal:Cost:MPCT}. \section{Harmonic based MPC for tracking} \label{sec:HMPC} This section presents the main contribution of the paper: a harmonic based MPC formulation for tracking. The idea behind this formulation is to substitute the artificial reference of MPCT with the artificial harmonic reference sequences $\{x_h\}$, $\{u_h\}$, whose values at each discrete time instance $j\in\N$ are given by, \begin{subequations} \label{eq:harmonic:signals} \begin{align} x_{h,j} &= \xe + x_s \sin(w (j{-}N)) + x_c \cos(w (j{-}N)), \label{eq:x_h}\\ u_{h,j} &= \ue + u_s \sin(w (j{-}N)) + u_c \cos(w (j{-}N)), \label{eq:u_h} \end{align} \end{subequations} where $w > 0$ is the base frequency. The harmonic sequences $\{x_h\}$ and $\{u_h\}$ are parameterized by the decision variables $\xe$, $x_s$, ${x_c \inR{n}}$ and $\ue$, $u_s$, ${u_c \inR{m}}$.
To simplify the text, we use the following notation, \begin{align*} &\xH \doteq (\xe, x_s, x_c) \in \R^n \times \R^n \times \R^n, \\ &\uH \doteq (\ue, u_s, u_c) \in \R^m \times \R^m \times \R^m, \end{align*} \vspace{-2em} \begin{align} \label{eq:def:z} \bmat{ccc} z_e & z_s & z_c \emat \doteq \bmat{cc} C & D \emat \bmat{ccc} \xe & x_s & x_c \\ \ue & u_s & u_c \emat. \end{align} For a given prediction horizon $N$ and base frequency $w$, the HMPC control law for a given state $x$ and reference $(x_r, u_r)$ is derived from the following second order cone programming problem labeled by $\lH(x; x_r, u_r)$, \begin{subequations} \label{eq:HMPC} \begin{align} \moveEq{-12} \lH(&x; x_r, u_r) \doteq \min\limits_{\vv{x},\vv{u}, \xH, \uH} \; \costFunH(\vv{x}, \vv{u}, \xH, \uH; x_r, u_r) \\ \moveEq{-12} &s.t.\; x_{j+1} = A x_j + B u_j, \; j\in\N_0^{N-1} \label{eq:HMPC:dynamics}\\ & z_m \leq C x_j + D u_j \leq z_M, \; j\in\N_0^{N-1} \label{ineq:HMPC:z:first}\\ & x_0 = x \label{eq:HMPC:cond:inic}\\ & x_N = \xe + x_c \label{eq:HMPC:xN}\\ & \xe = A \xe + B \ue \label{eq:HMPC:xe}\\ & x_s \cos(w) - x_c \sin(w) = A x_s + B u_s, \label{eq:HMPC:xs}\\ & x_s \sin(w) + x_c \cos(w) = A x_c + B u_c, \label{eq:HMPC:xc}\\ & \sqrt{ z_{s (i)}^2 + z_{c (i)}^2 } \leq z_{e (i)} - \hat{z}_{m (i)}, \; i\in\N_1^{n_z} \label{ineq:HMPC:z:minus}\\ & \sqrt{ z_{s (i)}^2 + z_{c (i)}^2 } \leq \hat{z}_{M (i)} - z_{e (i)}, \; i\in\N_1^{n_z}, \label{ineq:HMPC:z:plus} \end{align} \end{subequations} where $\vv{x} = \{ x_0, \dots, x_{N-1} \}$, $\vv{u} = \{ u_0, \dots, u_{N-1} \}$, and the cost function $\costFunH(\vv{x}, \vv{u}, \xH, \uH; x_r, u_r)$ is composed of two terms: the summation of stage costs \begin{equation*} \label{eq::HMPC:Stage:Cost} \stageCostH(\vv{x}, \vv{u}, \xH, \uH) = \Sum{j=0}{N-1} \| x_j - x_{h,j} \|_Q^2 + \| u_j - u_{h,j} \|_R^2, \end{equation*} where $Q\in\Sp{n}$ and $R\in\Sp{m}$; and the
offset cost \begin{align} \label{eq:HMPC:Offset:Cost} \moveEq{-10} \offsetCostH(\xH, \uH &; x_r, u_r) = \| \xe - x_r \|_{T_e}^2 + \| \ue - u_r \|_{S_e}^2 \nonumber \\ &+ \| x_s \|_{T_h}^2 + \| x_c \|_{T_h}^2 + \| u_s \|_{S_h}^2 + \| u_c \|_{S_h}^2, \end{align} where $T_e\in\Sp{n}$, $T_h \in\Dp{n}$, $S_e\in\Sp{m}$, and $S_h \in\Dp{m}$. We note that Remark \ref{rem:x_N} is also applicable here. The optimal value of \eqref{eq:HMPC} for a given state $x$ and reference $(x_r, u_r)$ is denoted by $\lH^*(x; x_r, u_r) = \costFunH(\vv{x}^*, \vv{u}^*, \xH^*, \uH^*)$, where $\vv{x}^*$, $\vv{u}^*$, $\xH^*$, $\uH^*$ are the arguments that minimize \eqref{eq:HMPC}. At each sample time $k$, the HMPC control law is given by $u_k = u_0^*$, obtained from the solution of $\lH(x_k; x_r, u_r)$. Note that the constraints \eqref{eq:HMPC:dynamics}-\eqref{ineq:HMPC:z:plus} do not depend on the reference. Therefore, the feasibility region of the HMPC controller, i.e. the set of states $x$ for which $\lH(x; x_r, u_r)$ is feasible, is independent of the reference. As such, feasibility is never lost in the event of reference changes. Theorem \ref{theo:HMPC:Stability} states the asymptotic stability of the HMPC controller to the \textit{optimal artificial harmonic reference}, which is defined and characterized below in Definition \ref{def:optimal:artificial:reference:HMPC}. In order to prove it, we first prove the recursive feasibility of the HMPC controller, which is stated in Theorem \ref{theo:Recursive:Feasibility}. This theorem was originally stated in \cite[Theorem 1]{Krupa_CDC_19} without its proof, which we include in Appendix \ref{app:proof:recursive:feasibility} of this manuscript. 
\begin{definition}[Optimal artificial harmonic reference] \label{def:optimal:artificial:reference:HMPC} Given a reference $(x_r, u_r)$, we define the \textit{optimal artificial harmonic reference} of the HMPC controller as the harmonic sequences $\{x_h^\circ\}$, $\{u_h^\circ\}$ (see \eqref{eq:harmonic:signals}) parameterized by the unique solution $(\xH^\circ, \uH^\circ)$ of the strongly convex optimization problem \begin{align} \label{eq:OP:optimal:harmonic:refrefence:HMPC} (\xH^\circ, \uH^\circ) = &\arg\min\limits_{\xH, \uH} \offsetCostH(\xH, \uH; x_r, u_r) \\ &s.t. \eqref{eq:HMPC:xe} \text{-} \eqref{ineq:HMPC:z:plus}. \nonumber \end{align} Additionally, we denote by $\offsetCostH^\circ(x_r, u_r) \doteq \offsetCostH(\xH^\circ, \uH^\circ; x_r, u_r)$ the optimal value of problem \eqref{eq:OP:optimal:harmonic:refrefence:HMPC}. \end{definition} The following lemma states that the \textit{optimal artificial harmonic reference} is in fact an admissible steady state of system \eqref{eq:Model} subject to \eqref{eq:Constraints}, i.e. $x_{h,j}^\circ = x_e^\circ$, $u_{h,j}^\circ = u_e^\circ$ and $(x_{h,j}^\circ, u_{h,j}^\circ) \in \hat{\cc{Z}}$, $\forall j\in\N$. \begin{lemma} \label{lemma:optimal:artificial:reference:HMPC} Consider optimization problem \eqref{eq:OP:optimal:harmonic:refrefence:HMPC}. Then, for any $(x_r, u_r)$, its optimal solution is the equilibrium point $(x_e^\circ, u_e^\circ) \in \hat{\cc{Z}}$ that minimizes $\| x_e - x_r \|^2_{T_e} + \| u_e - u_r \|^2_{S_e}$. That is, $\vv{x}_H^\circ = (x_e^\circ, 0, 0)$ and $\vv{u}_H^\circ = (u_e^\circ, 0, 0)$. \end{lemma} \begin{proof} We prove the lemma by contradiction.
Assume that $\hat{\vv{x}}_H^\circ = (x_e^\circ, x_s^\circ, x_c^\circ)$, $\hat{\vv{u}}_H^\circ = (u_e^\circ, u_s^\circ, u_c^\circ)$, is the optimal solution of \eqref{eq:OP:optimal:harmonic:refrefence:HMPC} and that at least some (if not all) of $x_s^\circ$, $x_c^\circ$, $u_s^\circ$, $u_c^\circ \neq 0$. First, we show that $\vv{x}_H^\circ = (x_e^\circ, 0, 0)$, $\vv{u}_H^\circ = (u_e^\circ, 0, 0)$ satisfy \eqref{eq:HMPC:xe}-\eqref{ineq:HMPC:z:plus}. Constraints \eqref{eq:HMPC:xs} and \eqref{eq:HMPC:xc} are trivially satisfied and \eqref{eq:HMPC:xe} is satisfied since $(\hat{\vv{x}}_H^\circ, \hat{\vv{u}}_H^\circ)$ is assumed to be the solution of \eqref{eq:OP:optimal:harmonic:refrefence:HMPC}. Moreover, since $$0 \leq \sqrt{ (z_{s (i)}^\circ)^2 + (z_{c (i)}^\circ)^2},\; \forall i\in\N_1^{n_z},$$ we have that \eqref{ineq:HMPC:z:minus} and \eqref{ineq:HMPC:z:plus} are also satisfied for $(\vv{x}_H^\circ, \vv{u}_H^\circ)$. Finally, it is clear from the initial assumption and \eqref{eq:HMPC:Offset:Cost} that $$\offsetCostH(\vv{x}_H^\circ, \vv{u}_H^\circ; x_r, u_r) < \offsetCostH(\hat{\vv{x}}_H^\circ, \hat{\vv{u}}_H^\circ; x_r, u_r),$$ contradicting the optimality of $(\hat{\vv{x}}_H^\circ, \hat{\vv{u}}_H^\circ)$. The fact that $(x_e^\circ, u_e^\circ) \in \hat{\cc{Z}}$ follows from the satisfaction of \eqref{ineq:HMPC:z:minus}-\eqref{ineq:HMPC:z:plus}. \end{proof} The following theorem states the recursive feasibility of the HMPC controller. That is, suppose that a state $x$ belongs to the feasibility region of the HMPC controller. Then, for any feasible solution $\vv{x}$, $\vv{u}$, $\vv{x}_H$ and $\vv{u}_H$ of $\lH(x; x_r, u_r)$ we have that the successor state $A x + B u_0$ also belongs to the feasibility region of the HMPC controller. The first claim of the theorem states that feasible solutions of the HMPC controller provide constraint satisfaction for all future predicted states. The second claim states the recursive feasibility property of the HMPC controller. 
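As an aside, the contradiction argument in the proof of Lemma \ref{lemma:optimal:artificial:reference:HMPC} can be illustrated numerically: zeroing nonzero harmonic components leaves the norm constraints satisfiable (their left-hand sides become zero) and strictly decreases the offset cost \eqref{eq:HMPC:Offset:Cost}, since $T_h$ and $S_h$ are positive definite. A minimal sketch with illustrative (untuned) weight values:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 4, 2

# Positive-definite weights as in the offset cost (values are illustrative)
T_e, S_e = np.eye(n), np.eye(m)
T_h, S_h = 2.0 * np.eye(n), 0.5 * np.eye(m)

def offset_cost(x_e, u_e, x_s, x_c, u_s, u_c, x_r, u_r):
    """Offset cost: ||x_e - x_r||_Te^2 + ||u_e - u_r||_Se^2 + harmonic penalties."""
    q = lambda v, W: float(v @ W @ v)
    return (q(x_e - x_r, T_e) + q(u_e - u_r, S_e)
            + q(x_s, T_h) + q(x_c, T_h) + q(u_s, S_h) + q(u_c, S_h))

# A candidate with nonzero harmonic components
x_e, u_e = rng.normal(size=n), rng.normal(size=m)
x_r, u_r = rng.normal(size=n), rng.normal(size=m)
x_s, x_c = rng.normal(size=n), rng.normal(size=n)
u_s, u_c = rng.normal(size=m), rng.normal(size=m)

with_h = offset_cost(x_e, u_e, x_s, x_c, u_s, u_c, x_r, u_r)
# Zeroing the harmonic terms strictly lowers the cost, contradicting optimality
without_h = offset_cost(x_e, u_e, 0 * x_s, 0 * x_c, 0 * u_s, 0 * u_c, x_r, u_r)
assert without_h < with_h
```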
An important conclusion drawn from its proof (see also Properties \ref{prop:periodic:dynamics} and \ref{prop:bounds} in Appendix \ref{app:properties}) is that the artificial harmonic reference \eqref{eq:harmonic:signals} obtained from any feasible solution of $\lH(x; x_r, u_r)$ satisfies the system dynamics $x_{h, j+1} {=} A x_{hj} + B u_{hj}$ and constraints $(x_{hj}, u_{hj}) \in \hat{\cc{Z}}$, $\forall j \in \N$. \begin{theorem}[Recursive feasibility of the HMPC controller] \label{theo:Recursive:Feasibility} Suppose that $x$ belongs to the feasibility region of the HMPC controller. Suppose also that ${\bar{\vv{x}} = \{\bar{x}_0, \ldots, \bar{x}_{N-1}\}}$, ${\bar{\vv{u}} = \{\bar{u}_0, \ldots, \bar{u}_{N-1}\}}$, $\xe$, $x_s$, $x_c$, $\ue$, $u_s$, $u_c$, constitute a feasible solution to the constraints (\ref{eq:HMPC:dynamics}) to (\ref{ineq:HMPC:z:plus}). Then, \bRlist \item The control input sequence $\{u\}$ defined as \begin{equation} u_j = \bsis{l} \bar{u}_j, \; j\in\N_0^{N-1} \\ u_{hj}, \; j \geq N, \\ \esis \label{eq:def:u:j} \end{equation} where $u_{hj}$ is given by \eqref{eq:u_h}, and the trajectory $\{x\}$ defined as $x_0=x$, $$ x_{j+1} = A x_j + B u_j, \;j\geq 0, $$ satisfies \begin{equation} \label{ineq:z:totales} z_m \leq C x_j + D u_j \leq z_M , \forall j\geq 0. \end{equation} \item The successor state $Ax+B\bar{u}_0$ also belongs to the feasibility region of the HMPC controller. \eRlist \end{theorem} \begin{proof} \renewcommand{\qedsymbol}{} See Appendix \ref{app:proof:recursive:feasibility}.
\end{proof} Theorem \ref{theo:HMPC:Stability} states the asymptotic stability of the HMPC controller to the \textit{optimal artificial harmonic reference} (Def. \ref{def:optimal:artificial:reference:HMPC}). The proof is based on the following well-known Lyapunov stability theorem \cite[Appendix B.3]{Rawlings_MPC_2017}. \begin{theorem}[Lyapunov stability] \label{theo:Lyapunov:Stability} Consider an autonomous system $z_{k+1} = f(z_k)$ where the state $z_k \inR{n}$. Let $\Gamma$ be a positive invariant set and $\Omega \subseteq \Gamma$ be a compact set, both including the origin as an interior point. If there exists a function $W:\R^n \rightarrow \R_+$ and suitable $\cc{K}_\infty$-class functions $\alpha_1(\cdot)$ and $\alpha_2(\cdot)$ such that, \bRlist \item $W(z_k) \geq \alpha_1(\| z_k \|), \; \forall z_k \in \Gamma$ \item $W(z_k) \leq \alpha_2(\| z_k \|), \; \forall z_k \in \Omega$ \item $W(z_{k+1}) - W(z_k) < 0, \forall z_k \in \Gamma \setminus \{0\}$ \\ and $W(z_{k+1}) - W(z_k) = 0$ if $z_k = 0$, \eRlist then $W(\cdot)$ is a Lyapunov function for $z_{k+1} = f(z_k)$ in $\Gamma$ and the origin is stable for all initial states in $\Gamma$. \end{theorem} \begin{theorem}[Asymptotic stability of the HMPC controller] \label{theo:HMPC:Stability} Consider a controllable system \eqref{eq:Model} subject to \eqref{eq:Constraints} and assume that $N$ is greater than its controllability index. Then, for any reference $(x_r, u_r)$ and initial state $x$ belonging to the feasibility region of the HMPC controller, the system controlled by the HMPC control law derived from the solution of \eqref{eq:HMPC} is stable, fulfills the system constraints \eqref{eq:Constraints} for all future time instants and asymptotically converges to the \textit{optimal artificial harmonic reference} $\{x_h^\circ\}$, $\{u_h^\circ\}$ (see Def.
\ref{def:optimal:artificial:reference:HMPC}), which by Lemma \ref{lemma:optimal:artificial:reference:HMPC} is the admissible steady state $(x_e^\circ, u_e^\circ)$. \end{theorem} \begin{proof} \renewcommand{\qedsymbol}{} See Appendix \ref{app:proof:stability:HMPC}. \end{proof} Note that, as stated in Theorem \ref{theo:HMPC:Stability}, the HMPC controller provides asymptotic convergence to an admissible steady state regardless of whether the reference is itself an admissible steady state or not. As is also the case with the MPCT controller, it can be shown that the HMPC controller provides convergence to the reference $(x_r, u_r)$ if it is admissible, and that it converges to the steady state $(x_e^\circ, u_e^\circ)$ that minimizes the distance $\| x_e^\circ - x_r \|_{T_e}^2 + \| u_e^\circ - u_r \|_{S_e}^2$ otherwise \cite{Limon_TAC_2018}. \section{Closed-loop comparison between the HMPC and MPCT controllers} \label{sec:HMPC:performance} This section presents results of controlling a ball and plate system with the HMPC and MPCT controllers. The MPCT controller is included to highlight the fact that, for certain systems, and especially for low values of the prediction horizon, its closed-loop performance can suffer because the predicted state $x_N$ must reach an admissible steady state of the system (see constraint \eqref{eq:MPCT:Terminal}). The results with the HMPC controller, and our subsequent discussion in Section \ref{sec:simulation}, suggest that HMPC may provide a significant improvement over MPCT in this regard. \subsection{Ball and plate system} \label{sec:ball:and:plate} The ball and plate system consists of a plate that pivots around its center point such that its slope can be manipulated by changing the angle of its two perpendicular axes. The objective is to control the position of a solid ball that rests on the plate. We assume that the ball is always in contact with the plate and that it does not slip when moving.
The non-linear equations of the system are \cite{Wang_ISA_2014}, \begin{align*} \ddot{\zb}_1 &= \frac{m}{m + I_b/r^2} \left( \zb_1 \dot{\theta}_1^2 + \zb_2 \dot{\theta}_1 \dot{\theta}_2 + g \sin{\theta_1} \right) \\ \ddot{\zb}_2 &= \frac{m}{m + I_b/r^2} \left( \zb_2 \dot{\theta}_2^2 + \zb_1 \dot{\theta}_1 \dot{\theta}_2 + g \sin{\theta_2} \right), \end{align*} where $m$, $r$ and $I_b$ are the mass, radius and mass moment of inertia of a solid ball, respectively; $\zb_1$ and $\zb_2$ are the position of the ball on the two axes of the plate relative to its center point; $\dot{\zb}_1$, $\dot{\zb}_2$, $\ddot{\zb}_1$ and $\ddot{\zb}_2$ their corresponding velocities and accelerations; $\theta_1$ and $\theta_2$ are the angle of the plate on each of its axes; and $\dot{\theta}_1$ and $\dot{\theta}_2$ their corresponding angular velocities. The state of the system is given by \begin{equation*} x = (\zb_1, \dot{\zb}_1, \theta_1, \dot{\theta}_1, \zb_2, \dot{\zb}_2, \theta_2, \dot{\theta}_2), \end{equation*} and the control input $u = (\ddot{\theta}_1, \ddot{\theta}_2)$ is the angular acceleration of the plate in each one of its axes. We consider the following constraints on the velocity, angles and control inputs, \begin{align*} |\dot{\zb}_i| \leq 0.5\,\text{m/s}, \; |\theta_i| \leq \frac{\pi}{4}\,\text{rad}, \; |\ddot{\theta}_i| \leq 0.4\,\text{rad/s}^2, \; i \in\N_1^2. \end{align*} A linear time-invariant discrete-time model \eqref{eq:Model} of the system is obtained by linearizing its non-linear equations taking the origin as the operating point and discretizing with a sample time of $0.2$s. We will use this linear model as the prediction model of the MPC controllers and as the model used to simulate the system. We take $m = 0.05\,$kg, $r = 0.01\,$m, $g = 9.81\,$m/s$^2$ and $I_b = (2/5) m r^2 = 2\cdot 10^{-6}\,$kg$\cdot$m$^2$.
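The linearized discrete-time model can be reconstructed with standard tools. The sketch below is an assumption about how such a model is obtained (it is not the authors' code): around the origin, $\sin\theta \approx \theta$ and the velocity cross terms vanish, giving per-axis dynamics $\ddot{z} = k g \theta$, $\ddot{\theta} = u$ with $k = m/(m + I_b/r^2)$, which is then discretized with a zero-order hold and stacked block-diagonally for the two decoupled axes, using the state ordering of the paper.

```python
import numpy as np
from scipy.signal import cont2discrete
from scipy.linalg import block_diag

m, r, g = 0.05, 0.01, 9.81
I_b = (2 / 5) * m * r**2
k = m / (m + I_b / r**2)  # = 5/7: effective gain of a rolling solid ball

# Per-axis linearization around the origin, state (z, z_dot, theta, theta_dot)
A_c = np.array([[0, 1, 0,     0],
                [0, 0, k * g, 0],
                [0, 0, 0,     1],
                [0, 0, 0,     0.0]])
B_c = np.array([[0], [0], [0], [1.0]])

Ts = 0.2  # sample time used in the paper
A_d, B_d, *_ = cont2discrete((A_c, B_c, np.eye(4), np.zeros((4, 1))),
                             Ts, method='zoh')

# The two axes decouple after linearization: stack them block-diagonally
A = block_diag(A_d, A_d)
B = block_diag(B_d, B_d)
assert A.shape == (8, 8) and B.shape == (8, 2)

# Any ball position with zero velocity and a level plate is a steady state
x_e = np.array([1.8, 0, 0, 0.0])
assert np.allclose(A_d @ x_e, x_e)
```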
\subsection{Performance comparison between HMPC and MPCT} \label{sec:simulation} \begin{figure}[t] \centering \includegraphics[width=\columnwidth]{computation_times.eps} \caption{Computation times of the COSMO solver.} \label{fig:computation:times} \end{figure} \begin{figure*}[t] \centering \begin{subfigure}[ht]{0.48\textwidth} \includegraphics[width=\linewidth]{joint_pos_1.eps} \caption{Position of ball on axis 1.} \label{fig:comparison:position} \end{subfigure}% \hfil \begin{subfigure}[ht]{0.48\textwidth} \includegraphics[width=\linewidth]{joint_vel_1.eps} \caption{Velocity of ball on axis 1.} \label{fig:comparison:velocity} \end{subfigure}% \begin{subfigure}[ht]{0.48\textwidth} \includegraphics[width=\linewidth]{joint_input_1.eps} \caption{Control input on axis 1.} \label{fig:comparison:input} \end{subfigure}% \hfil \begin{subfigure}[ht]{0.48\textwidth} \includegraphics[width=\linewidth]{joint_positions.eps} \caption{Position of ball on the plate.} \label{fig:comparison:plate} \end{subfigure}% \caption{Closed-loop comparison between HMPC and MPCT.} \label{fig:comparison} \end{figure*} \begin{figure*}[t] \centering \begin{subfigure}[ht]{0.48\textwidth} \includegraphics[width=\linewidth]{HMPC_vel_iter.eps} \caption{HMPC: Velocity of ball on axis 1.} \label{fig:iter:HMPC:vel} \end{subfigure}% \hfil \begin{subfigure}[ht]{0.48\textwidth} \includegraphics[width=\linewidth]{HMPC_pos_iter.eps} \caption{HMPC: Position of ball on plate.} \label{fig:iter:HMPC:pos} \end{subfigure}% \begin{subfigure}[ht]{0.48\textwidth} \includegraphics[width=\linewidth]{MPCT_vel_iter.eps} \caption{MPCT: Velocity of ball on axis 1.} \label{fig:iter:MPCT:vel} \end{subfigure}% \hfil \begin{subfigure}[ht]{0.48\textwidth} \includegraphics[width=\linewidth]{MPCT_pos_iter.eps} \caption{MPCT: Position of ball on plate.} \label{fig:iter:MPCT:pos} \end{subfigure}% \caption{Snapshot of HMPC and MPCT at iteration 15.} \label{fig:iter} \end{figure*} We perform a closed-loop simulation of the ball and 
plate system with the MPCT and HMPC controllers. The system is initialized at the origin and the objective is to steer it to the position $\zb_1 = 1.8$, $\zb_2 = 1.4$, i.e. $$x_r = (1.8, 0, 0, 0, 1.4, 0, 0, 0), \; u_r = (0, 0).$$ The optimization problems of both controllers are solved using version v0.7.1 of the solver COSMO \cite{Garstka_COSMO_19}. The settings of the solver are set to the default values with the exception of the tolerances \texttt{eps\_abs}, \texttt{eps\_rel}, \texttt{eps\_prim\_inf} and \texttt{eps\_dual\_inf}, which were set to $10^{-5}$. The parameters of the controllers, which were manually tuned to provide an adequate closed-loop performance, are described in Table \ref{tab:MPC:parameters}. We compare the HMPC controller with $N=5$ to three MPCT controllers with prediction horizons $N= 5, 8, 15$. The prediction horizon $N=15$ was chosen by finding the lowest value for which the MPCT controller performed well. The performance is measured as $$\Phi \doteq \Sum{k=1}{N_\text{iter}} \| x_k - x_r \|^2_Q + \| u_k - u_r \|^2_R,$$ where $x_k$, $u_k$ are the states and control actions throughout the simulation and $N_\text{iter} = 50$ is the number of sample times. Table \ref{tab:performance:index:comparison} shows the performance index for each one of the controllers.
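The performance index $\Phi$ can be computed directly from logged closed-loop data. A small helper (variable names are hypothetical; $Q$, $R$, $x_r$, $u_r$ are taken from the case study) assuming state and input trajectories stored row-wise:

```python
import numpy as np

def performance_index(X, U, x_r, u_r, Q, R):
    """Phi = sum_k ||x_k - x_r||_Q^2 + ||u_k - u_r||_R^2 over a simulation."""
    phi = 0.0
    for x_k, u_k in zip(X, U):
        e_x, e_u = x_k - x_r, u_k - u_r
        phi += float(e_x @ Q @ e_x + e_u @ R @ e_u)
    return phi

# Weights and reference from the case study
Q = np.diag([10, 0.05, 0.05, 0.05, 10, 0.05, 0.05, 0.05])
R = np.diag([0.5, 0.5])
x_r = np.array([1.8, 0, 0, 0, 1.4, 0, 0, 0.0])
u_r = np.zeros(2)

# A trajectory that sits exactly at the reference accrues zero cost
X = np.tile(x_r, (50, 1))
U = np.tile(u_r, (50, 1))
assert performance_index(X, U, x_r, u_r, Q, R) == 0.0
```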
{\renewcommand{\arraystretch}{1.2}% \begin{table}[t] \centering \begin{threeparttable} \caption{Parameters of the controllers} \label{tab:MPC:parameters} \begin{tabular}{llll} \toprule Parameter & \multicolumn{3}{l}{Value} \\ \midrule $Q$ & \multicolumn{3}{l}{\textit{diag}$( 10, 0.05, 0.05, 0.05, 10, 0.05, 0.05, 0.05)$} \\ $T_e$ & \multicolumn{3}{l}{\textit{diag}$( 600, 50, 50, 50, 600, 50, 50, 50)$} \\ \midrule Parameter & Value & Parameter & Value \\ \midrule $R$ & \textit{diag}$(0.5, 0.5)$ & $S_e$ & \textit{diag}$(0.3, 0.3)$ \\ $T_h$ & $T_e$ & $S_h$ & $0.5 S_e$ \\ $T_a$ & $T_e$ & $S_a$ & $S_e$ \\ $N$ & $5$, $8$, $15$ (MPCT); $5$ (HMPC) & $\epsilon$ & $( 10^{-4}, 10^{-4}, 10^{-4})$\\ $w$ & $0.3254$ & & \\ \bottomrule \end{tabular} \begin{tablenotes}[flushleft] \footnotesize \item \textit{diag}$(\cdot)$ denotes a diagonal matrix with the indicated elements. \end{tablenotes} \end{threeparttable} \end{table}} {\renewcommand{\arraystretch}{1.1}% \begin{table}[t] \centering \caption{Performance comparison between controllers} \label{tab:performance:index:comparison} \begin{tabular}{ccccc} \toprule Controller & \multicolumn{3}{c}{MPCT} & HMPC \\ \cmidrule(lr){2-4}\cmidrule(lr){5-5} Prediction horizon $(N)$ & 5 & 8 & 15 & 5 \\ \midrule Performance $(\Phi)$ & $2014.1$ & $844.1$ & $488.9$ & $511.1$ \\ \bottomrule \end{tabular} \end{table}} Figure \ref{fig:comparison} shows the closed-loop simulation results for each controller. Figures \ref{fig:comparison:position} and \ref{fig:comparison:velocity} show the position and velocity of the ball on axis 1, i.e. $\zb_1$ and $\dot{\zb}_1$, respectively. Figure \ref{fig:comparison:input} shows the control input on axis 1, i.e. $\ddot{\theta}_1$. Finally, Figure \ref{fig:comparison:plate} shows the trajectory of the ball on the plate. The markers indicate the position of the ball at sample times $10$, $20$ and $30$ for each one of the controllers.
The computation times of the HMPC controller and the MPCT controller with the prediction horizon $N=15$ are shown in Figure \ref{fig:computation:times}. Notice that the velocities obtained with the MPCT controllers with small prediction horizons are far away from their upper bound of $0.5$. The HMPC controller, on the other hand, reached much higher velocities even though its prediction horizon is also small. This results in a much faster convergence of the HMPC controller, as can be seen in Figures \ref{fig:comparison:position} and \ref{fig:comparison:plate}. If the prediction horizon of the MPCT controller is sufficiently large (e.g. $N=15$), then this issue no longer persists. To understand why this happens, let us compare the solution of the HMPC controller with that of the MPCT controller with $N=8$. Figure \ref{fig:iter} shows a snapshot at sample time $15$ of the same simulation shown in Figure \ref{fig:comparison}. Lines marked with an asterisk are the past states from iteration $k = 0$ to the \textit{current} state at iteration $k = 15$, those marked with circumferences are the predicted states $\vv{x}$ for $j\in\N_0^N$, and those marked with dots are the artificial reference. The positions of the markers line up with the value of the signals at each sample time, e.g. each asterisk marks the value of the state at each sample time $k\in\N_0^{15}$. Figures \ref{fig:iter:HMPC:vel} and \ref{fig:iter:MPCT:vel} show the velocity $\dot{\zb}_1$ of the ball on axis 1 for the HMPC and MPCT controllers, respectively. Figures \ref{fig:iter:HMPC:pos} and \ref{fig:iter:MPCT:pos} show the position of the ball on the plate. The reason why the velocity does not exceed ${\approx}{0.2}$ with the MPCT controller can be seen in Figure \ref{fig:iter:MPCT:vel}. The predicted states of the MPCT controller must reach a steady state at $j = N$ (see constraint \eqref{eq:MPCT:Terminal}).
In our example this translates into the velocity having to reach $0$ within a prediction window of length $N=8$. This is what limits the velocity of the ball. A velocity of $0.5$ is not attainable with an MPCT controller with a prediction horizon of $N=8$ because there are no admissible control input sequences $\vv{u}$ capable of steering the velocity from $0.5$ to $0$ in $8$ sample times. This issue does not occur with the HMPC controller because it does not have to reach a steady state at the end of the prediction horizon, as can be seen in Figure \ref{fig:iter:HMPC:vel}. Instead, it must reach an admissible ``steady state'' harmonic reference, which can have a non-zero velocity. It is clear from this discussion, and the results of the MPCT controller with $N=15$, that this issue will become less and less pronounced as the prediction horizon is increased. However, for low values of the prediction horizon, the HMPC controller can provide a significantly better performance than the MPCT controller, as shown in the example presented here. \begin{remark} \label{rem:MPCT:issue} The performance advantages of a (suitably tuned) HMPC are especially noticeable if the system presents integrator states and/or slew rate constraints, as is the case in the example shown above. However, the issue that affects the performance of the MPCT controller is that the state cannot ``move far away'' from the subspace of steady states of the system due to the presence of input constraints coupled with the low prediction horizon. As such, the performance advantage of the HMPC controller may still be present in a wider range of systems. Moreover, the HMPC controller can be viewed as an MPCT with an added degree of freedom. Therefore, it can always be tuned to perform at the very least as well as the MPCT controller.
In any case, as shown in \cite{Krupa_CDC_19}, the HMPC controller will provide an enlargement of the domain of attraction with respect to the MPCT controller. \end{remark} \section{Practical selection of parameter $w$} \label{sec:selection:w} This section discusses the selection of parameter $w$ of the HMPC controller, providing a simple, intuitive approach for its selection. It is important to note that the stability and recursive feasibility of the controller are satisfied for any value of $w$. However, the performance of the controller can be improved by a proper selection of this parameter. There are two main considerations to be made. The first one is related to the phenomenon of aliasing and to the selection of the sampling time for continuous-time systems. This will provide an upper bound on $w$. The second one is related to the frequency response of linear systems, which will provide some insight into the selection of an initial, and well suited, value of $w$. Subsequent fine tuning may provide better results, but this initial value of $w$ should work well in practice and provide a good starting point. \subsection{Upper bound of $w$} \label{sec:selection:w:upper:bound} The signals \eqref{eq:harmonic:signals} parametrized by any feasible solution $(\xH, \uH)$ of \eqref{eq:HMPC} satisfy the discrete-time system dynamics, as discussed in Section \ref{sec:HMPC}. Therefore, all that remains is to select $w$ small enough such that signals \eqref{eq:harmonic:signals} describe a suitably sampled signal. In order to prevent the aliasing phenomenon, $w$ must be chosen below the Nyquist frequency for anti-aliasing, i.e. ${w < \pi}$ \cite{Shannon_1949}. However, since the inputs are applied using a zero-order hold, we would recommend taking \begin{equation} \label{eq:selection:w:upper:bound} w \leq \frac{\pi}{2}.
\end{equation} In any case, the stability and recursive feasibility of the controller will not be lost, since Theorems \ref{theo:Recursive:Feasibility} and \ref{theo:HMPC:Stability} do not make any assumptions on the value of $w$, but the benefits of using HMPC instead of MPCT may be lost if this bound is not respected. Indeed, for $w = 2\pi$, HMPC is identical to MPCT. \subsection{Selection of a suitable $w$} \label{sec:selection:w:bode} There are three additional considerations to be made for selecting an adequate $w$: \textit{(i)} high frequencies equate to fast system responses, \textit{(ii)} high frequencies tend to have small input-to-state gains, and \textit{(iii)} the presence of state constraints. At first glance, it would seem that selecting a high value of $w$ would lead to fast system responses. However, this need not be the case, since the gain of the system tends to diminish as the frequency of the input increases, i.e. if $w$ is selected in the \textit{high frequency} band of the system. If the gain is low, then $x_h$ is very similar to a constant signal of value $\xe$, which results in HMPC behaving very similarly to MPCT. Therefore, $w$ should be selected taking into account the gain of the system. A tentative lower bound for $w$ is then the highest frequency of the \textit{low frequency} band of the system. However, a final consideration can be made with regard to the system constraints as follows: the presence of constraints can override the desire for frequencies with large system gains. For instance, take as an example a system with a static gain of $4$ with an input $u$ subject to $|u| \leq 1$ and a state $x$ subject to $|x| \leq 2$. Then, selecting a $w$ whose Bode gain is close to the static gain of the system is not desirable because the amplitude of $u_h$ will be limited by the constraints on $x_h$.
Therefore, we can select a higher frequency. In this case, a proper selection might be to choose $w$ as the frequency whose Bode gain is $2$. \begin{example} \normalfont As an example, take the case study of Section \ref{sec:HMPC:performance}. Figure \ref{fig:selection:w} shows the closed-loop simulation of HMPC controllers using the parameters from Table \ref{tab:MPC:parameters} but different realizations of $w$. For the results shown in Section \ref{sec:simulation}, $w = 0.3254$ was selected as the cutoff frequency of the Bode plot from $u$ to $\dot{z}$. It was chosen this way because of the constraints $|u| \leq 0.4$ and $|\dot{z}| \leq 0.5$. As shown in the figure, choosing a lower $w$, such as $w = 0.7 \cdot 0.3254 = 0.2278$, would have resulted in a higher gain, which would be pointless due to these constraints, and in an overall slower convergence due to the lower frequency and the smaller amplitude of $\{u_h\}$. On the other hand, choosing a higher $w$, such as $w = \pi/2$, leads to a small frequency gain. This results in a harmonic reference signal $\{x_h\}$ that is very similar to a constant signal, leading to a poor performance. Finally, we show one of the possible undesirable effects of choosing a $w$ that does not satisfy \eqref{eq:selection:w:upper:bound}. In this case, selecting $w = 2 \pi$ makes the HMPC controller identical to the MPCT controller with the same prediction horizon. We should note that all the simulations shown in Figure \ref{fig:selection:w} eventually converge to the reference.
\end{example} \begin{figure}[t] \centering \includegraphics[width=\columnwidth]{HMPC_compare_w.eps} \caption{Closed-loop simulation for different realizations of $w$.} \label{fig:selection:w} \end{figure} \begin{remark} \label{rem:selection:w:MIMO} If the system has multiple states/inputs, then the above considerations should be made extrapolating the idea to the frequency response of MIMO systems. \end{remark} \section{Conclusions} \label{sec:conclusions} This paper presents a novel MPC formulation for tracking piece-wise constant references that can significantly outperform other MPC formulations in the case of small prediction horizons, as well as provide a larger domain of attraction. This is due to the fact that the terminal state does not need to reach a steady state of the system, but instead just needs to reach a periodic trajectory of the system given by a single harmonic signal. We find that the performance advantage is especially noticeable in systems with integrators and/or subject to slew rate constraints, which are very typical, for example, in robotic applications. The computation times needed to solve the HMPC problem with the COSMO solver suggest that its online implementation in embedded systems might be attainable, especially if a specialized solver is developed. Additionally, the controller does not require the computation of a terminal set, and its recursive feasibility (and asymptotic stability) is guaranteed even in the event of a reference change. These properties are welcome in any setting, but particularly so when dealing with embedded systems. \begin{appendix} \subsection{Collection of properties} \label{app:properties} This section contains three properties from the appendix of \cite{Krupa_CDC_19} which are used in several of the proofs of this manuscript. They are included here for completeness.
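The harmonic identities underlying these properties can also be sanity-checked numerically. The sketch below (arbitrary numeric values) verifies that a one-step phase advance of a harmonic signal amounts to a planar rotation of its $(\sin, \cos)$ coefficient pair, and that this rotation preserves the per-component amplitude, as stated in Property \ref{prop:simple:armonics}:

```python
import numpy as np

w = 0.3254  # the value of w used in the case study; any w works here
H = np.array([[np.cos(w), -np.sin(w)],
              [np.sin(w),  np.cos(w)]])

rng = np.random.default_rng(1)
v_e, v_s, v_c = rng.normal(size=3), rng.normal(size=3), rng.normal(size=3)

def v(l, s, c):  # harmonic signal v_l = v_e + s sin(w l) + c cos(w l)
    return v_e + s * np.sin(w * l) + c * np.cos(w * l)

# One-step phase advance = rotation of the (sin, cos) coefficient pair
v_s_p = v_s * np.cos(w) - v_c * np.sin(w)
v_c_p = v_s * np.sin(w) + v_c * np.cos(w)
for l in range(20):
    assert np.allclose(v(l + 1, v_s, v_c), v(l, v_s_p, v_c_p))

# H_w is orthogonal, so the per-component amplitude is preserved
assert np.allclose(H.T @ H, np.eye(2))
assert np.allclose(v_s_p**2 + v_c_p**2, v_s**2 + v_c**2)
```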
\begin{property} \label{prop:simple:armonics} Let the elements $v_\ell \inR{n_v}$ of a sequence $\{v\}$ be given by $$v_\ell = v_e + v_s \sin(w \ell) + v_c \cos(w \ell),\; \forall \ell\in \N,$$ where $w\in \R$ and $v_e, v_s, v_c \inR{n_v}$. Then, $$ v_{\ell+1} = v_e + v_s^+ \sin(w \ell) + v_c^+ \cos(w \ell), \; \forall \ell \in \N,$$ where \begin{align*} v_s^+ &= v_s \cos(w) - v_c \sin(w), \\ v_c^+ &= v_s \sin(w) + v_c \cos(w). \end{align*} Moreover, $$(v_{s (i)}^+)^2 + (v_{c (i)}^+)^2 = v_{s (i)}^2 + v_{c (i)}^2, \; i\in\N_1^{n_v}.$$ \end{property} \begin{proof} The proof relies on the following well-known trigonometric identities \begin{align*} \sin(\alpha+\beta) &= \sin(\alpha)\cos(\beta)+ \cos(\alpha)\sin(\beta) \\ \cos(\alpha+\beta) &= \cos(\alpha)\cos(\beta)-\sin(\alpha)\sin(\beta). \end{align*} From these expressions we obtain \begin{align*} \sin(w (\ell+1)) &= \sin(w) \cos(w \ell) + \cos(w) \sin(w \ell) \\ \cos(w (\ell+1)) &= \cos(w) \cos(w \ell) - \sin(w) \sin(w \ell). \end{align*} Therefore, \begin{align*} v_{\ell+1}&= v_e + v_s \sin(w(\ell+1)) + v_c \cos(w(\ell+1)) \\ &= v_e + v_s \left[ \sin(w)\cos(w\ell) + \cos(w)\sin(w\ell) \right]\\ &\quad + v_c \left[ \cos(w)\cos(w\ell) - \sin(w)\sin(w\ell) \right] \\ &= v_e + \left[ v_s \cos(w) - v_c \sin(w) \right] \sin(w\ell) \\ &\quad + \left[ v_s \sin(w) + v_c \cos(w) \right] \cos(w\ell) \\ &= v_e + v_s^+ \sin(w\ell) + v_c^+ \cos(w\ell). \end{align*} This proves the first claim of the property. Denote now $$ \rm{H}_w \doteq \bmat{cc} \cos(w) & - \sin(w) \\ \sin(w) & \cos(w) \emat.$$ With this notation, $$ \bv v_{s (i)}^+ \\ v_{c (i)}^+ \ev = \rm{H}_w \bv v_{s (i)} \\ v_{c (i)} \ev, \; i\in\N_1^{n_v}.$$ From the identity $\sin^2(w)+\cos^2(w)=1$ we obtain $$\rm{H}_w\T \rm{H}_w = I_2.$$ We are now in a position to prove the last claim of the property. 
\begin{align*} &(v_{s (i)}^+)^2+ (v_{c (i)}^+)^2 = \left\| \bv v_{s (i)}^+ \\ v_{c (i)}^+ \ev \right\|^2 \\ & = \bv v_{s (i)} \\ v_{c (i)} \ev\T \rm{H}_w\T \rm{H}_w \bv v_{s (i)} \\ v_{c (i)} \ev \\ & = \bv v_{s (i)} \\ v_{c (i)} \ev\T \bv v_{s (i)} \\ v_{c (i)} \ev = v_{s (i)}^2 + v_{c (i)}^2. \qedhere \end{align*} \end{proof} \begin{property}\label{prop:periodic:dynamics} Given the system $x_{k+1}=Ax_k+Bu_k$, suppose that \begin{align*} & u_{N+\ell} = \ue + u_s \sin(w \ell) + u_c \cos(w \ell), \; \forall \ell \geq 0 \\ & x_N = \xe + x_c \\ & \xe = A \xe + B \ue \\ & x_s \cos(w) - x_c \sin(w) = A x_s + B u_s, \\ & x_s \sin(w) + x_c \cos(w) = A x_c + B u_c. \end{align*} Then $$x_{N+\ell} = \xe + x_s \sin(w\ell) + x_c \cos(w\ell), \; \forall \ell \geq 0.$$ \end{property} \begin{proof} Since $x_N = \xe + x_c$, the claim is trivially satisfied for $\ell=0$. Suppose now that the claim is satisfied for $\ell \geq 0$, we will show that it is also satisfied for $\ell +1$. Indeed, \begin{align*} x_{N+\ell+1} &= A x_{N+\ell} + Bu_{N+\ell} \\ &= A \left[ \xe + x_s \sin(w\ell) + x_c \cos(w\ell) \right] \\ &\quad + B \left[ \ue + u_s \sin(w\ell) + u_c \cos(w\ell) \right] \\ &= A \xe + B \ue + (A x_s + B u_s) \sin(w\ell) \\ &\quad + (A x_c + B u_c) \cos(w\ell) \\ &= \xe + \left[ x_s \cos(w) - x_c \sin(w) \right] \sin(w\ell) \\ &\quad + \left[ x_s \sin(w) + x_c \cos(w) \right] \cos(w\ell) \\ &\numeq{*} \xe + x_s \sin(w(\ell+1)) + x_c \cos(w(\ell+1)). \end{align*} We note that equality $(*)$ is due to Property \ref{prop:simple:armonics}. \end{proof} \begin{property} \label{prop:bounds} Let the elements $v_\ell \inR{n_v}$ of a sequence $\{v\}$ be given by $$v_\ell = v_e + v_s \sin(w\ell) + v_c \cos(w\ell), \; \forall \ell \in \N,$$ where $w\in\R$ and $v_e, v_s, v_c\inR{n_v}$. 
Then, for every $\ell \in\N$ and $i\in\N_1^{n_v}$ we have \begin{subequations} \begin{align} v_{\ell (i)} &\leq v_{e (i)} + \sqrt{ v_{s (i)}^2 + v_{c (i)}^2}, \label{eq:bounds:upper} \\ v_{\ell (i)} &\geq v_{e (i)} - \sqrt{ v_{s (i)}^2 + v_{c (i)}^2}. \label{eq:bounds:lower} \end{align} \end{subequations} \end{property} \begin{proof} We prove inequality \eqref{eq:bounds:upper}. The proof for \eqref{eq:bounds:lower} is similar. \begin{align*} v_{\ell (i)} & = v_{e (i)} + v_{s (i)} \sin(w\ell) + v_{c (i)} \cos(w\ell) \\ & = v_{e (i)} + \bmat{cc} v_{s (i)} & v_{c (i)} \emat \bv \sin(w\ell) \\ \cos(w\ell) \ev\\ & \leq v_{e (i)} + \left\| \bv v_{s (i)} \\ v_{c (i)} \ev \right\| \; \left\| \bv \sin(w\ell) \\ \cos(w\ell) \ev\right\| \\ & = v_{e (i)} + \sqrt{v_{s (i)}^2 + v_{c (i)}^2}. \qedhere \end{align*} \end{proof} \subsection{Proof of the recursive feasibility of HMPC} \label{app:proof:recursive:feasibility} \begin{proof}[Proof of Theorem \ref{theo:Recursive:Feasibility}] We begin by proving the first claim. Since $u_j=\bar{u}_j$ for $j\in\N_0^{N-1}$ and $x_0 = x$, we obtain by a direct inspection of \eqref{eq:HMPC:cond:inic} and \eqref{eq:HMPC:dynamics} that \begin{equation} \label{eq:x:bar:x} x_j =\bar{x}_j , \; j\in\N_0^N. \end{equation} This implies $$ C x_j + D u_j = C \bar{x}_j + D \bar{u}_j, \; j\in\N_0^{N-1}.$$ Therefore, we have from inequality \eqref{ineq:HMPC:z:first} that \begin{equation}\label{ineq:z:hasta:N:minus} z_m \leq C x_j + D u_j \leq z_M, \; j\in\N_0^{N-1}. \end{equation} We now prove that these inequalities also hold for $j \geq N$. From \eqref{eq:def:u:j} we have that $$ u_{N+\ell} = \ue + u_s \sin(w\ell) + u_c \cos(w\ell), \; \ell \geq 0.$$ From \eqref{eq:x:bar:x} and \eqref{eq:HMPC:xN} we also have that $x_N = \bar{x}_N = \xe + x_c$.
Taking also into consideration equalities \eqref{eq:HMPC:xe} to \eqref{eq:HMPC:xc} we obtain \begin{align*} & u_{N+\ell} = \ue + u_s \sin(w\ell) + u_c \cos(w\ell), \; \ell \geq 0 \\ & x_N = \xe + x_c \\ & \xe =A \xe + B \ue \\ & x_s \cos(w) - x_c \sin(w) = A x_s + B u_s, \\ & x_s \sin(w) + x_c \cos(w) = A x_c + B u_c, \end{align*} which, along with Property \ref{prop:periodic:dynamics}, allows us to write $$x_{N+\ell} = \xe + x_s \sin(w\ell) + x_c \cos(w\ell), \; \forall \ell \geq 0.$$ Therefore, we have that \begin{align*} z_{N+\ell} &= Cx_{N+\ell} + Du_{N+\ell} \\ & = C \xe + D \ue + ( C x_s + D u_s )\sin(w\ell) \\ &\quad + ( C x_c + D u_c )\cos(w\ell)\\ & = z_e + z_s \sin(w\ell) + z_c \cos(w\ell), \end{align*} where the last equality is simply due to the definitions of $z_e$, $z_s$ and $z_c$ \eqref{eq:def:z}. From this expression of $z_{N+\ell}$ and Property \ref{prop:bounds} we deduce that for every $\ell \geq 0$ and $i\in\N_1^{n_z}$, \begin{subequations} \begin{align*} z_{N+\ell, (i)} &\leq z_{e (i)} + \sqrt{ z_{s (i)}^2 + z_{c (i)}^2}, \\ z_{N+\ell, (i)} &\geq z_{e (i)} - \sqrt{ z_{s (i)}^2 + z_{c (i)}^2}. \end{align*} \end{subequations} From this, alongside inequalities \eqref{ineq:HMPC:z:minus} and \eqref{ineq:HMPC:z:plus}, we obtain that $$ \hat{z}_{m (i)} \leq z_{N+\ell,(i)} \leq \hat{z}_{M (i)}, \; \forall i\in\N_1^{n_z}, \; \forall \ell\geq 0.$$ Since by construction ${z_{m (i)} \leq \hat{z}_{m (i)}}$ and ${\hat{z}_{M (i)} \leq z_{M (i)}}$ (see Remark \ref{rem:Admissible}), we have that \begin{equation}\label{ineq:z:bounds} z_m \leq C x_{N+\ell} + D u_{N+\ell} \leq z_M, \; \forall \ell\geq 0, \end{equation} which, along with \eqref{ineq:z:hasta:N:minus}, proves \eqref{ineq:z:totales}. We now prove the second claim, i.e. $A x + B \bar{u}_0$ belongs to the feasibility region of $\lH(A x + B \bar{u}_0; x_r, u_r)$.
To do so, we show that \begin{subequations} \label{eq:Shift:all} \begin{align} & \bar{u}_j^+ \doteq \bar{u}_{j+1}, \; j\in\N_0^{N-2} \label{eq:Shift:u:shift} \\ & \bar{u}_{N-1}^+ \doteq \ue + u_c \label{eq:Shift:uN} \\ & \bar{x}_0^+ \doteq A x +B \bar{u}_0 \label{eq:Shift:x:inic} \\ & \bar{x}_{j+1}^+ \doteq A \bar{x}_j^+ +B \bar{u}_j^+, \;j\in\N_0^{N-1} \label{eq:Shift:x:shift} \\ & \ue^+ \doteq \ue \label{eq:Shift:ue} \\ & u_s^+ \doteq u_s \cos(w) - u_c \sin(w) \\ & u_c^+ \doteq u_s \sin(w) + u_c \cos(w) \\ & \bmat{ccc} \xe^+ & x_s^+ & x_c^+ \emat \doteq \bmat{cc} A & B \emat \bmat{ccc} \xe & x_s & x_c \\ \ue & u_s & u_c \emat \label{eq:Shift:x_h} \end{align} \end{subequations} is a feasible solution for the initial condition $A x + B \bar{u}_0$ by showing that \eqref{eq:Shift:all} satisfies constraints \eqref{eq:HMPC:dynamics} to \eqref{ineq:HMPC:z:plus}. That is, we prove in what follows that \begin{subequations} \begin{align} & \bar{x}_{j+1}^+ = A \bar{x}_j^+ + B\bar{u}_j^+,\; j\in\N_0^{N-1} \label{eq:Feas:dynamics}\\ & z_m \leq C \bar{x}_j^+ + D \bar{u}_j^+ \leq z_M,\; j\in\N_0^{N-1} \label{ineq:Feas:z:first}\\ & \bar{x}^+_0 = A x + B \bar{u}_0 \label{eq:Feas:cond:inic}\\ & \bar{x}_N^+ = \xe^+ + x_c^+ \label{eq:Feas:xN}\\ & \xe^+ = A \xe^+ + B \ue^+ \label{eq:Feas:xe} \\ & x_s^+ \cos(w) - x_c^+ \sin(w) = A x_s^+ + B u_s^+ \label{eq:Feas:xs}\\ & x_s^+ \sin(w) + x_c^+ \cos(w) = A x_c^+ + B u_c^+ \label{eq:Feas:xc}\\ & \sqrt{ (z_{s (i)}^+)^2 + (z_{c (i)}^+)^2 } \leq z_{e (i)}^+ - \hat{z}_{m (i)},\; i\in\N_1^{n_z} \label{ineq:Feas:z:minus}\\ & \sqrt{ (z_{s (i)}^+)^2 + (z_{c (i)}^+)^2 } \leq \hat{z}_{M (i)} - z_{e (i)}^+,\; i\in\N_1^{n_z}, \label{ineq:Feas:z:plus} \end{align} \end{subequations} where variables $z_e^+$, $z_s^+$ and $z_c^+$ are given by \begin{equation*} \bmat{ccc} z_e^+ & z_s^+ & z_c^+ \emat \doteq \bmat{cc} C & D \emat \bmat{ccc} \xe^+ & x_s^+ & x_c^+ \\ \ue^+ & u_s^+ & u_c^+ \emat. 
\end{equation*} Equalities \eqref{eq:Feas:dynamics} and \eqref{eq:Feas:cond:inic} are trivially satisfied by construction (see \eqref{eq:Shift:x:inic}-\eqref{eq:Shift:x:shift}). Since $\bar{x}_{0}^+ = A x + B \bar{u}_0 = \bar{x}_1$, and $\bar{u}_j^+ = \bar{u}_{j+1}$, $j\in\N_0^{N-2}$ (see \eqref{eq:Shift:u:shift}), we have \begin{equation} \label{eq:equivalence:step:bxj_buj} \bv \bar{x}^+_j \\ \bar{u}_j^+ \ev = \bv \bar{x}_{j+1} \\ \bar{u}_{j+1} \ev, \;j\in\N_0^{N-2}. \end{equation} Therefore, from \eqref{ineq:HMPC:z:first} we obtain \begin{equation}\label{ineq:z:solo:Nminusdos} z_m \leq C \bar{x}_j^+ + D \bar{u}_j^+ \leq z_M, \;j\in\N_0^{N-2}. \end{equation} We now compute the value of $\bar{x}^+_{N-1}$. \begin{align} \label{eq:equivalence:bxN} \bar{x}_{N-1}^+ &= A \bar{x}_{N-2}^+ + B \bar{u}_{N-2}^+ \nonumber \\ &= A \bar{x}_{N-1} + B \bar{u}_{N-1} = \bar{x}_N = \xe + x_c. \end{align} Since $\bar{u}_{N-1}^+ = \ue + u_c$ we obtain $$ z_{N-1}^+ = C (\xe + x_c) + D (\ue + u_c) = z_e + z_c.$$ Defining $$z_{N+\ell} = z_e + z_s \sin(w\ell) + z_c \cos(w\ell), \; \forall \ell \in \N$$ we have $z_{N-1}^+ = z_{N}$. This, along with \eqref{ineq:z:bounds}, yields $$ z_m \leq C\bar{x}_{N-1}^+ + D\bar{u}_{N-1}^+ \leq z_M.$$ From this and \eqref{ineq:z:solo:Nminusdos}, we conclude $$ z_m \leq C\bar{x}_j^+ + D\bar{u}_j^+ \leq z_M, \;j\in\N_0^{N-1},$$ which proves \eqref{ineq:Feas:z:first}. The value of $\bar{x}_N^+$ can be computed from $\bar{x}_{N-1}^+ = \bar{x}_N$ and $\bar{u}_{N-1}^+ = \ue + u_c$ as follows. \begin{align*} \bar{x}_N^+ &= A \bar{x}_{N-1}^+ + B \bar{u}_{N-1}^+ = A \bar{x}_N + B(\ue + u_c) \\ &= A(\xe + x_c)+ B(\ue + u_c) = \xe^+ + x_c^+, \end{align*} which proves \eqref{eq:Feas:xN}. From \eqref{eq:Shift:x_h} and \eqref{eq:HMPC:xe} we have \begin{equation*} \xe^+ = A \xe + B \ue = \xe, \end{equation*} which, together with the equality $\ue^+ = \ue$ (see \eqref{eq:Shift:ue}), yields $$\xe^+ = A \xe + B \ue = A \xe^+ + B \ue^+,$$ which proves \eqref{eq:Feas:xe}. We now prove \eqref{eq:Feas:xs}. 
\small \begin{align*} A x_s^+ + Bu_s^+ &= A( A x_s + Bu_s) + B u_s^+ \\ &= A(x_s \cos(w) - x_c \sin(w)) \\ &+ B(u_s \cos(w) - u_c \sin(w)) \\ &= (A x_s + B u_s) \cos(w) - (A x_c + B u_c) \sin(w) \\ &= x_s^+ \cos(w) - x_c^+ \sin(w). \end{align*} \normalsize We prove \eqref{eq:Feas:xc} in a similar way. \small \begin{align*} A x_c^+ + B u_c^+ &= A( A x_c + Bu_c) + B u_c^+ \\ &= A(x_s \sin(w) + x_c \cos(w)) \\ &+ B(u_s \sin(w) + u_c \cos(w)) \\ &= (A x_s + B u_s) \sin(w) + (A x_c + B u_c) \cos(w) \\ &= x_s^+ \sin(w) + x_c^+ \cos(w). \end{align*} \normalsize Next, we express $z_e^+$, $z_s^+$ and $z_c^+$ in terms of $z_e$, $z_s$, $z_c$. \small \begin{align*} z_e^+ &= C \xe^+ + D \ue^+ = C \xe + D\ue = z_e.\\ z_s^+ &= C x_s^+ + D u_s^+ = C( A x_s + B u_s ) + D u_s^+ \\ &= C( x_s \cos(w) - x_c \sin(w)) + D( u_s \cos(w) - u_c \sin(w)) \\ &= (C x_s + D u_s) \cos(w) - (C x_c + D u_c) \sin(w)\\ &= z_s \cos(w) - z_c \sin(w).\\ z_c^+ &= C x_c^+ + D u_c^+ = C( A x_c + B u_c ) + D u_c^+ \\ &= C( x_s \sin(w) + x_c \cos(w)) + D( u_s \sin(w) + u_c \cos(w)) \\ &= (C x_s + D u_s) \sin(w) + (C x_c + D u_c) \cos(w)\\ &= z_s \sin(w) + z_c \cos(w). \end{align*} \normalsize Therefore, for every $i\in\N_1^{n_z}$ we have \begin{align*} &z_{e (i)}^+ = z_{e (i)} \\ &z_{s (i)}^+ = z_{s (i)} \cos(w) - z_{c (i)} \sin(w) \\ &z_{c (i)}^+ = z_{s (i)} \sin(w) + z_{c (i)} \cos(w). \end{align*} In view of Property \ref{prop:simple:armonics} this leads to $$ \sqrt{(z_{s (i)}^+)^2 + (z_{c (i)}^+)^2} = \sqrt{z_{s (i)}^2 + z_{c (i)}^2}, \; i\in\N_1^{n_z}. $$ From this we conclude that inequalities \eqref{ineq:Feas:z:minus} and \eqref{ineq:Feas:z:plus} are directly inferred from \eqref{ineq:HMPC:z:minus} and \eqref{ineq:HMPC:z:plus}. \end{proof} \subsection{Proof of the asymptotic stability of the HMPC controller} \label{app:proof:stability:HMPC} The proof of Theorem \ref{theo:HMPC:Stability} relies on the following lemma. 
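Before stating it, we remark that the rotation identity behind Property \ref{prop:simple:armonics}, used at the end of the feasibility proof above, is straightforward to check numerically. The following Python sketch (illustrative only, not part of the formal development) verifies that the one-step update of the harmonic coefficients is a planar rotation and hence preserves $\sqrt{z_{s (i)}^2 + z_{c (i)}^2}$.

```python
import math

def rotate(zs, zc, w):
    # One-step update of the harmonic coefficients: a planar rotation by w.
    return (zs * math.cos(w) - zc * math.sin(w),
            zs * math.sin(w) + zc * math.cos(w))

zs, zc, w = 2.0, -1.5, 0.7
amp = math.hypot(zs, zc)  # sqrt(zs^2 + zc^2) = 2.5
for _ in range(25):
    zs, zc = rotate(zs, zc, w)
    # The amplitude is invariant under the update.
    assert abs(math.hypot(zs, zc) - amp) < 1e-12
```

Since the update matrix is orthogonal, the invariance is immediate; this is exactly why inequalities \eqref{ineq:Feas:z:minus} and \eqref{ineq:Feas:z:plus} are inherited by the shifted solution.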
An important conclusion that can be drawn from it is that if the current state of the system $x_k$ satisfies $x_k = x_e^\circ$, then the optimal solution of $\lH(x_k; x_r, u_r)$ satisfies ${x_{h0}^* = x_e^\circ = x_k}$. The proofs of Theorem \ref{theo:HMPC:Stability} and the following lemma are heavily influenced by the proofs of Theorem 1 and Lemma 1 from \cite{Limon_TAC_2018}, respectively. \begin{lemma} \label{lemma:stability:x:is:optimal} Consider a system \eqref{eq:Model} subject to \eqref{eq:Constraints} and a reference $(x_r, u_r)$. Let $x$ be a state such that the optimal solution of $\lH(x; x_r, u_r)$ satisfies ${x_{h0}^*=x}$. Then, ${\lH^*(x; x_r, u_r) = \offsetCostH(\vv{x}_H^\circ, \vv{u}_H^\circ; x_r, u_r)}$. \end{lemma} \begin{proof} Due to space considerations, we will drop the dependency w.r.t. $(x_r, u_r)$ from the notation of the functions. Let ${\offsetCostH^* \doteq \offsetCostH(\xH^*, \uH^*)}$ and ${\offsetCostH^\circ \doteq \offsetCostH(\vv{x}_H^\circ, \vv{u}_H^\circ)}$. It can be shown that $\lH^*(x; x_r, u_r) = \offsetCostH(\xH^*, \uH^*)$ if $x_{h0}^* = x$, i.e. that the optimal solution of $\lH(x; x_r, u_r)$ is given by \begin{equation} \label{eq:lemma:stability:x:is:optimal:solution} x_j^* = x_{h,j}^*, \quad u_j^* = u_{h,j}^*, \quad \forall j\in\N_0^{N-1}. \end{equation} Indeed, the stage cost of \eqref{eq:lemma:stability:x:is:optimal:solution} is $\stageCostH(\vv{x}^*, \vv{u}^*, \xH^*, \uH^*) = 0$, which is its smallest possible value for all solutions in which $x_{h0}^* = x$. Additionally, it can be shown that \eqref{eq:lemma:stability:x:is:optimal:solution} is a feasible solution of \eqref{eq:HMPC:dynamics}-\eqref{ineq:HMPC:z:plus}. Next, we prove that $\offsetCostH^* = \offsetCostH^\circ$ by contradiction. Assume that $\offsetCostH^* > \offsetCostH^\circ$. 
Since $(\vv{x}_H^\circ, \vv{u}_H^\circ)$ is the unique minimizer of $\offsetCostH(\cdot)$ for all $(\xH, \uH)$ that satisfy \eqref{eq:HMPC:xe}-\eqref{ineq:HMPC:z:plus}, this implies that $(\xH^*, \uH^*) \neq (\vv{x}_H^\circ, \vv{u}_H^\circ)$. Let $\hat{\vv{x}}_H$ be defined as \begin{align*} \hat{\vv{x}}_H &= (\hat{x}_e, \hat{x}_s, \hat{x}_c) = \lambda \xH^* + (1 - \lambda) \xH^\circ \\ &= \lambda (\xe^*, x_s^*, x_c^*) + (1-\lambda) (x_e^\circ, 0, 0), \; \lambda \in [0, 1], \end{align*} and $\hat{\vv{u}}_H$ similarly. From the convexity of $\cc{Z}$ and the fact that $(x_{h,j}^*, u_{h,j}^*) \in \ri{\cc{Z}}$ for all $j\in\N$ (as can be deduced from Property \ref{prop:bounds} and the fact that $(x_{h,j}^*, u_{h,j}^*)$ satisfies \eqref{ineq:HMPC:z:minus} and \eqref{ineq:HMPC:z:plus}), there exists a $\hat{\lambda} \in [0, 1)$ such that for any $\lambda \in [\hat{\lambda}, 1]$ there is a dead-beat control law $\vv{u}^\text{db}$ for which the predicted trajectory $\vv{x}^\text{db}$ satisfying $x^\text{db}_0 = x_{h0}^*$ and $x^\text{db}_N = \hat{x}_{h0}$ is a feasible solution $(\vv{x}^\text{db}, \vv{u}^\text{db}, \hat{\vv{x}}_H, \hat{\vv{u}}_H)$ of problem $\lH(x_{h0}^*; x_r, u_r)$. 
Then, taking into account the optimality of \eqref{eq:lemma:stability:x:is:optimal:solution}, and noting that there exists a matrix $P\in\Sp{n}$ such that $$\Sum{j=0}{N-1} \| x_j^\text{db} - x_e^\circ \|^2_Q + \| u_j^\text{db} - u_e^\circ \|^2_R \leq \|x_0^\text{db} - x_e^\circ \|^2_P,$$ we have that \begin{align} \label{eq:lemma:stability:offsetCostH} \offsetCostH^* &= \costFunH(\vv{x}^*, \vv{u}^*, \xH^*, \uH^*) \leq \costFunH(\vv{x}^\text{db}, \vv{u}^\text{db}, \hat{\vv{x}}_H, \hat{\vv{u}}_H) \nonumber \\ &= \stageCostH(\vv{x}^\text{db}, \vv{u}^\text{db}, \hat{\vv{x}}_H, \hat{\vv{u}}_H) + \offsetCostH(\hat{\vv{x}}_H, \hat{\vv{u}}_H) \nonumber \\ &\leq \| x_{h0}^* - \hat{x}_{h0} \|^2_P + \offsetCostH(\hat{\vv{x}}_H, \hat{\vv{u}}_H) \nonumber \\ &\numeq{*} (1 - \lambda)^2 \| x_{h0}^* - x_e^\circ \|^2_P + \offsetCostH(\hat{\vv{x}}_H, \hat{\vv{u}}_H), \end{align} where step $(*)$ uses \begin{align*} x_{h0}^* - \hat{x}_{h0} &= x_{h0}^* - \left[ \lambda x_{h0}^* + (1-\lambda) x_{h0}^\circ \right] \\ &= (1-\lambda) (x_{h0}^* - x_{h0}^\circ) = (1-\lambda) (x_{h0}^* - x_e^\circ). \end{align*} From the convexity of $\offsetCostH(\cdot)$ we have that \begin{equation*} \offsetCostH(\hat{\vv{x}}_H, \hat{\vv{u}}_H) \leq \lambda \offsetCostH^* + (1-\lambda) \offsetCostH^\circ, \; \lambda \in [0, 1], \end{equation*} which combined with \eqref{eq:lemma:stability:offsetCostH} leads to \begin{equation} \label{eq:lemma:stability:x:is:optimal:upper:bound} \offsetCostH^* \leq \Gamma(\lambda), \, \lambda \in [\hat\lambda, 1], \end{equation} where \begin{equation*} \Gamma(\lambda) \doteq (1 - \lambda)^2 \| x_{h0}^* - x_e^\circ \|^2_P + \lambda (\offsetCostH^* - \offsetCostH^\circ) + \offsetCostH^\circ. \end{equation*} The derivative of $\Gamma(\lambda)$ (w.r.t. 
$\lambda$) is $$\nabla \Gamma(\lambda) = -2 (1 - \lambda) \| x_{h0}^* - x_e^\circ \|^2_P + (\offsetCostH^* - \offsetCostH^\circ).$$ Taking into account the initial assumption $\offsetCostH^* - \offsetCostH^\circ > 0$, we have that $\nabla \Gamma(1) > 0$. Therefore, there exists a $\lambda \in [\hat\lambda, 1)$ such that $\Gamma(\lambda) < \Gamma(1) = \offsetCostH^*$, which together with \eqref{eq:lemma:stability:x:is:optimal:upper:bound} leads to the contradiction $\offsetCostH^* < \offsetCostH^*$. This contradiction shows that $\offsetCostH(\xH^*, \uH^*) \leq \offsetCostH(\vv{x}_H^\circ, \vv{u}_H^\circ)$. Moreover, since $(\vv{x}_H^\circ, \vv{u}_H^\circ)$ is the unique minimizer of $\offsetCostH(\xH, \uH)$ for all $(\xH, \uH)$ that satisfy \eqref{eq:HMPC:xe}-\eqref{ineq:HMPC:z:plus}, we conclude that $\offsetCostH(\xH^*, \uH^*) = \offsetCostH(\vv{x}_H^\circ, \vv{u}_H^\circ)$. \qedhere \end{proof} \begin{proof}[Proof of Theorem \ref{theo:HMPC:Stability}] The proof is divided into two parts. First, we show that $(x_e^\circ, u_e^\circ)$ is a stable equilibrium point of the closed-loop system by deriving a suitable Lyapunov function, and next, we show that it is attractive. Let us consider a state $x$ belonging to the domain of attraction of the HMPC controller and a reference $(x_r, u_r)$. Let $\vv{x}^*$, $\vv{u}^*$, $\xH^*$, $\uH^*$ be the optimal solution of $\lH(x; x_r, u_r)$ and $\lH^*(x; x_r, u_r) \doteq \costFunH(\vv{x}^*, \vv{u}^*, \xH^*, \uH^*)$ be its optimal value. Additionally, let $\offsetCostH^*(x_r, u_r) \doteq \offsetCostH(\xH^*, \uH^*; x_r, u_r)$. We will now show that the function $$W(x; x_r, u_r) = \lH^*(x; x_r, u_r) - \offsetCostH^\circ(x_r, u_r)$$ is a Lyapunov function for $x - x_e^\circ$ by finding suitable $\alpha_1(\|x - x_e^\circ\|)$ and $\alpha_2(\|x - x_e^\circ\|)$ $\cc{K}_\infty$-class functions such that the conditions of Theorem \ref{theo:Lyapunov:Stability} are satisfied. 
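As an aside, the contradiction step in the proof of Lemma \ref{lemma:stability:x:is:optimal} above can be illustrated with concrete numbers. In the following Python sketch (purely illustrative; the scalar $d$ stands in for $\| x_{h0}^* - x_e^\circ \|^2_P$ and \texttt{delta} for the assumed positive gap $\offsetCostH^* - \offsetCostH^\circ$), the positivity of $\nabla\Gamma(1)$ yields a $\lambda < 1$ with $\Gamma(\lambda) < \Gamma(1)$.

```python
# Illustrative values, not from the paper: d plays the role of ||x_h0* - x_e||_P^2,
# delta the assumed gap offset* - offset(circ), and off0 the value offset(circ).
d, delta, off0 = 2.0, 0.5, 1.0

def Gamma(lam):
    return (1.0 - lam) ** 2 * d + lam * delta + off0

def dGamma(lam):
    # Derivative of Gamma with respect to lambda.
    return -2.0 * (1.0 - lam) * d + delta

assert dGamma(1.0) > 0          # Gamma is increasing at lambda = 1 ...
lam = 1.0 - delta / (4.0 * d)   # ... so a value just below 1 gives
assert Gamma(lam) < Gamma(1.0)  # Gamma(lam) < Gamma(1) = offset*
```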
Due to space considerations, in this proof we will drop the dependency w.r.t. $(x_r, u_r)$ from the notation of the functions. Let $x^+ \doteq A x + B \bar{u}_0^*$ be the successor state and consider the shifted sequence $\vv{x}^+$, $\vv{u}^+$, $\xH^+$, $\uH^+$ defined as in \eqref{eq:Shift:all} but taking $\vv{x}^*$, $\vv{u}^*$, $\xH^*$, $\uH^*$ in the right-hand side of the equations. It is clear from the proof of Theorem \ref{theo:Recursive:Feasibility} that this shifted sequence is a feasible solution of $\lH(x^+; x_r, u_r)$. The satisfaction of condition \textit{(i)} of Theorem \ref{theo:Lyapunov:Stability} follows from \begin{align} \label{eq:stability:proof:condition:i} W(x) &= \Sum{j=0}{N-1} \| x_j^* - x_{h,j}^* \|_Q^2 + \| u_j^* - u_{h,j}^* \|_R^2 + \offsetCostH^* - \offsetCostH^\circ \nonumber \\ &\numeq[\geq]{A1} \| x_0^* - x_{h0}^* \|_Q^2 + \frac{\hat\sigma}{2} \|x_{h0}^* - x_e^\circ \|^2 \nonumber \\ & \geq \min\{ \lambda_\text{min} (Q), \frac{\hat\sigma}{2}\} \left( \|x - x_{h0}^*\|^2 + \|x_{h0}^* - x_e^\circ \|^2 \right) \nonumber \\ &\numeq[\geq]{A2} \frac{1}{2} \min\{ \lambda_\text{min} (Q), \frac{\hat\sigma}{2} \} \| x - x_e^\circ \|^2, \end{align} where $(A2)$ is due to the parallelogram law, which states that for any two vectors $v_1, v_2 \inR{n_v}$, $$\|v_1\|^2 + \|v_2\|^2 = \frac{1}{2}\|v_1 + v_2\|^2 + \frac{1}{2}\|v_1 - v_2\|^2,$$ and $(A1)$ follows from the fact that \begin{equation} \label{eq:strong:convexity:condition:offsetCostH} \offsetCostH^* - \offsetCostH^\circ \geq \frac{\hat\sigma}{2} \|x_{h0}^* - x_e^\circ\|^2 \end{equation} for some $\hat\sigma > 0$. To show this, note that $\offsetCostH(\cdot)$ is a strongly convex function. 
Therefore, it satisfies for some $\sigma > 0$ \cite[Theorem 5.24]{Beck_SIAM_17}, \cite[\S 9.1.2]{Boyd_ConvexOptimization}, \begin{equation*} \offsetCostH(z) - \offsetCostH(y) \geq \sp{\partial \offsetCostH(y)}{z - y} + \frac{\sigma}{2} \| z - y \|^2, \end{equation*} for all $z, y \in \R^n \times \R^n \times \R^n \times \R^m \times \R^m \times \R^m$. Particularizing for $z = (\xH^*, \uH^*)$ and $y = (\vv{x}_H^\circ, \vv{u}_H^\circ)$ we have that, \begin{align*} \offsetCostH^* - \offsetCostH^\circ &\geq \sp{\partial \offsetCostH^\circ}{(\xH^*, \uH^*) - (\vv{x}_H^\circ, \vv{u}_H^\circ)} \\ &\quad+ \frac{\sigma}{2} \| (\xH^*, \uH^*) - (\vv{x}_H^\circ, \vv{u}_H^\circ) \|^2. \end{align*} From the optimality of $(\vv{x}_H^\circ, \vv{u}_H^\circ)$ we have that \cite[Proposition 5.4.7]{Bertsekas_Convex_2009}, \cite[\S 4.2.3]{Boyd_ConvexOptimization}, \begin{align*} \sp{\partial \offsetCostH^\circ}{(\xH, \uH) - (\vv{x}_H^\circ, \vv{u}_H^\circ)} \geq 0 \end{align*} for all $(\xH, \uH)$ satisfying \eqref{eq:HMPC:xe}-\eqref{ineq:HMPC:z:plus}. Since $(\xH^*, \uH^*)$ satisfies \eqref{eq:HMPC:xe}-\eqref{ineq:HMPC:z:plus}, this leads to \begin{align*} &\offsetCostH^* - \offsetCostH^\circ \geq \frac{\sigma}{2} \| (\xH^*, \uH^*) - (\vv{x}_H^\circ, \vv{u}_H^\circ) \|^2 \\ &= \frac{\sigma}{2} \left( \| \xH^* - \vv{x}_H^\circ \|^2 + \| \uH^* - \vv{u}_H^\circ \|^2 \right) \\ &\geq \frac{\sigma}{2} ( \|{\xe^*} - {x_e^\circ} \|^2 + \| x_s^* \|^2 + \| x_c^* \|^2 ) \\ &\geq \frac{\sigma}{2} ( \|{\xe^*} - {x_e^\circ} \|^2 + \| x_s^* \sin(- w N) \|^2 {+} \| x_c^* \cos(- w N)\|^2 ), \end{align*} where we are making use of the definition of the 2-norm of the Cartesian product of standard Euclidean spaces, i.e. given vectors $v_i \in\R^{n_v}$ for $i \in\N_1^M$, \begin{equation*} \| (v_1, v_2, ..., v_M) \|^2 = \Sum{i=1}{M} \| v_i \|^2. 
\end{equation*} Inequality \eqref{eq:strong:convexity:condition:offsetCostH} then follows from the fact that there exists a scalar $\hat\sigma > 0$ such that \begin{align*} &\frac{\sigma}{2} ( \|{\xe^*} - {x_e^\circ} \|^2 + \| x_s^* \sin(- w N) \|^2 + \| x_c^* \cos(- w N)\|^2 ) \\ &\geq \frac{\hat\sigma}{2} \|\xe^* - x_e^\circ + x_s^* \sin(- w N) + x_c^* \cos(- w N) \|^2 \\ &= \frac{\hat\sigma}{2} \| x_{h0}^* - x_e^\circ \|^2. \end{align*} Since $(x_e^\circ, u_e^\circ) \in \ri{\cc{Z}}$ (see Lemma \ref{lemma:optimal:artificial:reference:HMPC}), the system is controllable and $N$ is greater than its controllability index, there exists a sufficiently small compact set $\Omega$ containing the origin in its interior such that, for all states $x$ that satisfy $x - x_e^\circ \in \Omega$, the dead-beat control law \begin{equation*} \label{eq:dead:beat:control:law} u_j^\text{db} = K_\text{db} (x_j^\text{db} - x_e^\circ) + u_e^\circ \end{equation*} provides an admissible predicted trajectory $\vv{x}^\text{db}$ of system \eqref{eq:Model} subject to \eqref{eq:Constraints}, where $x^\text{db}_{j+1} = A x^\text{db}_j + B u^\text{db}_j,\; j \in\N_0^{N-1}$, ${x^\text{db}_0 = x}$ and ${x^\text{db}_N = x_e^\circ}$. Then, taking into account the optimality of $\vv{x}^*$, $\vv{u}^*$, $\xH^*$, $\uH^*$, we have that \begin{align*} W(x) &= \stageCostH(\vv{x}^*, \vv{u}^*, \xH^*, \uH^*) + \offsetCostH(\xH^*, \uH^*) - \offsetCostH^\circ \\ &\leq \stageCostH(\vv{x}^\text{db}, \vv{u}^\text{db}, \vv{x}_H^\circ, \vv{u}_H^\circ) + \offsetCostH(\vv{x}_H^\circ, \vv{u}_H^\circ) - \offsetCostH^\circ \\ &\leq \Sum{j=0}{N-1} \| x_j^\text{db} - x_e^\circ \|^2_Q + \| u_j^\text{db} - u_e^\circ \|^2_R. \end{align*} Therefore, there exists a matrix $P\in\Sp{n}$ such that \begin{align*} W(x) \leq \lambda_\text{max}(P) \|x - x_e^\circ\|^2 \end{align*} for any $x - x_e^\circ \in\Omega$. This shows the satisfaction of condition \textit{(ii)} of Theorem \ref{theo:Lyapunov:Stability}. 
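As a quick sanity check of the parallelogram law used in step $(A2)$ of \eqref{eq:stability:proof:condition:i}, the following Python sketch (purely illustrative) verifies the identity, and the lower bound derived from it, on random vectors.

```python
import random

random.seed(0)

def sqnorm(v):
    # Squared Euclidean norm of a vector given as a list of floats.
    return sum(x * x for x in v)

for _ in range(200):
    v1 = [random.uniform(-3.0, 3.0) for _ in range(4)]
    v2 = [random.uniform(-3.0, 3.0) for _ in range(4)]
    s = [x + y for x, y in zip(v1, v2)]
    dvec = [x - y for x, y in zip(v1, v2)]
    lhs = sqnorm(v1) + sqnorm(v2)
    # ||v1||^2 + ||v2||^2 = 0.5*||v1 + v2||^2 + 0.5*||v1 - v2||^2 ...
    assert abs(lhs - 0.5 * sqnorm(s) - 0.5 * sqnorm(dvec)) < 1e-9
    # ... and hence ||v1||^2 + ||v2||^2 >= 0.5*||v1 + v2||^2.
    assert lhs >= 0.5 * sqnorm(s) - 1e-9
```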
Next, let $\Delta W(x) \doteq W(x^+) - W(x)$ and note that, as shown by \eqref{eq:Shift:uN}, \eqref{eq:Shift:x_h}, \eqref{eq:equivalence:step:bxj_buj}, \eqref{eq:equivalence:bxN} and Property \ref{prop:simple:armonics}, we have that ${x_j^+ = x_{j+1}^*}$ and $u_j^+ = u_{j+1}^*$ for $j\in\N_0^{N-1}$, and that $x_{h,j}^+ = x_{h,j+1}^*$ and $u_{h,j}^+ = u_{h,j+1}^*$ for $j\in\N$. Then, condition \textit{(iii)} of Theorem \ref{theo:Lyapunov:Stability} follows from \begin{align*} \Delta W(x) &= \lH^*(x^+) - \offsetCostH^\circ - \lH^*(x) + \offsetCostH^\circ \\ &\leq \lH(x^+) - \lH^*(x) \\ &= \Sum{j=0}{N-1} \| x_j^+ - x_{h,j}^+ \|_Q^2 + \| u_j^+ - u_{h,j}^+ \|_R^2\\ & - \Sum{j=0}{N-1} \left( \| x_j^* - x_{h,j}^* \|_Q^2 + \| u_j^* - u_{h,j}^* \|_R^2 \right) \\ & + \offsetCostH(\xH^+, \uH^+) - \offsetCostH(\xH^*, \uH^*) \\ & \numeq{*} \Sum{j=1}{N-1} \|x_j^* - x_{h,j}^*\|_Q^2 + \|u_j^* - u_{h,j}^*\|_R^2 \\ & + \| x_N^* - x_{h,N}^* \|_Q^2 + \|u_N^* - u_{h,N}^*\|_R^2 \\ & - \Sum{j=0}{N-1} \left( \|x_j^* - x_{h,j}^*\|_Q^2 + \|u_j^* - u_{h,j}^*\|_R^2 \right) \\ & = -\| x_0^* - x_{h0}^* \|_Q^2 - \| u_0^* - u_{h0}^* \|_R^2 \\ & \leq - \lambda_\text{min}(Q) \| x - x_{h0}^* \|^2, \end{align*} where step $(*)$ uses the fact that $$\offsetCostH(\xH^+, \uH^+) = \offsetCostH(\xH^*, \uH^*).$$ Indeed, note that $\xe^+ = \xe^*$ and $\ue^+ = \ue^*$. 
Therefore, the first two terms of $\offsetCostH(\xH^+, \uH^+)$ (see \eqref{eq:HMPC:Offset:Cost}) are the same as those of $\offsetCostH(\xH^*, \uH^*)$. We now show that, since $T_h$ and $S_h$ are diagonal matrices, the terms $\| x_s \|_{T_h}^2 + \| x_c \|_{T_h}^2$ are also the same (the terms $\| u_s \|_{S_h}^2 + \| u_c \|_{S_h}^2$ follow similarly). \begin{align*} \| x_s^+ \|_{T_h}^2 + \| x_c^+ \|_{T_h}^2 &= \| x_s^* \cos(w) - x_c^* \sin(w) \|_{T_h}^2 \\ &+ \| x_s^* \sin(w) + x_c^* \cos(w) \|_{T_h}^2 \\ &= ( \sin(w)^2 + \cos(w)^2 ) \| x_s^* \|_{T_h}^2 \\ &+ ( \sin(w)^2 + \cos(w)^2 ) \| x_c^* \|_{T_h}^2 \\ &+ 2 \cos(w) \sin(w) \sp{x_s^*}{T_h x_c^*} \\ &- 2 \cos(w) \sin(w) \sp{x_s^*}{T_h x_c^*} \\ &= \| x_s^* \|_{T_h}^2 + \| x_c^* \|_{T_h}^2. \end{align*} We have thus far shown that $x_e^\circ$ is a stable steady state of the closed-loop system. We now prove its asymptotic stability by noting that the inequality \begin{equation*} W(x_{k+1}) - W(x_k) \leq - \lambda_\text{min}(Q) \| x_k - x_{h0}^*(x_k) \|^2 \end{equation*} implies that \begin{equation} \label{eq:stability:proof:limit:xk} \lim\limits_{k \rightarrow \infty} \| x_k - x_{h0}^*(x_k) \| = 0. \end{equation} Since the sequence $W(x_k)$ is non-negative and non-increasing, we also have that \begin{equation*} \lim\limits_{k \rightarrow \infty} W(x_k) = W_\infty. \end{equation*} From Lemma \ref{lemma:stability:x:is:optimal}, we have that if $\|x - x_{h0}^*(x)\| = 0$, then $W(x) = 0$. Therefore, in view of \eqref{eq:stability:proof:limit:xk}, we have that \begin{equation*} \lim\limits_{k \rightarrow \infty} W(x_k) = W_\infty = 0. \end{equation*} We now take limits on both sides of inequality \eqref{eq:stability:proof:condition:i}, \begin{equation*} \lim\limits_{k \rightarrow \infty} \frac{1}{2} \min\{ \lambda_\text{min}(Q), \frac{\hat\sigma}{2} \} \| x_k - x_e^\circ \|^2 \leq \lim\limits_{k \rightarrow \infty} W(x_k) = 0, \end{equation*} to finally conclude that \[\lim\limits_{k \rightarrow \infty} \| x_k - x_e^\circ \| = 0. \qedhere \] \end{appendix}
Q: Missing permissions required by GoogleMap.setMyLocationEnabled: android.permission.ACCESS_COARSE_LOCATION or android.permission.ACCESS_FINE_LOCATION I need to get the device's current location. When the permissions are checked, I get the error "Missing permissions required by GoogleMap.setMyLocationEnabled: android.permission.ACCESS_COARSE_LOCATION or android.permission.ACCESS_FINE_LOCATION". I can't figure out what I'm doing wrong. AndroidManifest.xml <?xml version="1.0" encoding="utf-8"?> <manifest xmlns:android="http://schemas.android.com/apk/res/android" xmlns:tools="http://schemas.android.com/tools"> <uses-permission android:name="android.permission.ACCESS_COARSE_LOCATION"/> <uses-permission android:name="android.permission.ACCESS_FINE_LOCATION"/> <uses-permission android:name="android.permission.INTERNET"/> <uses-permission android:name="android.permission.ACCESS_BACKGROUND_LOCATION" /> <application android:allowBackup="true" android:dataExtractionRules="@xml/data_extraction_rules" android:fullBackupContent="@xml/backup_rules" android:icon="@mipmap/ic_launcher" android:label="@string/app_name" android:roundIcon="@mipmap/ic_launcher_round" android:supportsRtl="true" android:theme="@style/Theme.TestApp" tools:targetApi="31"> <meta-data android:name="com.google.android.geo.API_KEY" android:value="${MAPS_API_KEY}"/> <meta-data android:name="com.google.android.gms.version" android:value="@integer/google_play_services_version"/> <activity android:name=".MainActivity" android:exported="true" android:label="@string/app_name" android:theme="@style/Theme.TestApp"> <intent-filter> <action android:name="android.intent.action.MAIN" /> <category android:name="android.intent.category.LAUNCHER" /> </intent-filter> <meta-data android:name="android.app.lib_name" android:value="" /> </activity> </application> </manifest> Permission check function: private fun isLocationPermissionGranted(): Boolean { return if (ActivityCompat.checkSelfPermission(this, ACCESS_FINE_LOCATION) != 
PackageManager.PERMISSION_GRANTED && ActivityCompat.checkSelfPermission(this, ACCESS_COARSE_LOCATION) != PackageManager.PERMISSION_GRANTED) { ActivityCompat.requestPermissions(this, arrayOf(ACCESS_FINE_LOCATION, ACCESS_COARSE_LOCATION), locationPermissionCode) false } else { map.isMyLocationEnabled = true true } } Screenshot: A: But aren't coordinates obtained through a LocationManager? Something like this: locationManager.requestLocationUpdates(LocationManager.GPS_PROVIDER, minGPSTime, minGPSDistance, locationListener); // location listener private LocationListener locationListener = new LocationListener() { // fired when the observer's coordinates change @Override public void onLocationChanged(Location location) { showLocation(location); } // event fired when the provider is disabled @Override public void onProviderDisabled(String provider) { checkEnabled(); } // event fired when the provider is enabled @Override public void onProviderEnabled(String provider) { checkEnabled(); if (ActivityCompat.checkSelfPermission(getApplicationContext(), Manifest.permission.ACCESS_FINE_LOCATION) != PackageManager.PERMISSION_GRANTED && ActivityCompat.checkSelfPermission(getApplicationContext(), Manifest.permission.ACCESS_COARSE_LOCATION) != PackageManager.PERMISSION_GRANTED) { return; } showLocation(locationManager.getLastKnownLocation(provider)); } // status change event @Override public void onStatusChanged(String provider, int status, Bundle extras) { if (provider.equals(LocationManager.GPS_PROVIDER)) { mGpsProviderStatus = "Status: " + String.valueOf(status); } else if (provider.equals(LocationManager.NETWORK_PROVIDER)) { mNetProviderStatus = "Status: " + String.valueOf(status); } } };
Q: Take 4:3 photos with AVFoundation in Swift I've made a custom camera for my App and I use AVFoundation with an AVCaptureSession to let the user take photos. Now, the video stream coming from the camera is what seems like 16:9 so they can only take 16:9 photos. I'm not sure how the iPhone's own App handles this (images 4:3, video 16:9 - do they zoom in?) but I want to do the same. Not cutting the image afterwards or so, I want the true 4:3 signal that would show in the camera App. (correct me if Apple does exactly that... :D) Tell me if you need code or so but it's basically the standard AVCaptureSession with an AVCaptureDeviceInput(device: //the camera) added as input. Thank you in advance! A: I believe you should be able to control the image by setting the AVCaptureSession's sessionPreset property. I believe setting this to AVCaptureSessionPresetPhoto should do what you want. But if that doesn't work, by all means do post the code and I'll take a look
\section{Introduction} In this note, we study submanifold geometry of the Atiyah--Hitchin manifold, a double cover of the $2$-monopole moduli space, which plays an important role in various settings such as the supersymmetric background of string theory. When the manifold is naturally identified as the total space of a line bundle over $S^2$, the zero section is a distinguished minimal $2$-sphere of considerable interest. In particular, there has been a conjecture \cite[Remark on p.262]{ref_MW} about the uniqueness of this minimal $2$-sphere among all closed minimal $2$-surfaces. We show that this minimal $2$-sphere satisfies the ``strong stability condition'' proposed in our earlier work \cite{ref_TsaiW2}, and confirm the global uniqueness as a corollary. \newcommand\ang{{\frac{\psi}{2}}} \section{The Atiyah--Hitchin manifold} We start by reviewing the geometry of the Atiyah--Hitchin manifold, which is denoted by $M$ throughout this paper. The underlying manifold\footnote{The Atiyah--Hitchin manifold in the literature often refers to a $\mathbb{Z}/2$ quotient of $M$ as a bundle over $\mathbb{RP}^2$. The manifold $M$ here is an ALF space of type $D_1$.} $M$ is a degree $-4$ complex line bundle over $S^2$. Utilizing the standard charts on $S^2$, $z, w:\mathbb{C}\to S^2$ with $z={1}/{w}$, we consider the following co-frame on the unit circle bundle ($e^{i\psi}\in S^1$) over $S^2$: \begin{align*} \sm^1 &= \oh\left({\dd\psi} + 2i\frac{z\dd\br{z} - \br{z}\dd z}{1+|z|^2}\right) ~, &\sm^2 &= \re\left[\frac{2\,e^{i\ang}\,\dd z}{1+|z|^2}\right] ~, &\sm^3 &= \im\left[\frac{2\,e^{i\ang}\,\dd z}{1+|z|^2}\right] ~. \end{align*} Although there is ambiguity in the definitions of $\sm^2$ and $\sm^3$, $(\sm^2)^2$, $(\sm^3)^2$ and $\sm^2\w\sm^3$ are well-defined. In particular, $(\sm^2)^2+(\sm^3)^2=\frac{4 |\dd z|^2}{(1+|z|^2)^2}$ represents the standard round metric of constant Gauss curvature $1$ on $S^2$. 
The $1$-forms $\sm^1, \sm^2$ and $\sm^3$ satisfy the relation $\dd\sm^1 = \sm^2\w\sm^3$, and its cyclic permutations. On the other chart, $(w,\vph) = (1/z,\psi + 4\arg z)$. The Riemannian metric on $M$ takes the following form \begin{align} \dd s^2 &= \dd r^2 + a^2(\sm^1)^2 + b^2(\sm^2)^2 + c^2(\sm^3)^2 \label{metric1} \end{align} where $a,b,c$ are functions in $r\in [0, \infty)$. Denoting by prime $(~)'$ the derivative with respect to $r$, these \emph{coefficient functions} $a$, $b$, and $c$ are determined by the following system of ODE's: \begin{align} {a'} &= \frac{a^2-(b-c)^2}{2bc} ~, &{b'} &= \frac{b^2-(c-a)^2}{2ca} ~, &{c'} &= \frac{c^2-(a-b)^2}{2ab} ~, \label{ODE1} \end{align} with the initial conditions $a(0)=0$, $b(0) = -m$, and $c(0)=m$ for a positive constant $m$. The manifold is oriented by $\dd r\w\sm^1\w\sm^2\w\sm^3$. The metric is complete and the variable $r$ is the geodesic distance to the zero section ($r=0$) with respect to \eqref{metric1}. The zero section, $r=0$, is a $2$-sphere denoted by $\Sigma$ and oriented by $\sm^2\w\sm^3$. The induced metric is round of radius $m$. $\Sigma$ is the minimal sphere referred to in the title of this paper. Here are some other basic properties of the coefficient functions; see \cite[ch.10 and 11]{ref_AH}. When $r>0$, $a$ and $c$ are positive; $b$ is \emph{negative}. Moreover, $a'$, $b'$ and $c'$ are all positive. The explicit forms of these functions can be found after a change of variable \cite[Theorem 11.18]{ref_AH}. However, the explicit forms are not needed in this paper. The key to solving \eqref{ODE1} explicitly is to rewrite the equations as \begin{align} (ca+ab)' &= \frac{2}{abc}(ca)(ab) ~, & (ab+bc)' &= \frac{2}{abc}(ab)(bc) ~, & (bc+ca)' &= \frac{2}{abc}(bc)(ca) ~. \label{ODE2} \end{align} The logarithmic derivatives of Jacobi theta functions obey the same equations, up to the factor $2/(abc)$. Hence, the solution can be constructed from elliptic integrals. 
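The passage from \eqref{ODE1} to \eqref{ODE2} is a pointwise algebraic identity: substituting the right-hand sides of \eqref{ODE1} into $(ca+ab)'$ and its cyclic analogues reproduces the right-hand sides of \eqref{ODE2}, each of which in fact simplifies to $2a$, $2b$ and $2c$, respectively. The following Python sketch (a numerical illustration, not part of the paper) verifies this for random values with $a, c > 0$ and $b < 0$.

```python
import random

random.seed(1)

def ode1(a, b, c):
    # Right-hand sides of the first-order system for a', b', c'.
    da = (a**2 - (b - c)**2) / (2.0 * b * c)
    db = (b**2 - (c - a)**2) / (2.0 * c * a)
    dc = (c**2 - (a - b)**2) / (2.0 * a * b)
    return da, db, dc

for _ in range(100):
    a = random.uniform(0.1, 2.0)
    b = -random.uniform(0.1, 2.0)   # b < 0 away from the zero section
    c = random.uniform(0.1, 2.0)
    da, db, dc = ode1(a, b, c)
    k = 2.0 / (a * b * c)
    # (ca+ab)' = (2/abc)(ca)(ab), and its cyclic permutations.
    assert abs((dc * a + c * da) + (da * b + a * db) - k * (c * a) * (a * b)) < 1e-9
    assert abs((da * b + a * db) + (db * c + b * dc) - k * (a * b) * (b * c)) < 1e-9
    assert abs((db * c + b * dc) + (dc * a + c * da) - k * (b * c) * (c * a)) < 1e-9
```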
\subsection{The geometry near the zero section $\Sigma$} \label{sec_nearby} It is useful to write down the series expansions of the coefficient functions at $r = 0$. With the initial condition $a(0)=0$, $-b(0) = m = c(0)$, one deduces from \eqref{ODE1} that \begin{align} a(r) &= 2r - \frac{1}{2m^2}r^3 + \CO(r^4) ~, &\begin{split} b(r) &= -m + \oh r - \frac{3}{8m}r^2 + \CO(r^3) ~, \\ c(r) &= m + \oh r + \frac{3}{8m}r^2 + \CO(r^3) ~. \end{split} \label{series1} \end{align} Here is an interesting point to make. The metric arises as the natural metric on the monopole moduli space \cite[ch.2 and 3]{ref_AH}, and is smooth. At first glance, it seems a little bit strange that the expansions of $b$ and $c$ have both even and odd degree terms. To see why, let \begin{align} q(r) = c(r)-b(r) \quad\text{and}\quad p(r) = c(r)+b(r) ~. \label{ds} \end{align} Note that $q(r)>0$ for any $r\geq0$, $q(0)=2m$ and $p(0)=0$. When $r>0$, \eqref{ODE2} implies that $(a\,p)'>0$, and thus $p>0$. The metric \eqref{metric1} can be rewritten as \begin{align*} \dd s^2 &= \dd r^2 + \frac{a^2}{4}\left({\dd\psi} + 2i\frac{z\dd\br{z} - \br{z}\dd z}{1+|z|^2}\right)^2 + \frac{q^2+p^2}{4}\frac{4\,|\dd z|^2}{(1+|z|^2)^2} - (2\,q\,p)\re\left[\frac{e^{i\psi}(\dd z)^2}{(1+|z|^2)^2}\right] ~. \end{align*} With the aforementioned conditions, the smoothness of the metric near $r=0$ is equivalent to $a(r)/r$, $p(r)/r$ and $q(r)$ being smooth functions in $r^2$. Equation \eqref{ODE1} in terms of $a$, $p$ and $q$ reads \begin{align*} a' &= \frac{2(a^2-q^2)}{p^2-q^2} ~, & q' &= \frac{2q(p^2-a^2)}{a(p^2-q^2)} ~, & p' &= 2+\frac{2p(q^2-a^2)}{a(p^2-q^2)} ~. \end{align*} From these equations and the initial conditions, one derives that $a$ and $p = c+b$ are odd functions in $r$, while $q = c-b$ is an even function in $r$. \begin{rmk} This property of $a,p,q$ may not be seen in some of the radial parameters used in the literature \cite{ref_AH1,ref_GM,ref_AH}. 
Those parameters are well suited for constructing the explicit form of the solution. However, at the zero section, those parameters only respect the $\CC^k$ topology for some $k\in\BN$, but not the smooth one. \end{rmk} \subsection{Connections and the ASD Einstein equation} We briefly recall the convention for connections and curvatures. For a Riemannian manifold with metric $\ip{\,}{\,}$ and Levi-Civita connection $\nabla$, our convention for the Riemann curvature tensor is \begin{align*} R(X,Y,Z,W) = \ip{\nabla_Z\nabla_W Y - \nabla_W\nabla_Z Y - \nabla_{[Z,W]} Y}{X} ~. \end{align*} Let $\{e_i\}$ be a local orthonormal frame. Denote the coefficient $1$-forms of the Levi-Civita connection by $\om_i^j$: $\nabla e_i = \om_i^j\ot e_j$. Since the frame is orthonormal, $\om_i^j = -\om_j^i$. Throughout this paper, we adopt the Einstein summation convention that repeated indices are summed. Denote the dual co-frame by $\{\om^i\}$; the covariant derivative of the co-frame is $\nabla\om^j = -\om_i^j\ot\om^i$. It follows that \begin{align*} \dd\om^j = -\om_i^j\w\om^i ~. \end{align*} The curvature form is \begin{align} \FR_i^j = \dd\om_i^j - \om_i^k\w\om_k^j ~. \label{R_curv1} \end{align} It is equivalent to the Riemann curvature tensor by the following relation: \begin{align} \FR_i^j(X,Y) = R(e_j,e_i,X,Y) \label{R_curv2} \end{align} for any two tangent vectors $X$ and $Y$. For the Atiyah--Hitchin manifold $M$ with the Riemannian metric given by \eqref{metric1}, consider the following orthonormal co-frame: \begin{align} \om^0 &= -\dd r ~, &\om^1 &= a\,\sm^1 ~, &\om^2 &= b\,\sm^2 ~, &\om^3 &= c\,\sm^3 ~. \end{align} Note that $\om^0\w\om^1\w\om^2\w\om^3$ is the positive orientation. Their exterior derivatives are \begin{align*} \dd\om^0 = 0 ~,\quad \dd\om^1 = -\frac{a'}{a} \om^0\w\om^1 + \frac{a}{bc}\om^2\w\om^3 ~, \end{align*} and the equations for $\dd\om^2$ and $\dd\om^3$ are similar.
It follows that \begin{align} \begin{split} \om_0^1 &= -\frac{a'}{a}\om^1 ~, \\ \om_2^3 &= -\oh\frac{b^2+c^2-a^2}{abc}\om^1 ~, \end{split} & \begin{split} \om_0^2 &= -\frac{b'}{b}\om^2 ~, \\ \om_3^1 &= -\oh\frac{a^2+c^2-b^2}{abc}\om^2 ~, \end{split} & \begin{split} \om_0^3 &= -\frac{c'}{c}\om^3 ~, \\ \om_1^2 &= -\oh\frac{a^2+b^2-c^2}{abc}\om^3 ~. \end{split} \label{conn1} \end{align} It is known that on a simply-connected $4$-manifold, the hyper-K\"ahler condition is equivalent to $0 = \FR_0^1 + \FR_2^3 = \FR_0^2 + \FR_3^1 = \FR_0^3 + \FR_1^2$. In terms of the curvature decomposition in four dimensions, this means that only the anti-self-dual Weyl curvature could be non-zero. Note that for $(i,j,k) = (1,2,3)$ and its cyclic permutation, \begin{align*} \FR_0^i + \FR_j^k = \dd(\om_0^i + \om_j^k) + (\om_0^j + \om_k^i)\w(\om_0^k + \om_i^j) ~, \end{align*} and thus vanishes if \begin{align} \om_0^i + \om_j^k = -\sm^i ~. \label{asd} \end{align} From \eqref{conn1}, this condition is exactly the equation \eqref{ODE1}. One can compare with the case of the Eguchi--Hanson metric, where $\om_0^i + \om_j^k$ vanishes. See, for example, \cite[Section 2]{ref_TsaiW}. \subsection{Hyper-K\"ahler structure} Recall that the hyper-K\"ahler structure is characterized by the existence of three linearly independent parallel self-dual $2$-forms. With the orientation $\om^0\w\om^1\w\om^2\w\om^3$, the space of self-dual 2-forms $\Ld^2_+$ is spanned by $\om^0\w\om^1 + \om^2\w\om^3$, $\om^0\w\om^2 + \om^3\w\om^1$, and $\om^0\w\om^3 + \om^1\w\om^2$. 
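Returning to the equivalence noted above: for $i=1$, the condition \eqref{asd} combined with \eqref{conn1} amounts to $a' + \frac{b^2+c^2-a^2}{2bc} = 1$, which is the first equation of \eqref{ODE1}. This one-line algebraic identity can be confirmed symbolically (an illustrative SymPy sketch, not part of the text):

```python
# Symbolic confirmation (illustrative; not part of the text) that (asd)
# together with (conn1) reproduces the first equation of (ODE1).
import sympy as sp

a, b, c = sp.symbols('a b c')

# om_0^1 + om_2^3 = -sigma^1 forces  a' = 1 - (b^2 + c^2 - a^2)/(2bc):
a_prime_asd = 1 - (b**2 + c**2 - a**2) / (2*b*c)
# the first equation of (ODE1):
a_prime_ode = (a**2 - (b - c)**2) / (2*b*c)

assert sp.simplify(a_prime_asd - a_prime_ode) == 0
```

The remaining two cases follow by the cyclic symmetry of $(a,b,c)$ and $(\sm^1,\sm^2,\sm^3)$.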
From \eqref{asd}, the Levi-Civita connection on $\Ld^2_+$ reads: \begin{align} \nabla(\om^0\w\om^1 + \om^2\w\om^3) &= - \sm^3\ot(\om^0\w\om^2 + \om^3\w\om^1) + \sm^2\ot(\om^0\w\om^3 + \om^1\w\om^2) ~, \label{asd1} \\ \nabla(\om^0\w\om^2 + \om^3\w\om^1) &= \sm^3\ot(\om^0\w\om^1 + \om^2\w\om^3) - \sm^1\ot(\om^0\w\om^3 + \om^1\w\om^2) ~, \notag \\ \nabla(\om^0\w\om^3 + \om^1\w\om^2) &= -\sm^2\ot(\om^0\w\om^1 + \om^2\w\om^3) + \sm^1\ot(\om^0\w\om^2 + \om^3\w\om^1) ~. \notag \end{align} We proceed to find three linearly independent parallel self-dual 2-forms. Consider the following parametrization of $\RSO(3)$: \begin{align*} S = \frac{1}{1+|z|^2}\begin{bmatrix} 2\re(z) & \im(e^{-i\ang}+e^{i\ang}z^2) & \re(e^{-i\ang}-e^{i\ang}z^2) \\ 2\im(z) & -\re(e^{-i\ang}+e^{i\ang}z^2) & \im(e^{-i\ang}-e^{i\ang}z^2) \\ 1-|z|^2 & 2\im(e^{i\ang}z) & -2\re(e^{i\ang}z) \end{bmatrix} ~. \end{align*} The Maurer--Cartan form is \begin{align*} S^{-1}\dd S = \begin{bmatrix} 0 & \sm^3 & -\sm^2 \\ -\sm^3 & 0 & \sm^1 \\ \sm^2 & -\sm^1 & 0 \end{bmatrix} ~, \end{align*} which is exactly the connection $1$-form in terms of the basis $\{\om^0\w\om^1 + \om^2\w\om^3,\om^0\w\om^2+\om^3\w\om^1,\om^0\w\om^3+\om^1\w\om^2\}$. Three parallel self-dual $2$-forms can be obtained by pairing the row vectors of $S$ with the above basis. It is easier to use the following expressions: \begin{align} \om^0\w\om^1 + \om^2\w\om^3 &= -a\,\dd r\w\sm^1 + \frac{p^2-q^2}{4}\frac{2i\,\dd z\w\dd\br{z}}{(1+|z|^2)^2} ~, \label{cal} \\ (\om^0\w\om^2 + \om^3\w\om^1) + i(\om^0\w\om^3 + \om^1\w\om^2) &= \frac{(p\,e^{i\ang}\,\dd z - q\,e^{-i\ang}\,\dd\br{z})\w(\dd r - ia\,\sm^1)}{1+|z|^2} \notag \end{align} where $p$ and $q$ are defined by \eqref{ds}. 
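As a quick consistency check on the parametrization above, one can verify numerically that $S$ is a special orthogonal matrix for generic parameter values (an illustrative sketch, not part of the text; the random samples are arbitrary, and `ang` stands for the angular variable appearing in $S$):

```python
# Numerical spot-check (illustrative; not part of the text) that the
# matrix S above lies in SO(3) for generic parameter values.  Here `ang`
# denotes the angular variable in the formula for S; orthogonality holds
# for every value of it.
import numpy as np

def S_matrix(z, ang):
    d = 1 + abs(z)**2
    A = np.exp(-1j*ang) + np.exp(1j*ang)*z**2
    B = np.exp(-1j*ang) - np.exp(1j*ang)*z**2
    w = np.exp(1j*ang)*z
    return np.array([[2*z.real,        A.imag,    B.real],
                     [2*z.imag,       -A.real,    B.imag],
                     [1 - abs(z)**2, 2*w.imag, -2*w.real]]) / d

rng = np.random.default_rng(0)
for _ in range(5):
    z = complex(rng.normal(), rng.normal())
    ang = float(rng.uniform(0, 2*np.pi))
    S = S_matrix(z, ang)
    assert np.allclose(S.T @ S, np.eye(3))    # rows/columns orthonormal
    assert np.isclose(np.linalg.det(S), 1.0)  # orientation preserving
```

Since the determinant is continuous in $(z,\mathrm{ang})$ and takes values in $\{\pm1\}$ on a connected parameter domain, checking $\det S = 1$ at sample points confirms it everywhere.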
Then, the $[\text{3rd row}]$ of $S$ gives \begin{align} \begin{split} & \frac{1-|z|^2}{1+|z|^2}\left[ \frac{(p^2-q^2)}{4}\frac{2i\,\dd z\w\dd\br{z}}{(1+|z|^2)^2} - a\,\dd r\w\sm^1 \right] \\ &\quad - 2\im\left[ \frac{\br{z}\,\dd z\w\left(p\,(\dd r - ia\,\sm^1)\right) - q\,\br{z}\,\dd\br{z}\w\left(e^{-i\psi}\,(\dd r - ia\,\sm^1)\right)}{(1+|z|^2)^2} \right] ~, \end{split} \label{kform} \end{align} and $[\text{1st row}] + i\,[\text{2nd row}]$ gives \begin{align} \begin{split} & \frac{2z}{1+|z|^2}\left[ \frac{(p^2-q^2)}{4}\frac{2i\,\dd z\w\dd\br{z}}{(1+|z|^2)^2} - a\,\dd r\w\sm^1 \right] \\ &\quad - i\frac{\dd z\w\left(p\,(\dd r - ia\,\sm^1)\right) - q\,\dd\br{z}\w\left(e^{-i\psi}\,(\dd r - ia\,\sm^1)\right)}{(1+|z|^2)^2} \\ &\qquad + i\frac{q\,z^2\dd z\w\left(e^{i\psi}\,(\dd r + ia\,\sm^1)\right) - z^2\,\dd\br{z}\w\left(p\,(\dd r + ia\,\sm^1)\right)}{(1+|z|^2)^2} ~. \end{split} \label{hvform} \end{align} Recall that $a(r) = 2r + r^{\text{odd}}$, $p(r) = r + r^{\text{odd}}$ and $q(r) = 2m + r^{\text{even}}$ near $r=0$. It follows that the $2$-forms \eqref{kform} and \eqref{hvform} are indeed smooth. From \eqref{kform} and \eqref{hvform}, one sees that the restrictions of the 2-forms to the zero section $\Sigma$ become \begin{align*} \frac{1-|z|^2}{1+|z|^2}\left[ \frac{-2im^2\,\dd z\w\dd\br{z}}{(1+|z|^2)^2}\right] \quad\text{and}\quad \frac{2z}{1+|z|^2}\left[ \frac{-2im^2\,\dd z\w\dd\br{z}}{(1+|z|^2)^2}\right] ~, \end{align*} and thus $\Sigma$ is the ``twistor'' sphere. Namely, it is the parameter space of the K\"ahler forms. The restriction of any K\"ahler form to $\Sigma$ has zero total integral. The homology class $[\Sigma]$ is a Lagrangian class with respect to any K\"ahler form. Denote the complex structure corresponding to the self-dual $2$-form given by the $[i\text{-th row}]$ of $S$ by $J_i$, and the complex structure on $\Sm$ by $J_{S^2}$.
By regarding the embedding of $\Sm$ as a map $u:S^2\to M$, the above computation shows that $J_i\circ\dd u = -x_i\,\dd u \circ J_{S^2}$, where $x_1, x_2$, and $x_3$ are the standard coordinate functions on $S^2$ satisfying $x_1+ix_2 = 2z/(1+|z|^2)$ and $x_3 = (1-|z|^2)/(1+|z|^2)$. In particular, the map $u$ obeys \begin{align} \dd u\circ J_{S^2} &= - x_1\,J_1\circ\dd u - x_2\,J_2\circ\dd u - x_3\,J_3\circ\dd u ~. \label{qtn} \end{align} \subsection{Curvatures} We compute the curvature components of $M$ in this section. Recalling the formula of the $\FR_0^1$ component \begin{align*} \FR_0^1= \dd\om_0^1-\om_0^2\wedge \om_2^1-\om_0^3\wedge\om_3^1 \end{align*} and substituting the connection forms from \eqref{conn1}, we derive \[\FR_0^1=\frac{a''}{a}\, \om^0\w\om^1 - \kp(a,b,c) \, \om^2\w\om^3 ~,\] where $\kp(a, b, c)$ is defined by \begin{align} \kp(a,b,c)\equiv \frac{1}{2(abc)^2} \left[ 2a^4 - a^2(b-c)^2 - a^3(b+c) + a(b-c)^2(b+c) - (b+c)^2(b-c)^2 \right] ~. \label{curvf} \end{align} On the other hand, from \eqref{ODE1}, it can be checked that ${a''}/{a}=\kp(a,b,c)$, or $R_{1001} = R_{2301}$, a fact that can be derived alternatively from the hyper-K\"ahler condition. One verifies directly that $\kp(a,b,c) = \kp(a,c,b)$ and $\kp(a,b,c) + \kp(c,a,b) + \kp(b,c,a) = 0$. Due to the formal cyclic symmetry of $(a,b,c)$, all the non-trivial components of the Riemann curvature tensor are listed as follows (up to the symmetry of the curvature tensor). \begin{align} \begin{cases} R_{1001} = R_{2301} = R_{2332} = \kp(a,b,c) = \displaystyle\frac{a''}{a} ~, & \medskip\\ R_{2002} = R_{3102} = R_{3113} = \kp(b,c,a) = \displaystyle\frac{b''}{b} ~, & \medskip \\ R_{3003} = R_{1203} = R_{1221} = \kp(c,a,b) = \displaystyle\frac{c''}{c} ~. & \end{cases} \label{curv} \end{align} \subsection{Totally geodesic surfaces} \label{sec_surface} In \cite[ch.7 and 12]{ref_AH}, two kinds of totally geodesic surfaces are introduced to study the geodesics of the ambient space \cite[ch.13]{ref_AH}. 
\begin{enumerate} \item In the formulation here, the first kind is the fiber of the $-4$-bundle. For example, set $z = 0$. The induced metric is $\dd r^2 + \frac{a^2}{4}\dd\psi^2$. \item The second kind is topologically a cylinder. For instance, consider $(r\,e^{i\psi},z) = (s\,e^{-2i\ta}, e^{i\ta})$ for $(s,e^{i\ta})\in\BR\times S^1$. The induced metric is $\dd s^2 + c^2\dd\ta^2$ for $s>0$, and $\dd s^2 + b^2\dd\ta^2$ for $s<0$. One may also take the $S^1$-factor to be the great circle, $\{\im z = 0\}$ or $\{\re z = 0\}$, and take the $\BR^1$-factor to be a line on the $re^{i\psi}$-plane with suitable direction. \end{enumerate} Each of the above examples is holomorphic with respect to some complex structure. The readers are directed to \cite{ref_AH} for more discussions. \section{Geometric properties of the minimal sphere} \subsection{Strong stability} The Jacobi operator of the volume functional on a minimal submanifold is $\CJ = (\nabla^\perp)^*\nabla^\perp + \CR -\CA$. The concrete form of the zeroth order part is \begin{align*} (\CR-\CA)(V) &= \sum_{\mu,\nu}\left[ -\sum_{\ell} R_{\ell\mu\ell\nu} V^\mu - \sum_{\ell,k}h_{\mu\ell k}h_{\nu\ell k}V^\mu \right]e_\nu \end{align*} on a normal vector $V = \sum_\mu V^\mu e_\mu$. Here, $k,\ell$ are indices for the orthonormal frame of the tangential part, and $\mu,\nu$ are for the normal part. In \cite[Definition 3.1]{ref_TsaiW2}, a minimal submanifold is said to be strongly stable if $\CR - \CA$ is \emph{pointwise} positive definite. It is clear that strong stability implies strict stability, i.e.\ $\CJ$ is a positive operator. In \cite[Proposition 5.5]{ref_MW}, the minimal sphere $\Sm$ is shown to be strictly stable. We show that it is indeed strongly stable. \begin{prop} \label{prop_sstable} The minimal sphere $\Sm$ in the Atiyah--Hitchin manifold is strongly stable. \end{prop} \begin{proof}[Proof 1: direct computation] Note that the indices $2,3$ are tangential directions, and $0,1$ are normal directions. 
According to \eqref{series1} and \eqref{conn1}, the components of its second fundamental form are \begin{align*} \frac{1}{2m} = -h_{022} = h_{033} = h_{123} = h_{132} \qquad\text{and}\qquad 0 = h_{023} = h_{032} = h_{122} = h_{133} ~. \end{align*} In \cite[Remark on p.37]{ref_AH}, Atiyah and Hitchin showed by representation theory that $\Sm$ is not totally geodesic. By plugging \eqref{series1} into \eqref{curvf}, \begin{align} \kp(a,b,c) = -\frac{3}{2m^2} \qquad\text{and}\qquad \kp(b,c,a) = \kp(c,a,b) = \frac{3}{4m^2} \quad\text{ at } r=0 ~. \label{curv0} \end{align} With \eqref{curv}, the components of $\CR-\CA$ are as follows. \begin{align*} -\sum_{j=2}^3 R_{j0j0} - \sum_{j,k=2}^3 h_{0jk}h_{0jk} &= R_{2002} + R_{3003} - (h_{022})^2 - (h_{033})^2 = \frac{1}{m^2} ~, \\ -\sum_{j=2}^3 R_{j1j1} - \sum_{j,k=2}^3 h_{1jk}h_{1jk} &= R_{2112} + R_{3113} - (h_{123})^2 - (h_{132})^2 = \frac{1}{m^2} ~, \end{align*} and the off-diagonal part vanishes. Clearly, $\CR-\CA$ is positive definite. \end{proof} There is also a calculation-free argument; here is a brief explanation. \begin{proof}[Proof 2: special Lagrangian type argument] Although the minimal sphere can never be (special) Lagrangian, the argument in \cite[Appendix A.1]{ref_TsaiW2} works as well. Note that for any $p\in\Sm$, $T_p\Sm$ is a special Lagrangian plane with respect to \emph{some} Calabi--Yau structure. For instance, when $|z|=1$, $T_p\Sm$ is Lagrangian with respect to \eqref{kform}. Its phase with respect to \eqref{hvform} is basically $\arg z$. The computation in \cite[Appendix A.1]{ref_TsaiW2} is tensorial. By using the complex structure determined by the holomorphic volume form \eqref{hvform}, the computation works at \emph{any} point with $|z|=1$. Since $\Sm$ is the twistor sphere, the argument works everywhere on $\Sm$. It follows that $\CR-\CA$, as a linear map on the normal bundle, is a multiple of the identity map.
\end{proof} By applying \cite[Theorem 6.2]{ref_TsaiW2}, the minimal sphere $\Sm$ is $\CC^1$ stable under the mean curvature flow. \begin{cor} There exists an $\vep>0$ which has the following significance. For any surface $\Gm$ satisfying $\sup_{q\in\Gm} \left( r^2 (q) + (1+(\om^2\w\om^3)(T_q\Gm)) \right) < \vep $, the mean curvature flow $\Gm_t$ with $\Gm_0 = \Gm$ exists for all time, and converges smoothly to $\Sm$ as $t\to\infty$. \end{cor} Here $r$ is considered to be the distance function to the zero section and the $2$-form $-\om^2\w\om^3$ is parallel along geodesics normal to $\Sm$ by \eqref{conn1}. \subsection{Estimates on the derivatives} In order to establish global properties of the minimal sphere, a better understanding of the coefficient functions is needed. \begin{lem} \label{estimate} The coefficient functions $a$, $b$, and $c$ of the Atiyah--Hitchin metric \eqref{metric1} obey the following relation. \begin{align*} 1 > \frac{r\,a'(r)}{a(r)} > \frac{r\,c'(r)}{c(r)} > \frac{-r\,b'(r)}{b(r)} > 0 \end{align*} for any $r>0$. \end{lem} \begin{proof} This lemma can be proved easily by using the theory established in \cite[ch.9 and 10]{ref_AH}. The variable $\xi$ in \cite{ref_AH} is the geodesic distance $r$ here. The key ingredients are summarized as follows. Atiyah and Hitchin introduced the functions \begin{align*} x = \frac{a}{c} \qquad\text{and}\qquad y = \frac{b}{c} ~. \end{align*} Both $x$ and $y$ can serve as the radial coordinate. In fact, they mainly use $x$ as the variable in \cite[ch.10]{ref_AH}. At $r=0$, $(x(0),y(0)) = (0,-1)$, and $(x(r),y(r))\to(1,0)$ as $r\to\infty$. That is to say, the domain of $x$ is $[0,1)$; the domain of $y$ is $[-1,0)$. When $r>0$, the curve $(x(r),y(r))$ lies entirely in the region \begin{align} y < -1+x ~,\quad 0<x<1 ~,\quad -1<y<0 ~. \label{key} \end{align} The bound $y \leq -1+x$ is given by \cite[Lemma 10.1]{ref_AH}. From its proof, it is not hard to see that the equality only happens at $(x,y)=(0,-1)$, or $r=0$.
It is also illustrative to give their expansions \eqref{series1} near $r = 0$, \begin{align*} x(r) = \frac{2}{m}r - \frac{1}{m^2}r^2 + \CO(r^3) \qquad\text{and}\qquad y(r) = -1 + \frac{1}{m}r - \frac{1}{2m^2}r^2 + \CO(r^3) ~. \end{align*} The equations \eqref{ODE1} become \begin{align*} a' &= \frac{x^2-(y-1)^2}{2y} ~, & b' &= \frac{y^2-(x-1)^2}{2x} ~, & c' &= \frac{1-(x-y)^2}{2xy} ~. \end{align*} The derivatives of $x(r)$ and $y(r)$ are \begin{align*} x' = - \frac{1}{c}\,\frac{(1-x)(1+x-y)}{y} \qquad\text{and}\qquad y' = - \frac{1}{c}\,\frac{(1-y)(1+y-x)}{x} ~. \end{align*} It follows from \eqref{key} that $b'>0$ when $r>0$. We compute \begin{align*} \frac{c'}{c} + \frac{b'}{b} &= \frac{1}{c}\,\frac{1-x+y}{y} ~, \\ \frac{a'}{a} - \frac{c'}{c} &= \frac{x'}{x} = \frac{1}{c}\,\frac{(1-x)(1+x-y)}{x(-y)} ~. \end{align*} According to \eqref{key}, both quantities are positive when $r>0$. It remains to show that $a\geq r\,a'$. With \eqref{series1}, $\frac{a}{a'} = r + \frac{1}{2m^2}r^3 + \CO(r^4)$ near $r = 0$. Hence, $\frac{a}{a'} > r$ for sufficiently small $r$. The derivative of $\frac{a}{a'} - r$ in $r$ is $\frac{a}{(a')^2}(-a'')$. By invoking \cite[Lemma 10.10]{ref_AH}, $a''<0$ when $r>0$. We will say something about their proof momentarily. To sum up, $\frac{a}{a'}-r$ is monotone increasing in $r$, and is positive for small $r$. Therefore, it must be positive for any $r>0$. This finishes the proof of this lemma. \end{proof} It follows from \eqref{curv} that \begin{align*} a'' &= a\,\kp(a,b,c) \\ &= \frac{1}{c}\,\frac{2x^4 - x^2(y-1)^2 - x^3(1+y) + x(1-y)^2(1+y) - (1-y)^2(1+y)^2}{2xy^2} \end{align*} where $\kp$ is defined by \eqref{curvf}. One can study the maximum of the numerator over the closure of \eqref{key}. It turns out that the maximum is $0$, and is achieved only at $(0,-1)$ and $(1,0)$. The argument of \cite[Lemma 10.10]{ref_AH} is cleverer. 
They work with \begin{align*} a'' &= \left( \frac{x}{y} + \frac{1-x^2-y^2}{2y^2}\,\frac{\dd y}{\dd x} \right) \frac{\dd x}{\dd r} ~, \end{align*} and analyze it according to whether $\frac{\dd y}{\dd x}\leq 1$ or not. The sign of $b''$ is examined in \cite[Lemma 10.19]{ref_AH}; it is negative when $r>0$. For $c''$, it is positive for small $r$, and negative for large $r$. See \cite[last paragraph on p.99]{ref_AH}. Note that the notion of convexity/concavity in \cite{ref_AH} is different from the usual one. These convexity/concavity properties are directly related to the geometry of the surfaces mentioned in section \ref{sec_surface}. \subsection{Calibration} We show that the minimal sphere is actually a minimizer of the area functional. According to J.~Lotay, this was known to M.~Micallef. The theory of calibration can be found in \cite[\S II.4]{ref_HL}. \begin{prop} The minimal sphere $\Sm$ in the Atiyah--Hitchin manifold is a calibrated submanifold. Therefore, it minimizes the area within its homology class. \end{prop} \begin{proof} The only task is to construct a closed $2$-form of comass one, whose restriction to $\Sm$ coincides with its area form. Take $\Ta = m^2\,\sm^2\w\sm^3 = \frac{-m^2}{bc}\om^2\w\om^3$. From the expression $m^2\,\sm^2\w\sm^3$, it is easy to see that $\dd\Ta = 0$ and $\Ta|_\Sm = {\rm dvol}_\Sm$. It remains to check the comass one condition. According to Lemma \ref{estimate}, $(bc)'<0$ when $r>0$. It follows that $bc \leq -m^2$ for any $r$, which implies that $\Ta$ has comass one. \end{proof} \subsection{Two-convexity of the distance function} In this section, we apply the barrier function argument to prove the rigidity of the minimal sphere in the Atiyah--Hitchin manifold. Here is a simple fact in linear algebra. \begin{lem} \label{linear} Let $Q$ be a symmetric matrix on $\BR^n$, with eigenvalues $\ld_n\geq\cdots\geq\ld_2\geq\ld_1$. Fix $k\in\{1,\cdots,n\}$.
Then, the minimum of \begin{align*} \left\{\, \tr_L(Q) ~\big|~ L\subset\BR^n \text{ is a vector subspace of dimension } k \,\right\} \end{align*} is exactly $\sum_{j=1}^k\ld_j$. \end{lem} \begin{proof} Regard the domain as the Stiefel manifold. Suppose that the extremum is achieved by $L$, which has orthonormal basis $\{\bv_1,\cdots,\bv_k\}$. The Lagrange multiplier equation says that $Q\bv_j\in L$ for any $j\in\{1,\ldots,k\}$. That is to say, $L$ is invariant under $Q$. This lemma follows from the standard properties of symmetric matrices. \end{proof} \begin{defn} On a Riemannian manifold, a smooth function $f$ is said to be $k$-convex at a point $p$ if the sum of the smallest $k$ eigenvalues of $\Hess(f)|_p$ is positive. \end{defn} It turns out that there is a naturally defined (semi-) two-convex function on the Atiyah--Hitchin manifold. \begin{thm} \label{thm_unique} In the Atiyah--Hitchin manifold $M$, the surface $\Sm$ is the only compact minimal 2-surface. Also, there exists no compact, three-dimensional, minimal submanifold. \end{thm} \begin{proof} Consider the square of the distance function to $\Sm$ with respect to \eqref{metric1}. By \eqref{conn1}, \begin{align*} \dd r^2 &= -2r\,\om^0 ~, \\ \Rightarrow\quad \Hess(r^2) &= 2\left( \om^0\ot\om^0 + r\frac{a'}{a}\,\om^1\ot\om^1 + r\frac{b'}{b}\,\om^2\ot\om^2 + r\frac{c'}{c}\,\om^3\ot\om^3 \right) ~. \end{align*} Lemma \ref{estimate} and Lemma \ref{linear} imply that $r^2$ is two-convex when $r>0$. Another way to derive the two-convexity of $r^2$, albeit only in a tubular neighborhood of $\Sigma$, is to apply \cite[Proposition 4.1]{ref_TsaiW2}, according to which strong stability of $\Sm$ implies that there exist positive constants $\vep$ and $\dt$ such that \begin{align*} \tr_L\Hess(r^2) &\geq \dt\,r^2 \end{align*} at any point $p$ with $r\in[0,\vep)$, and any two-plane $L\subset T_pM$.
This can also be proved directly by using the expansions \eqref{series1}, and switching back to the rectangular coordinate for the fibers. The rest of the argument is almost the same as that for \cite[Lemma 5.1]{ref_TsaiW}. Suppose that $N\subset M$ is a compact minimal submanifold with dimension no less than $2$. It follows from the semi-two-convexity of $r^2$ that \begin{align*} \Delta^N(r^2|_N) = \tr_N( \Hess(r^2) ) \geq 0 ~. \end{align*} Appealing to the maximum principle, $r^2$ must be a constant on $N$. Then, $\tr_N\Hess(r^2)$ vanishes. This occurs only when $r^2$ vanishes on $N$. \end{proof} In view of the recent work of \cite{ref_LS}, the uniqueness theorem extends to the weaker setting of stationary integral varifolds. Here are some further remarks: \begin{enumerate} \item For the examples studied in \cite{ref_TsaiW}, the minimal submanifolds are totally geodesic and the corresponding $r^2$ is (semi-one-) convex. It leads to a stronger rigidity phenomenon which does not hold true in the Atiyah--Hitchin manifold. \item For small $r$, the series expansion of $\Hess(r^2)$ is derived for a general minimal submanifold in \cite[Proposition 4.1]{ref_TsaiW2}. The second fundamental form appears as the coefficients of the linear term. Unless the submanifold is totally geodesic, $\Hess(r^2)$ cannot be semi-positive definite for small $r$. \item Bates and Montgomery \cite{ref_BM} proved that the Atiyah--Hitchin manifold admits closed geodesics, and thus cannot support any convex function. \item It can be shown that those examples of closed minimal $2$-spheres in hyper-K\"ahler K3 surfaces constructed by Foscolo \cite[Theorem 7.4]{ref_Fo} are indeed strongly stable. The distance function to such a minimal $2$-surface is locally two-convex, and thus a local uniqueness theorem can be proved for these examples. To say more, Foscolo proved that the minimal sphere still obeys \eqref{qtn}.
To validate Proof 2 of Proposition \ref{prop_sstable}, it remains to check that the minimal sphere has positive Gaussian curvature. When the gluing parameter in \cite{ref_Fo} is sufficiently small, one can argue by continuity that the Gaussian curvature is still positive. \item Dancer \cite{ref_Dancer} constructed non-trivial deformations of the hyper-K\"ahler metric on $M$. Recently, G.~Chen and X.~Chen \cite{ref_CC} proved that the Atiyah--Hitchin manifold and Dancer's deformations are precisely the ALF-$D_1$ manifolds. When the deformation parameter is small, it can be shown that the minimal $2$-sphere persists, and is still strongly stable and locally unique. It would be interesting to investigate the global uniqueness of the minimal $2$-sphere in Dancer's deformations. \item The ALF-$D_0$ manifold is the quotient of $M$ by an isometric $\BZ/2$-action. The image of $\Sm$ under the quotient map is a minimal $\BR\mathbb{P}^2$. Since the $\BZ/2$-action is isometric, the corresponding statements of Proposition \ref{prop_sstable} and Theorem \ref{thm_unique} still hold true. Namely, the minimal $\BR\mathbb{P}^2$ is strongly stable, and is globally unique. \end{enumerate} \begin{bibdiv} \begin{biblist} \bib{ref_AH1}{article}{ author={Atiyah, Michael}, author={Hitchin, Nigel}, title={Low energy scattering of nonabelian monopoles}, journal={Phys. Lett. A}, volume={107}, date={1985}, number={1}, pages={21--25}, } \bib{ref_AH}{book}{ author={Atiyah, Michael}, author={Hitchin, Nigel}, title={The geometry and dynamics of magnetic monopoles}, series={M. B. Porter Lectures}, publisher={Princeton University Press, Princeton, NJ}, date={1988}, pages={viii+134}, } \bib{ref_BM}{article}{ author={Bates, Larry}, author={Montgomery, Richard}, title={Closed geodesics on the space of stable two-monopoles}, journal={Comm. Math.
Phys.}, volume={118}, date={1988}, number={4}, pages={635--640}, } \bib{ref_CC}{article}{ author={Chen, Gao}, author={Chen, Xiuxiong}, title={Gravitational instantons with faster than quadratic curvature decay (II)}, eprint={arXiv:1508.07908}, url={https://arxiv.org/abs/1508.07908}, } \bib{ref_Dancer}{article}{ author={Dancer, Andrew S.}, title={Nahm's equations and hyper-K\"ahler geometry}, journal={Comm. Math. Phys.}, volume={158}, date={1993}, number={3}, pages={545--568}, } \bib{ref_Fo}{article}{ author={Foscolo, Lorenzo}, title={ALF gravitational instantons and collapsing Ricci-flat metrics on the K3 surface}, eprint={arXiv:1603.06315}, url={https://arxiv.org/abs/1603.06315}, status={to appear in J. Differential Geom.}, } \bib{ref_GM}{article}{ author={Gibbons, G. W.}, author={Manton, N. S.}, title={Classical and quantum dynamics of BPS monopoles}, journal={Nuclear Phys. B}, volume={274}, date={1986}, number={1}, pages={183--224}, } \bib{ref_HL}{article}{ author={Harvey, Reese}, author={Lawson, H. Blaine, Jr.}, title={Calibrated geometries}, journal={Acta Math.}, volume={148}, date={1982}, pages={47--157}, } \bib{ref_LS}{article}{ author={Lotay, Jason D.}, author={Schulze, Felix}, title={Consequences of strong stability of minimal submanifolds}, eprint={arXiv:1802.03941}, url={https://arxiv.org/abs/1802.03941}, status={to appear in Int. Math. Res. Notices}, } \bib{ref_MW}{article}{ author={Micallef, Mario J.}, author={Wolfson, Jon G.}, title={The second variation of area of minimal surfaces in four-manifolds}, journal={Math. Ann.}, volume={295}, date={1993}, number={2}, pages={245--267}, } \bib{ref_TsaiW}{article}{ author={Tsai, Chung-Jun}, author={Wang, Mu-Tao}, title={The stability of the mean curvature flow in manifolds of special holonomy}, journal={J.
Differential Geom.}, volume={108}, date={2018}, number={3}, pages={531--569}, } \bib{ref_TsaiW2}{article}{ author={Tsai, Chung-Jun}, author={Wang, Mu-Tao}, title={A strong stability condition on minimal submanifolds and its implications}, eprint={arXiv:1710.00433}, url={http://arxiv.org/abs/1710.00433}, } \end{biblist} \end{bibdiv} \end{document}
In the centre figures this woman, who turned one way, while the rows of men moved in a contrary direction, joining her song at intervals and dancing with the wildest gestures. Myself and many about me, feared this was a sort of war prelude to an attack on the inhabitants. However, it soon became evident that the old lady, having taken a little more than was her usual habit, was merely displaying her Bacchanalian joy at its effect, which the young Indians encouraged for the sake of a frolic.

Phillips' accounts of his time touring America are filled with similarly vivid and extraordinary anecdotes. I'll be blogging about more of these in the future!
By buying gas on Fridays you're wasting money (explanation to come on Wednesday, July 4). Join True North No Gas Fridays and hit back at Big Oil price gouging. When enough drivers make the point that they're "mad as hell and won't take it anymore" Governments will act. You can count on it. Protect yourself with True North No Gas Fridays.

The best version of our national anthem that I've ever heard was produced by a school choir at First Avenue Public School in Ottawa. I've never heard it live but every time I hear a recording made by these elementary level children I feel a thrill of pride. It took an enormous amount of talent by the music teacher to bring out such quality — because I think we've got one of the dullest national anthems in the world. Now hold the umbrage. I've got a couple of words to add that will transform the anthem and cause you to forgive my audacity. — 672 words.

A simple injection may boost failing hearts and improve the quality of life for millions of people with heart disease. The new treatment creates new blood vessels from stem cells that replace damaged ones in the heart. — 219 words.

Nissan CEO Carlos Ghosn said Wednesday his company is working hard to develop the next generation of smaller, lighter auto batteries — a technology that holds promise for electric cars as well as for hybrids. — 613 words.

Palestinian children line up with buckets to take cooked food back to their extended families. HAWARA, West Bank: A new code was born here overnight. No one, it seems, belongs to Hamas in the West Bank anymore. They are all now "Islamists," a word that neatly, and maybe more safely, shears the political from the religious amid the uncertainty of a Palestinian people newly divided. — 1,321 words.

• Opium is the main ingredient for heroin. — 634 words.
SINGAPORE (Reuters) — The property arm of the Government Investment Corp of Singapore is looking to move into Russia and Turkey but is "extremely careful" in the London office market where prices have soared. — 517 words.

Anyone who follows technology or military affairs has heard the predictions for more than a decade: Cyberwar is coming. — 977 words.

MOSCOW — In a bizarre dispute hearkening back to the rhetoric of the Stalin-era purges, the Communist Party's webmaster has been accused by fellow party members of hatching a Trotskyist conspiracy. — 392 words.

Pavel Aptekar is an historian and a commentator for Vedomosti, where this essay appeared. June 12 was the 70th anniversary of the execution of the members of the "Red Army military-fascist conspiracy" — Marshal Mikhail Tukhachevsky, army commanders Ieronim Uborevich, Avgust Kork, Yona Yakir and other Soviet military leaders. In many ways it is this that people think about when they hear the phrase the "Great Terror." — 1,221 words.

In the early 1980s, with negotiations on Hong Kong's reversion to Chinese sovereignty in 1997 under way, Swire Group, one of the oldest and biggest British conglomerates here, made a crucial choice. It decided its future lay with China. — 1,340 words.

There is no end to the magnetic attraction of secrecy on government officials. So it is a healthy sign of democratic self-correction when the code of secrecy is set aside, as it was Tuesday, June 26, when, at the behest of CIA Director Michael Hayden, the agency released 693 pages of declassified files on CIA abuses from the 1950s to the 1970s. Among these were a plot to assassinate Fidel Castro, subjecting unwitting subjects to LSD and the wiretapping of journalists. — 396 words.

MOSCOW — A man formerly held in the U.S. detention facility in Guantanamo Bay, Cuba, was killed Wednesday in a shootout with security agents in Kabardino-Balkariya, the Federal Security Service said. — 235 words.
It was a sunny day in London, and newly elected Prime Minister Tony Blair was sharing a press conference with his American friend and mentor President Bill Clinton. The future of progressivism was the topic of the day, and the two leaders bathed each other in compliments. — 801 words.

Distrust of the United States has intensified across the world, but overall views of America remain very or somewhat favorable among majorities in 25 of 47 countries surveyed in a major international opinion poll, the Pew Research Center reported Wednesday. — 1,147 words.

It's impossible to satisfy "Rage Boy" and his ilk. It's stupid to try. If you follow the link, you will be treated to some scenes from the strenuous life of a professional Muslim protester in the Kashmiri city of Srinagar. Over the last few years, there have been innumerable opportunities for him to demonstrate his piety and his pissed-offness. And the cameras have been there for him every time. — 1,016 words.

Keep going straight west; you can't miss it. The first persons to drive across Canada were Thomas Wilby and F.V. Haney in 1912. Although some parts of the country didn't even have roads then, the two made the trip in 52 days.
Welcome home to the beautiful tree lined streets of West Garden Grove. Located one block from Seal Beach this home has elegance, upgrades and charm. Approaching the home you are greeted with a rare pull through double apron driveway complete with fresh hardscaping, custom concrete, stacked stone and shimmering flagstone on the threshold. The stunning curb appeal continues with an inviting "Dutch" front door that allows for half the door to open and let in the coastal breeze. Once in the home, the amazement continues with large "wood-look" tile on a trendy herringbone pattern throughout the open concept living room and kitchen. Over $100,000 spent in upgrades on this gorgeous home. Custom cabinetry adorns the kitchen with tons of storage, pull out shelves and large pantry. Stainless steel appliances, modern light fixtures, under cabinet lighting and quartz countertops add to the style and sophistication of this move-in ready home. No detail has been missed, even the garage has been finished with full drywall, additional storage in the rafters with a pull down ladder, epoxy floor, custom garage cabinets with quartz counters. Both bathrooms have been meticulously remodeled and feature Tommy Bahama tile. The upgrades continue with smooth ceilings, recessed lighting, crown molding, dual pane vinyl windows, and forced heat and Air Conditioning. Located within the top rated GGUSD school district, close to shopping and freeway close for the commute, this home won't last long!
LAHORE - Foreign Minister Shah Mehmood Qureshi on Monday asked India not to delay the opening of the Kartarpur Corridor and to settle all differences through bilateral dialogue.

Addressing a press conference flanked by Punjab Governor Chaudhry Mohammad Sarwar here at the Governor House, he said that despite tension with India in the wake of the Pulwama incident, Pakistan participated in the talks on the Kartarpur Corridor. Pakistan wanted to move forward and improve its relations with India, he added. India, he said, should not shy away from the Kartarpur Corridor talks scheduled for April 2 (Tuesday). Pakistan was more than willing to address all the Indian reservations, he added.

He said Pakistan had adopted a principled stance that the Kartarpur Corridor should be opened for the Sikh community residing all over the world. Pakistan desired to open more border crossings for pilgrims from India, he added.

The foreign minister said Finance Minister Asad Umar would succeed in making the economy stable soon. The country was on the verge of bankruptcy when Prime Minister Imran Khan took over the office and now the national economy was recovering due to timely financial assistance received from the friendly and brotherly countries, he added. Qureshi said Asad Umar had managed to get concessions from the International Monetary Fund (IMF), which was putting up tough conditions.

To a question, the minister dispelled the impression that the 18th Amendment was being done away with. The Federal Government had asked the provinces to reconsider their financial needs, he added. The Prime Minister wanted the provinces to give viable financial resources to the Centre in the 7th National Finance Commission Award, he added.

About PTI leader Jahangir Khan Tareen, he said he should continue to serve the nation, but in 'his personal capacity'.
Qureshi advised Pakistan Peoples Party Chairman Bilawal Bhutto Zardari to get rid of the corrupt elements in the party if he desired to revive his party in Punjab. Yusuf Raza Gilani was rejected by the people of Multan for his alleged corruption, while his two sons, also candidates, had lost the 2018 general elections and 2019 by-polls, he added.

Recalling his own close ties with Shaheed Benazir Bhutto while being in the PPP, Qureshi said Bilawal should not allow Gilani to sit on the stage and address the workers on the occasion of the death anniversary of PPP founder Shaheed Zulfikar Ali Bhutto. Gilani, along with many others, was responsible for the fall of the PPP in Punjab, he added. With such corrupt people on his side, Bilawal should not dream of a train march in Punjab like the one he had undertaken in Sindh, he said.

The foreign minister asked Bilawal Bhutto Zardari to shun prejudices and avoid doing politics on non-issues. The Benazir Income Support Programme (BISP) had a limited scope, while Prime Minister Imran Khan's 'Ehsas' programme was wider in scope, covering more areas, he added. He said Bilawal Bhutto might ask for another survey if he was not satisfied, but should not oppose a programme for political gains.

Qureshi said the victory of the PTI candidate in the PP-218 Multan by-polls was proof that the masses had rejected the politics of self-interest by the PPP and the Pakistan Muslim League-Nawaz (PML-N). Both parties had launched a joint candidate to defeat the PTI in the by-polls, but they failed. He said all the opposition parties had become politically and ideologically bankrupt. Once at daggers drawn, they had joined hands against the PTI, but they failed in their mission as the masses gave their verdict in favour of the PTI candidate. He said the PTI had gained voters' support despite negative propaganda by the opposition parties.
The foreign minister also condoled the death of Sardar Fateh Buzdar, three-time Member Punjab Assembly (MPA) and father of Punjab Chief Minister Sardar Usman Buzdar. In the beginning, Governor Punjab felicitated the foreign minister on the victory in the Multan by-polls.
# When does a Linear regression stop being a good fit?

I'm testing a couple of hypotheses and I have a non normally distributed continuous response variable (the residuals are not normally distributed either).

I have been given mixed suggestions on what model to use. Some say to use a linear model regardless of the distribution, whereas others say that a Gamma model with a log link is a better fit.

I have 4 independent variables, 3 continuous and 1 nominal.

Should I go for a Gamma model? I haven't seen many papers using the Gamma model so I am wondering whether it has some significant benefits over the simple linear models.

Here are the scatterplots of the residuals:

• Distribution is not as important as you make it sound. The nature of the relationship is the most important thing: the regressors, the functional form of the variables, etc. Sort out what matters first – Aksakal Jun 23 '20 at 14:37
• If you are considering a linear model then it's $y=X\beta+\varepsilon$; all components are important, but the main concern is whether $X\beta$ captures the main relationships, i.e. whether X has relevant variables and y depends on them linearly. If you're certain this part is taken care of, then you move to looking at other stuff like the distribution of $\varepsilon$; it's usually not so important anyway – Aksakal Jun 23 '20 at 15:00
• The coefficient of determination is one thing you could look at: how close to $1$ is it? But the other thing, perhaps more importantly, is what you've already done: plot the residuals. Do you see any pattern in the residuals? If so, linear regression might be missing something important. If the residuals look more random, that would be a clue that your linear regression is doing reasonably well. – Adrian Keister Jun 23 '20 at 15:37
• The error in the estimate of the (conditional) mean will approach a normal distribution centered around the true conditional mean (independent of the underlying error distribution as long as it has finite variance). Linear regression will do sufficiently well if you have many points. More fancy distributions can improve the estimates; however, this is mostly when you have few points or heteroscedasticity (and the heteroscedasticity would actually not matter much if, as Aksakal mentions, your underlying deterministic relationship is right). – Sextus Empiricus Jun 24 '20 at 6:01
• But possibly you wish to not describe/estimate the mean and instead some other population parameter. For instance, in your case with several dominant upwards outliers, I can imagine that the median might be more meaningful. It will depend on what relationship you wish to model. If you have a mechanistic model or some intuition about the population parameters, then you may have a preference for using the more meaningful parameters. An example of how it can change: the mean of a population will be different after a logarithmic transformation. So $\mu \neq e^{\overline{\log(x)}}$ – Sextus Empiricus Jun 24 '20 at 6:08
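The residual-plot and R² checks suggested in the comments can be sketched in a few lines. This is a generic illustration with synthetic data; none of it comes from the original poster's dataset:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
x = rng.uniform(0, 10, n)
X = np.column_stack([np.ones(n), x])                   # design matrix with intercept
y = X @ np.array([2.0, 1.5]) + rng.normal(0, 1.0, n)   # linear truth plus noise

# Ordinary least squares fit.
beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta_hat

# Coefficient of determination: how close to 1 is it?
r2 = 1 - np.sum(resid**2) / np.sum((y - y.mean()) ** 2)

# A crude residual-pattern check: if the linear form is adequate, the
# residual mean should be about the same in the lower and upper halves of x.
half_gap = abs(resid[x < 5].mean() - resid[x >= 5].mean())
print(round(r2, 2), half_gap < 0.5)
```

When the true relationship really is linear, as here, the residuals show no trend in x and R² tracks the signal-to-noise ratio; a visible pattern in the residuals is what would argue for a different functional form or model family.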
Q: Wrong order of execution async handler

I have a small app that executes requests to an external service.

```dart
final app = Alfred();

app.post('/ctr', (req, res) async {
  // complex tender request
  var data = await req.body;
  await complexTendexAPIRequest(data as Map<String, dynamic>);
  print('Hello World');
  await res.json({'data': 'ok'});
});
```

Handler code:

```dart
complexTendexAPIRequest(Map<String, dynamic> data) async {
  print('Request: $data');
  try {
    final response = await http.post(Uri.parse(COMPLEX_URL),
        headers: {'Content-Type': 'application/json', 'Authorization': 'bearer $ACCESS_TOKEN'},
        body: json.encode(data));
    if (response.statusCode == 200) {
      var res = json.decode(response.body);
      int latestId = res['id'];
      String url = 'https://api.ru/v2/complex/status?id=$latestId';
      stdout.write('Waiting for "complete" status from API: ');
      Timer.periodic(Duration(seconds: 1), (timer) async {
        final response = await http.get(Uri.parse(url),
            headers: {'Content-Type': 'application/json', 'Authorization': 'bearer $ACCESS_TOKEN'});
        var data = json.decode(response.body);
        if (data['status'] == 'completed') {
          timer.cancel();
          stdout.write('[DONE]');
          stdout.write('\nFetching result: ');
          String url = "https://api.ru/v2/complex/results?id=$latestId";
          final response = await http.get(Uri.parse(url),
              headers: {'Content-Type': 'application/json', 'Authorization': 'bearer $ACCESS_TOKEN'});
          stdout.write('[DONE]');
          var data = prettyJson(json.decode(response.body));
          await File('result.json').writeAsString(data.toString());
          print("\nCreating dump of result: [DONE]");
        }
      });
    } else {
      print('[ERROR] Wrong status code for complex request. StatusCode: ${response.statusCode}');
    }
  } on SocketException catch (e) {
    print('No Internet connection: $e');
  } on TimeoutException catch (e) {
    print('TenderAPI Timeout: $e');
  } on Exception catch (e) {
    print('Some unknown Exception: $e');
  }
}
```

But the output is very strange; it looks like it does not wait for the completion of complexTendexAPIRequest and goes forward:

```
Waiting for "complete" status from API: Hello World
[DONE]
Fetching result: [DONE]
Creating dump of result: [DONE]
```

But it should be:

```
Waiting for "complete" status from API: [DONE]
Fetching result: [DONE]
Creating dump of result: [DONE]
Hello World
```

I suppose that the reason can be in Timer.periodic, but how do I fix it so that, in the expected order,

```dart
print('Hello World');
await res.json({'data': 'ok'});
```

execute only after complexTendexAPIRequest has completed?

upd: I rewrote the code to a while loop: https://gist.github.com/bubnenkoff/fd6b4f0d7aeae7007680e7902fbdc1e9 — it seems that it's OK.

Alfred: https://github.com/rknell/alfred

A: The problem is the Timer.periodic, as others have pointed out. You do:

```dart
Timer.periodic(Duration(seconds: 1), (timer) async {
  // do something ...
});
```

That sets up a timer, then immediately continues execution. The timer triggers every second, calls the async callback (which returns a future that no-one ever waits for) and which does something that just might take longer than a second.

You can convert this to a normal loop, basically:

```dart
while (true) {
  // do something ...
  if (data['status'] == 'completed') {
    // ...
    break;
  } else {
    // You can choose your own delay here, doesn't have
    // to be the same one every time.
    await Future.delayed(const Duration(seconds: 1));
  }
}
```

If you still want it to be timer driven, with fixed ticks, consider rewriting this as:

```dart
await for (var _ in Stream.periodic(const Duration(seconds: 1))) {
  // do something ...
  // Change `timer.cancel();` to a `break;` at the end of the block.
}
```

Here you create a stream which fires an event every second. Then you use an await for loop to wait for each stream event. If the thing you do inside the loop is asynchronous (does an await), then you are even ensured that the next stream event is delayed until the loop body is done, so you won't have two fetches running at the same time. And if the code throws, the error will be caught by the surrounding try/catch, which I assume was intended, rather than being an uncaught error ending up in the Future that no-one listens to.

If you want to retain the Timer.periodic code, you can, but you need to do something extra to synchronize it with the async/await code around it (which only really understands futures and streams, not timers). For example:

```dart
var timerDone = Completer();
Timer.periodic(const Duration(seconds: 1), (timer) async {
  try {
    // do something ...
    // Add `timerDone.complete();` next to `timer.cancel()`.
  } catch (e, s) {
    timer.cancel();
    timerDone.completeError(e, s);
  }
});
await timerDone.future;
```

This code uses a Completer to complete a future and effectively bridge the gap between timers and futures that can be awaited (one of the listed uses of Completer in the documentation). You may still risk the timer running concurrently if one step takes longer than a second. Also, you can possibly use the retry package if it's the same thing you want to retry until it works.
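The core of the accepted fix (replace the fire-and-forget periodic timer with a loop the caller actually awaits) is language-agnostic. As a minimal analogue in Python's asyncio, where the poll function below is a made-up stand-in for the HTTP status request, not part of the original code:

```python
import asyncio

async def poll_until_complete(poll, interval=0.01, max_tries=100):
    """Await each poll result; return only once the status is 'completed'.

    Unlike a fire-and-forget periodic timer, the caller's `await` on this
    coroutine really does block until the polling loop has finished.
    """
    for _ in range(max_tries):
        status = await poll()
        if status == "completed":
            return status
        await asyncio.sleep(interval)  # the delay between polls, like Future.delayed
    raise TimeoutError("status never reached 'completed'")

async def main():
    calls = {"n": 0}

    async def fake_poll():  # hypothetical stand-in for the HTTP status endpoint
        calls["n"] += 1
        return "completed" if calls["n"] >= 3 else "pending"

    status = await poll_until_complete(fake_poll)
    # This point is reached strictly after polling is done: the desired ordering.
    return status, calls["n"]

status, n_polls = asyncio.run(main())
print(status, n_polls)
```

Because the loop awaits both the poll and the sleep, the next iteration cannot start while the previous one is still in flight, which also rules out the overlapping-fetch problem the answer mentions.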
Q: MySQL Select rows with same id but different multiple values in another column

Maybe my question will be silly, but I'm new to MySQL.

I have a table with user_id and some interest:

```
+------+------+
|userID|Inter |
+------+------+
|1     |sport |
|2     |it    |
|3     |game  |
|1     |it    |
|1     |game  |
|3     |it    |
|3     |sport |
+------+------+
```

The amount of interests can be huge (let's say 20 or whatever).

Now I need to find all userIDs that have the interests it, game, sport. Of course a simple AND won't work, because the values are in different rows.

So my question is how to do it, so that the output will be:

```
+------+
|userID|
+------+
|1     |
|3     |
+------+
```

Thank you.

A: You can get it by using GROUP BY and COUNT(*):

```sql
create table users (user_id int, interest varchar(20));

insert into users values(1, 'sport');
insert into users values(2, 'it');
insert into users values(3, 'game');
insert into users values(1, 'it');
insert into users values(1, 'game');
insert into users values(3, 'it');
insert into users values(3, 'sport');

SELECT user_id
FROM users
WHERE interest IN ('game', 'it', 'sport')
GROUP BY user_id
HAVING count(*) = 3;
```

```
| user_id |
| ------: |
|       1 |
|       3 |
```

dbfiddle here
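The GROUP BY / HAVING approach can be sanity-checked with an in-memory SQLite database (SQLite here is only a stand-in for MySQL; using COUNT(DISTINCT interest) is a slightly defensive variant of the answer's COUNT(*), in case a (user, interest) pair were ever inserted twice):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE users (user_id INT, interest VARCHAR(20));
    INSERT INTO users VALUES
        (1,'sport'), (2,'it'), (3,'game'),
        (1,'it'), (1,'game'), (3,'it'), (3,'sport');
""")

# Keep only users holding ALL three interests: restrict rows to the wanted
# set, then require the per-user count of distinct matches to equal its size.
wanted = ('game', 'it', 'sport')
rows = conn.execute("""
    SELECT user_id
    FROM users
    WHERE interest IN (?, ?, ?)
    GROUP BY user_id
    HAVING COUNT(DISTINCT interest) = ?
    ORDER BY user_id
""", (*wanted, len(wanted))).fetchall()
print(rows)  # [(1,), (3,)]
```

Passing the list length as a parameter means the same query shape keeps working when the set of required interests grows to 20 or more.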
The unrestricted earth fault protection uses a residually connected earth fault relay. It consists of three C.T.s, one in each phase. The secondary windings of the three C.T.s are connected in parallel, and the earth fault relay is connected across the secondaries so that it carries the residual current. The scheme is shown in Fig. 1.

When there is no fault, under normal conditions, the vector sum of the three line currents is zero. Hence the vector sum of the three secondary currents is also zero. The sum of the three currents is the residual current IRs, which is zero under normal conditions. The earth fault relay is connected in such a way that the residual current flows through the relay operating coil. Under normal conditions the residual current is zero, so the relay carries no current and is inoperative.

In the presence of an earth fault, however, the balance gets disturbed and the residual current IRs is no longer zero. If this current is more than the pickup value of the earth fault relay, the relay operates and opens the circuit breaker through the trip circuit.

In the scheme shown in Fig. 1, an earth fault at any location, near or away from the location of the C.T.s, can cause a residual current. Hence the protected zone is not definite. Such a scheme is therefore called unrestricted earth fault protection.
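The balance condition can be illustrated numerically (a minimal sketch, not part of the original scheme description, with the 100 A load, 50 A fault current, and 10 A pickup all chosen arbitrarily): under normal conditions the three phase currents are equal in magnitude and displaced by 120°, so their vector sum, the residual current, is zero; an earth fault adds current to one phase only and leaves a nonzero residual.

```python
import cmath
import math

def residual(phase_currents):
    """Vector sum of the three secondary currents: the residual current IRs."""
    return sum(phase_currents)

# Normal balanced conditions: 100 A per phase, 120 degrees apart.
balanced = [100 * cmath.exp(1j * math.radians(a)) for a in (0, -120, -240)]
i_rs = residual(balanced)       # ~0 A: relay coil carries no current, inoperative

# Earth fault on phase A: an extra 50 A of fault current in that phase alone.
faulted = [balanced[0] + 50, balanced[1], balanced[2]]
i_rs_fault = residual(faulted)  # magnitude ~50 A flows through the relay coil

pickup = 10.0                   # hypothetical relay pickup setting, in amperes
print(abs(i_rs) > pickup, abs(i_rs_fault) > pickup)  # False True
```

Any earth fault current that returns outside the three phases unbalances the sum in the same way, which is why the relay responds to a fault anywhere on the system, i.e. the protected zone is unrestricted.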
Christmas Joy and Gratitude from Sister Carla in Peru

From Sister Carla Harrison (Daughters of Our Lady of the Pieta) in Peru:

We have many plans, projects, and needs at this time, but today we must listen to the heart which tells us what is most urgent. At this time of the pandemic, our greatest need is to help save the most fragile who suffer and are suffering because they are in need of oxygen (because of Covid-19).

Heartfelt thanks to you! Because of your generosity, we have been able to buy and supply oxygen for the elderly and vulnerable in our retirement home, who urgently need this support. In recent months we have suffered greatly because the oxygen cylinders were not available due to the great demand and need here in Peru. Those that were available were 3 to 4 times the usual cost, but when it comes down to saving a soul, there was no questioning. Despite the cost, we were able to purchase 2 cylinders of oxygen, but when the patient is on 15 liters, not even a large cylinder lasts long, so trying to find places throughout Lima that would fill the tank meant large lines, long waits, and lots of expenses. We were also able to buy 2 oxygen concentrators, which help us avoid having to continuously fill the tanks and scramble to find where they are filling them… unfortunately the concentrators only reach 10 liters maximum (so they still didn't meet the needs to increase their oxygen saturation).

I have had you close in prayer and in gratitude, and still do, for allowing me to do God's work with love. May we continue to pray for and support one another during this hard time for us all.

Sr.
Carla

Mission Outreach Spotlight – Catholic Life, Published 2016

Donations for Sister Carla's mission can be sent to:
Daughters of the Pieta
Mission Office – Diocese of La Crosse
La Crosse, WI 54602-4004

The Propagation of the Faith – a Pontifical Mission Society

The Mission Office assists the faithful of the Diocese of La Crosse in carrying out Jesus' directive to his apostles before his ascension into heaven … to go and teach all nations, baptizing them in the name of the Father, the Son and the Holy Spirit. The Faith must be kept and guarded in our souls, but it must also be generously passed on to every person in the world. The Church must be missionary if she is to be the faithful spouse of Christ.

Pontifical Mission Societies

The Pontifical Mission Societies consist of the Society for the Propagation of the Faith, the Holy Childhood Association, the Society of St. Peter Apostle and the Missionary Union of Priests and Religious. The Pontifical Mission Societies have, as their primary purpose, the promotion of a universal missionary spirit – a spirit of prayer and sacrifice – among all baptized Catholics.

Fr. Joseph Walijewski Legacy Guild

Is an association of people dedicated to furthering the cause of the Servant of God, Father Joseph Walijewski, and providing continued support for the legacy he left behind. Padre Jose's spirituality is rooted in seeing Christ crucified in everyone he met; especially the orphans, the abandoned, the marginalized and the poor. Those devoted to Father Joseph Walijewski are encouraged to join the Guild and regularly say the prayer for his canonization, which is available online and on a special prayer card. Members are asked to pray for Father Walijewski's canonization, report favors received and assist in the advancement of the cause. To learn more about the life and works of Father Joe Walijewski and the cause for his Beatification and Canonization, please visit Father Joe's Guild.
Mission Receipts from PARISHES of the Diocese of La Crosse 2019-2020 Contributions of YOUTH from Parishes and Schools 2019-2020 What does the Mission Office do? Coordinates the Mission Cooperation Program for visiting missionaries in Diocesan parishes. Distributes Mass stipends to retired and missionary priests. Supports priests, sisters and lay persons in the Diocesan parish in Santa Cruz, Bolivia, and the Casa Hogar Juan Pablo II Orphanage in Lurin, Peru. Coordinates La Crosse Diocesan student Lenten Mite Box collections. Promotes the annual fundraising benefit dinner for Casa Hogar Orphanage. Coordinates and promotes the annual Monsignor Anthony P. Wagener Mission Awards. Appeals made through the World Mission Sunday – for the propagation of the Gospel in mission territories. Christmas and Lenten Appeals – for those in most need in the developing world. St. Peter Apostle – education and formation of men and women who have been called to serve the Lord as a priest or member of a religious order in a developing country. For more information go to: www.onefamilyinmission.org Casa Hogar Juan Pablo II ParishSoft WMS Report Catholic Life Articles Here are a few Catholic Life articles. To read the full article, click on the blue colored title. "There will always be poor people in the land – Therefore, I command you to be openhanded toward your brothers and toward the poor and needy …" Rev. Woody Pace, Director Marga Apel, Administrative Assistant
Grigory Dmitrievich Deev-Khomyakovsky (real surname Deev; 1888–1946) was a Russian and Soviet poet and teacher, and a public figure.

Biography

He was born into a peasant family and worked as a shepherd and farm laborer. At the age of six he learned to read and write from a local shoemaker. In 1899, after he finished the zemstvo school, his parents sent him to Moscow to be apprenticed in the shoemaking trade; he soon took a job at the Filippov bakery. In 1905 he entered a monastery as a novice, but was forced to return to Moscow to help his parents, who were in extreme need.

In 1905 he took an active part in the December events on Presnya and joined the RSDLP (Bolsheviks). He worked in a printing house while studying at the Prechistenka workers' courses. In 1909 he passed the teacher's examination. From 1908 he was an active participant, and later the chairman, of the Surikov literary and musical circle. In 1910 he entered Moscow University (from which he was expelled in 1912 as a "dangerous element") and the A. L. Shanyavsky Moscow City People's University (he completed both as an external student in 1917). He taught history and geography at the A. E. Flyorov gymnasium.

From 1922 to 1927 he was chairman of the All-Russian Society of Peasant Writers. He was the author of several collections of poems, as well as a collection of revolutionary and folk songs. In 1928 he gave up literary work and returned to teaching.

Selected works

"Mashina bashnya" ("The Machine Tower"), 1911
"Zorka" ("Dawn"), 1917
"Borozdy" ("Furrows"), 1919
"Kudel" ("Tow"), 1926
the play "Molodel", 1924 (reissued in 1927 under the title "Na Perekop", "To Perekop")

Notes

Literature

Categories: Graduates of the Imperial Moscow University; Graduates of the A. L. Shanyavsky Moscow City People's University
package com.vladsch.md.nav.flex.psi;

import com.intellij.lang.ASTNode;
import com.intellij.lang.Language;
import com.intellij.openapi.util.TextRange;
import com.intellij.psi.PsiClass;
import com.intellij.psi.PsiElement;
import com.intellij.psi.PsiFile;
import com.intellij.psi.PsiLiteralExpression;
import com.intellij.psi.PsiManager;
import com.intellij.psi.PsiReference;
import com.intellij.psi.impl.FakePsiElement;
import com.intellij.util.IncorrectOperationException;
import com.vladsch.md.nav.MdLanguage;
import com.vladsch.md.nav.flex.psi.util.FlexmarkPsiImplUtils;
import icons.FlexmarkIcons;
import org.jetbrains.annotations.NotNull;
import org.jetbrains.annotations.Nullable;

import javax.swing.Icon;

public class FakePsiLiteralExpression extends FakePsiElement {
    @SuppressWarnings("NotNullFieldNotInitialized") @NotNull PsiLiteralExpression myElement;
    @SuppressWarnings("NotNullFieldNotInitialized") @NotNull TextRange myTextRangeInParent;
    @SuppressWarnings("NotNullFieldNotInitialized") @NotNull TextRange myTextRange;

    public FakePsiLiteralExpression(@NotNull PsiLiteralExpression element, @NotNull TextRange textRangeInParent) {
        updateElement(element, textRangeInParent);
    }

    private void updateElement(@NotNull PsiLiteralExpression element, @NotNull TextRange textRangeInParent) {
        myElement = element;
        myTextRangeInParent = textRangeInParent;
        myTextRange = textRangeInParent.shiftRight(myElement.getTextOffset());
    }

    @NotNull
    public PsiLiteralExpression getLiteralExpression() {
        return myElement;
    }

    @Override
    public PsiReference getReference() {
        return null;
    }

    @Nullable
    @Override
    public PsiManager getManager() {
        final PsiElement parent = getParent();
        return parent != null ? parent.getManager() : null;
    }

    @Nullable
    @Override
    public String getPresentableText() {
        return getText();
    }

    @Nullable
    @Override
    public String getLocationString() {
        PsiClass psiClass = FlexmarkPsiImplUtils.getElementPsiClass(myElement);
        return psiClass == null ? getName() : psiClass.getName();
    }

    @Override
    public String getName() {
        return getText();
    }

    @Override
    public boolean isPhysical() {
        return false;
    }

    @Override
    public PsiElement setName(@NotNull String name) throws IncorrectOperationException {
        return null;
        //// throw new IncorrectOperationException("Not supported on fake element");
        // PsiExpression fromText = JavaPsiFacade.getInstance(myElement.getProject()).getElementFactory().createExpressionFromText("\"" + name + "\"", myElement.getParent());
        // PsiLiteralExpression psiElement = (PsiLiteralExpression) myElement.replace(fromText);
        // updateElement(psiElement, new TextRange(1, psiElement.getTextLength() - 1));
        // return psiElement;
    }

    @NotNull
    @Override
    public Language getLanguage() {
        return MdLanguage.INSTANCE;
    }

    @NotNull
    @Override
    public PsiElement[] getChildren() {
        return new PsiElement[0];
    }

    @Nullable
    @Override
    public PsiElement findElementAt(int offset) {
        return null;
    }

    @NotNull
    @Override
    public char[] textToCharArray() {
        return getText().toCharArray();
    }

    @Override
    public ASTNode getNode() {
        return myElement.getNode().getFirstChildNode();
    }

    @Override
    public boolean isEquivalentTo(PsiElement another) {
        return another == myElement || (another instanceof FakePsiLiteralExpression && myElement == ((FakePsiLiteralExpression) another).myElement);
    }

    @Override
    public PsiElement getParent() {
        return myElement;
    }

    @Override
    public boolean canNavigate() {
        return super.canNavigate();
    }

    @NotNull
    @Override
    public PsiElement getNavigationElement() {
        return this;
    }

    @Override
    public void navigate(boolean requestFocus) {
        super.navigate(requestFocus);
    }

    @Override
    public int getStartOffsetInParent() {
        return myTextRangeInParent.getStartOffset();
    }

    @NotNull
    @Override
    public TextRange getTextRange() {
        return myTextRange;
    }

    @Override
    public int getTextLength() {
        return myTextRangeInParent.getLength();
    }

    @Override
    public int getTextOffset() {
        return myTextRange.getStartOffset();
    }

    @NotNull
    @Override
    public TextRange getTextRangeInParent() {
        return myTextRangeInParent;
    }

    @Override
    public PsiFile getContainingFile() {
        return myElement.getContainingFile();
    }

    @NotNull
    @Override
    public String getText() {
        return myElement.getText().substring(myTextRangeInParent.getStartOffset(), myTextRangeInParent.getEndOffset());
    }

    @Nullable
    @Override
    public Icon getIcon(final boolean open) {
        return FlexmarkIcons.Element.FLEXMARK_SPEC_EXAMPLE;
    }

    @Override
    public boolean equals(Object o) {
        if (this == o) return true;
        if (o == null || getClass() != o.getClass()) return false;
        FakePsiLiteralExpression that = (FakePsiLiteralExpression) o;
        return myElement.equals(that.myElement);
    }

    @Override
    public int hashCode() {
        return myElement.hashCode();
    }
}
Q: FOR loop and SQL query

I am doing a daily sale report which shows the total sales in every hour from 8 AM to 10 PM. The hours are easy, just simply

    for ($x = 8; $x <= 22; $x++)

Then the total amount is fetched with SQL:

    $sql = "SELECT DATEPART(hh,LastUpdateTime), SUM(TotalAmount) AS Total FROM Tickets WHERE DATEPART(hh,LastUpdateTime)=$x GROUP BY DATEPART(hh,LastUpdateTime)";

If that hour doesn't have any sale, the amount will be 0. So my code is:

    <?php
    include 'go.php';
    for ($x = 8; $x <= 22; $x++) {
        $money = 0;
        echo "['".$x.":00', ".$money."],<br>";
        $sql = "SELECT DATEPART(hh,LastUpdateTime), SUM(TotalAmount) AS Total FROM Tickets WHERE DATEPART(hh,LastUpdateTime)=$x GROUP BY DATEPART(hh,LastUpdateTime)";
        $stmt = sqlsrv_query($conn, $sql);
        if ($stmt === false) {
            die(print_r(sqlsrv_errors(), true));
        }
        while ($row = sqlsrv_fetch_array($stmt, SQLSRV_FETCH_ASSOC)) {
            $money = $row['Total'];
            echo "['".$x.":00', ".$money."],<br>";
        }
        //echo $x."<br>";
    }
    sqlsrv_free_stmt($stmt);
    ?>

But the hours that have sales show up twice (look at 13:00, 15:00, 17:00, ...):

    ['8:00', 0],
    ['9:00', 0],
    ['10:00', 0],
    ['11:00', 0],
    ['12:00', 0],
    ['13:00', 0],
    ['13:00', 22.05],
    ['14:00', 0],
    ['15:00', 0],
    ['15:00', 23.95],
    ['16:00', 0],
    ['17:00', 0],
    ['17:00', 47.45],
    ['18:00', 0],
    ['18:00', 71.50],
    ['19:00', 0],
    ['20:00', 0],
    ['21:00', 0],
    ['22:00', 0],

How should I change my code to get the correct result, like this:

    ['8:00', 0],
    ['9:00', 0],
    ['10:00', 0],
    ['11:00', 0],
    ['12:00', 0],
    ['13:00', 22.05],
    ['14:00', 0],
    ['15:00', 23.95],
    ['16:00', 0],
    ['17:00', 47.45],
    ['18:00', 71.50],
    ['19:00', 0],
    ['20:00', 0],
    ['21:00', 0],
    ['22:00', 0],

Thanks so much!!!

A: You're displaying the time and money twice. Try changing your code as follows:

    <?php
    include 'go.php';
    for ($x = 8; $x <= 22; $x++) {
        $money = 0;
        // Remove or comment this line
        // echo "['".$x.":00', ".$money."],<br>";
        $sql = "SELECT DATEPART(hh,LastUpdateTime), SUM(TotalAmount) AS Total FROM Tickets WHERE DATEPART(hh,LastUpdateTime)=$x GROUP BY DATEPART(hh,LastUpdateTime)";
        $stmt = sqlsrv_query($conn, $sql);
        if ($stmt === false) {
            die(print_r(sqlsrv_errors(), true));
        }
        while ($row = sqlsrv_fetch_array($stmt, SQLSRV_FETCH_ASSOC)) {
            if ($row['Total']) {
                $money = $row['Total'];
            }
            echo "['".$x.":00', ".$money."],<br>";
        }
        //echo $x."<br>";
    }
    sqlsrv_free_stmt($stmt);
    ?>

A: I think you need to remove your first echo statement:

    for ($x = 8; $x <= 22; $x++) {
        $money = 0;
        // Remove this one
        echo "['".$x.":00', ".$money."],<br>";
        ...
        while ($row = sqlsrv_fetch_array($stmt, SQLSRV_FETCH_ASSOC)) {
            $money = $row['Total'];
            // Leave this one
            echo "['".$x.":00', ".$money."],<br>";
        }

A: When you start looping, this line always prints, whether or not the query has a record:

    echo "['".$x.":00', ".$money."],<br>";

You have to override this row when your query finds some records. Here is a refined version of your code:

    <?php
    $i = 0;
    $arr = array();
    for ($x = 8; $x <= 22; $x++) {
        $money = 0;
        $arr[$i] = "['".$x.":00', ".$money."],<br>";
        $sql = "SELECT DATEPART(hh,LastUpdateTime), SUM(TotalAmount) AS Total FROM Tickets WHERE DATEPART(hh,LastUpdateTime)=$x GROUP BY DATEPART(hh,LastUpdateTime)";
        $stmt = sqlsrv_query($conn, $sql);
        if ($stmt === false) {
            die(print_r(sqlsrv_errors(), true));
        }
        while ($row = sqlsrv_fetch_array($stmt, SQLSRV_FETCH_ASSOC)) {
            $money = $row['Total'];
            $arr[$i] = "['".$x.":00', ".$money."],<br>";
        }
        //echo $x."<br>";
        $i++;
    }
    echo implode(" ", $arr);
    sqlsrv_free_stmt($stmt);
    ?>

A: I figured out the problem. I changed the while to an if/else, and the code works well now:

    <?php
    include 'go.php';
    for ($x = 8; $x <= 22; $x++) {
        $sql = "SELECT DATEPART(hh,LastUpdateTime), SUM(TotalAmount) AS Total FROM Tickets WHERE DATEPART(hh,LastUpdateTime)=$x GROUP BY DATEPART(hh,LastUpdateTime)";
        $stmt = sqlsrv_query($conn, $sql);
        if ($stmt === false) {
            die(print_r(sqlsrv_errors(), true));
        }
        if ($row = sqlsrv_fetch_array($stmt, SQLSRV_FETCH_ASSOC)) {
            $money = $row['Total'];
            echo "['".$x.":00', ".$money."],<br>";
        } else {
            $money = 0;
            echo "['".$x.":00', ".$money."],<br>";
        }
        //echo $x."<br>";
    }
    sqlsrv_free_stmt($stmt);
    ?>

A: You need a condition like if ($money == 0), and the fallback echo "['".$x.":00', ".$money."],<br>"; should go in the for loop below the while loop:

    <?php
    include 'go.php';
    for ($x = 8; $x <= 22; $x++) {
        $money = 0;
        $sql = "SELECT DATEPART(hh,LastUpdateTime), SUM(TotalAmount) AS Total FROM Tickets WHERE DATEPART(hh,LastUpdateTime)=$x GROUP BY DATEPART(hh,LastUpdateTime)";
        $stmt = sqlsrv_query($conn, $sql);
        if ($stmt === false) {
            die(print_r(sqlsrv_errors(), true));
        }
        while ($row = sqlsrv_fetch_array($stmt, SQLSRV_FETCH_ASSOC)) {
            $money = $row['Total'];
            echo "['".$x.":00', ".$money."],<br>";
        }
        if ($money == 0)
            echo "['".$x.":00', ".$money."],<br>";
    }
    sqlsrv_free_stmt($stmt);
    ?>
\section{Introduction} It is often of interest to calculate how much kinetic energy could be released from a given system -- that is, the system's free or available energy. There are multiple definitions of free or available energy, each corresponding to a different rule for how a distribution of particles may be rearranged. One of the simplest, due to Gardner in 1963 \cite{Gardner1963}, is that any rearrangement is permitted so long as it conserves phase space densities. These rearrangement operations are known as ``Gardner restacking.'' The maximum energy that can be extracted with Gardner restacking is known as the ``Gardner free energy.'' However, physical processes that conserve phase space densities on a microscopic scale can appear to produce diffusion when the system is viewed with finite granularity, which is often the case of practical interest. This motivates an alternative free energy defined in terms of diffusive exchange, in which the allowed operation is to average the populations of any two elements of phase space (as opposed to Gardner restacking, where the populations instead exchange position without mixing) \cite{Fisch1993, Hay2015, Hay2017, Kolmes2020ConstrainedDiffusion, Kolmes2020Gardner}. In this context, a ground state is defined as a state from which no operation can release further energy. For both Gardner restacking and diffusive exchange, the ground state is always a state in which the highest-population elements of phase space occupy the lowest-energy states. For any given initial state, Gardner restacking can lead to only one possible ground state, whereas diffusive exchange operations can lead to a spectrum of ground states (as is drawn in Figure~\ref{fig:spectrumCartoon}). The diffusively accessible free energy is defined as the energy released when the system is transformed from its initial state to the lowest-energy ground state that can be reached through diffusive operations.
Calculating this free energy is therefore a search problem over the space of all accessible ground states. \begin{figure} \centering \includegraphics[width=.9\linewidth]{spectrumCartoon} \caption{Diffusive operations can often map a given initial state into any of a large number of different ground states. These accessible ground states can have a range of energies. } \label{fig:spectrumCartoon} \end{figure} The diffusively accessible free energy was originally defined in the context of alpha channeling, where waves are intentionally injected into a system in order to extract energy from a population of fusion products \cite{Fisch1992, Fisch1993}. The motivation was to determine how efficient alpha channeling (and similar strategies for intentional phase space manipulations) could possibly be. This helps to explain why the focus in the diffusive-exchange literature has always been on the \textit{lowest}-energy ground state: for the purposes of engineering phase-space transformations to release as much energy as possible, the upper limit on the achievable energy release is the most interesting thing to calculate. The Gardner restacking literature, on the other hand, is largely motivated by the physics of instabilities. This includes Gardner's original work \cite{Gardner1963} as well as much of the recent progress on the theory of restacking \cite{Helander2017ii, Helander2020, Mackenbach2022}. If an instability can be understood as drawing energy from the unstable configuration, then the amount of energy that could possibly be extracted quantifies how unstable the system could be, without recourse to the dynamics of the particular instabilities in question. Recent work suggests that the Gardner free energy can sometimes provide powerful predictions for turbulent energy fluxes \cite{Mackenbach2022}. This difference in focus is largely historical rather than having anything to do with the underlying physics of these different transformations. 
However, if one wishes to use the theory of diffusive exchange operations to understand instabilities, then it becomes desirable to understand the rest of the spectrum of ground states pictured in Figure~\ref{fig:spectrumCartoon}, not just the lowest-energy state. After all, a natural instability will not necessarily pick the optimal sequence of phase space mixing operations; in general it may drive the system to any of the accessible ground states. This paper takes the first step toward understanding the rest of that spectrum by introducing the concept of the minimum stabilizing energy release -- that is, the identification of the \textit{highest}-energy ground state that can be reached through mixing operations. For an experimentalist hoping to avoid detrimental instabilities, this represents the best-case scenario: the smallest energy release that can stabilize the system. More importantly, when taken together with the (maximum) diffusively accessible free energy, the minimum stabilizing energy release quantifies the range of possible outcomes that can be achieved through mixing operations. These formulations of the available-energy problems come from the plasma physics literature, and are connected with a number of other ideas about stability and accessibility within that literature \cite{Taylor1963, Taylor1964, Morrison1989, Morrison1998}. However, these considerations are much more broadly relevant. Gardner restacking is closely related to ideas that appear in astrophysics \cite{Berk1970, Bartholomew1971, Lemou2012, Chavanis2012}, statistical mechanics \cite{Baldovin2016}, and mathematics \cite{Riesz1930, HardyLittlewoodPolya, Brascamp1974, Almgren1989, Baernstein}. The discrete diffusive exchange problem can be found (under other names) in the literature on physical chemistry \cite{Zylka1985}, income inequality \cite{Dalton1920, Atkinson1970, Aboudi2010}, and altruism \cite{Thon2004}. 
All of these formulations generally approach the problem of determining the set of states that can be reached under a particular set of operations. This more general problem appears, for example, in meteorology \cite{Lorenz1955}; chemistry \cite{Horn1964}; laser absorption \cite{Levy2014}; and quantum information theory and thermodynamics \cite{Gorban, Gorban2013, Lostaglio2015, Brandao2015, Korzekwa2019, Lostaglio2022}. This paper is organized as follows. Section~\ref{sec:definition} defines the minimum stabilizing energy release for discrete and continuous phase spaces. Section~\ref{sec:threeBox} explicitly calculates the minimum stabilizing energy release for a three-state discrete system, and describes how the problem differs from that of calculating the maximum energy release (that is, the minimum-energy accessible ground state). Section~\ref{sec:continuous} discusses the minimum stabilizing energy release for continuous phase space. It shows that the quasilinear plateau is the maximum-energy accessible ground state for a bump-on-tail distribution, and that this theory provides a natural generalization of the quasilinear plateau for more general curves. Section~\ref{sec:conclusion} discusses these results. Appendix~\ref{appendix:threeBox} describes explicit solutions for the minimum-energy accessible ground states for two- and three-state discrete systems, as well as the corresponding Gardner restacking ground states.
\section{Defining the Minimum Stabilizing Energy Release} \label{sec:definition}
All of the aforementioned concepts of available energy can be defined for either discrete or continuous phase space. For the purposes of building intuition, it is often helpful to start with the discrete case. One can think of a discrete phase space as being a coarse-grained average over a continuous space.
Alternatively, one can think of a discrete phase space as corresponding to a fundamentally discrete system (such as an atomic system with some discrete set of energy levels). A discrete system with $N$ states is specified by the energies $\{ \varepsilon_i \}$ and current populations $\{ n_i \}$ of those states; the total energy can be written as \begin{gather} W = \sum_{i=1}^N \varepsilon_i n_i. \end{gather} It is convenient to assume without loss of generality that $\varepsilon_i \leq \varepsilon_j$ $\forall i < j$, so that the system is in a ground state if and only if $n_i \geq n_j$ $\forall i < j$. A Gardner restacking operation consists of exchanging $n_i$ and $n_j$. A diffusive exchange operation consists of sending both $n_i$ and $n_j$ to $(n_i + n_j) / 2$. In the original formulation of the minimum-energy ground state problem, there was no further restriction on the allowed operations. However, when considering the maximum-energy ground state problem, it is necessary also to impose that an averaging operation should only be allowed if it does not increase the total energy. The disallowed operations, which effectively inject energy into the system, are sometimes called annealing operations. Annealing operations must be prohibited because, if they are allowed, the problem becomes both trivial and unphysical. It becomes trivial because the solution is always the same: every element of phase space is repeatedly averaged against every other element until all populations are equal. This outcome is unphysical; in the limit of large $N$, and in the continuous limit, it can involve an arbitrarily large increase in energy. In the continuous analog (which is described more fully below), this would correspond to a uniform distribution over the entire domain of velocity space. Moreover, these annealing operations are intrinsically not in line with how we typically expect instabilities to behave.
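These discrete operations are straightforward to express in code. The following Python sketch (illustrative only; the particular energies and populations in the usage note below are arbitrary choices, not taken from the text) implements the total energy, the two elementary operations, and the ground-state test:

```python
def energy(eps, n):
    """Total energy W = sum_i eps_i * n_i (eps assumed sorted ascending)."""
    return sum(e * p for e, p in zip(eps, n))

def restack(n, i, j):
    """Gardner restacking: exchange the populations of states i and j."""
    m = list(n)
    m[i], m[j] = m[j], m[i]
    return m

def mix(n, i, j):
    """Diffusive exchange: send both populations to their average."""
    m = list(n)
    m[i] = m[j] = (n[i] + n[j]) / 2
    return m

def is_ground_state(n):
    """With eps ascending, a ground state has populations sorted descending."""
    return all(n[k] >= n[k + 1] for k in range(len(n) - 1))
```

For example, with energies (0, 1, 2) and populations (0, 1, 2), restacking the first and third states releases an energy of 4, while a single diffusive exchange of the same pair releases only 2, consistent with the Gardner free energy bounding the diffusively accessible energy from above.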
Annealing operations were not prohibited in the original formulation of the minimum-energy ground state problem, but Hay, Schiff, and Fisch showed \cite{Hay2015} that the minimum-energy accessible ground state is the same with or without these operations. For a continuous phase space, the corresponding free energies can largely be understood in terms of the large-$N$ limit of the discrete problem. In the case of Gardner restacking, the continuous problem \cite{Dodin2005} is equivalent to calculating the ``symmetric decreasing rearrangement'' discussed in the mathematics literature \cite{Riesz1930, HardyLittlewoodPolya, Brascamp1974, Almgren1989, Baernstein}. The continuous diffusive problem can be presented as an optimization problem on the energy \begin{gather} W_\text{final} = \lim_{t \rightarrow \infty} \int \varepsilon(v) f(v,t) \, \mathrm{d} v \end{gather} for a distribution $f(v,t)$ that evolves in time through the non-local mixing process \begin{gather} \frac{\partial f}{\partial t} = \int K(v,v',t) \big[ f(v',t) - f(v,t) \big] \, \mathrm{d} v'. \end{gather} There is no requirement that the mixing be local because microscopically local flows can result in non-local mixing on larger scales \cite{Fisch1993}. The minimum-energy ground state problem is to find the kernel $K(v,v',t)$ that minimizes $W_\text{final}$, with the requirements that $K(v,v',t) = K(v',v,t)$ and $K(v,v',t) \geq 0$. The maximum-energy ground state problem is to instead maximize $W_\text{final}$, with the added constraints that $K(v,v',t)$ can only be nonzero when $\varepsilon(v) - \varepsilon(v')$ and $f(v,t) - f(v',t)$ have the same sign (no annealing) and that the final state must be a ground state. The space of allowed kernels $K(v,v',t)$ is large, and direct searches over this space are difficult.
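Although the search over kernels is hard, any single candidate kernel is easy to evolve numerically. The sketch below is an illustration under arbitrary assumptions (a uniform velocity grid, a uniform time-independent kernel, and an explicit time step, none taken from the text); it advances the mixing equation and exhibits two defining properties of a symmetric, non-negative kernel: particle conservation and relaxation toward uniformity.

```python
import numpy as np

def mixing_step(f, K, dv, dt):
    """One explicit step of  df/dt = integral K(v,v') [f(v') - f(v)] dv'
    on a uniform velocity grid; K must be symmetric and non-negative."""
    assert np.allclose(K, K.T) and (K >= 0).all()
    return f + dt * dv * (K @ f - K.sum(axis=1) * f)

# Illustrative run: a uniform kernel mixes f toward its mean while
# conserving the total particle number.
v = np.linspace(-1.0, 1.0, 11)
dv = v[1] - v[0]
f0 = np.exp(-v**2)
f = f0.copy()
K = np.ones((v.size, v.size))
for _ in range(200):
    f = mixing_step(f, K, dv, dt=0.01)
```

Conservation here is exact by construction: for a symmetric kernel, the gain and loss terms cancel when summed over the grid, so only the shape of $f$ changes.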
However, it was recently shown (surprisingly enough) that the minimum-energy diffusively accessible ground state energy for a continuous system is identical to the energy accessible through Gardner restacking \cite{Kolmes2020Gardner}. This is to be contrasted with discrete systems, in which the Gardner free energy always exceeds the energy accessible through diffusive exchange (with the exception of the case in which the system starts in a ground state and there is no free energy of either kind). \section{$N=3$ Discrete Case} \label{sec:threeBox} \begin{figure*} \centering \includegraphics[width=.48\linewidth]{regions} \includegraphics[width=.48\linewidth]{trajectories} \caption{Left: the regions of state space corresponding to the six possible orderings of the three populations. Right: The allowable non-annealing trajectories through state space at each point.} \label{fig:stateSpace} \end{figure*} In Refs.~\cite{Hay2015} and \cite{Hay2017}, Hay, Schiff and Fisch approached the problem of calculating the \textit{maximum} accessible free energy in discrete systems -- that is, identifying the minimum-energy accessible ground state. In particular, Ref.~\cite{Hay2015} describes five primary findings for the $N = 3$ discrete system. To briefly paraphrase (and using numbering to match the original paper): \begin{enumerate} \item For any given initial population values, it is possible to identify a finite number of accessible states whose associated energies could be extremal. In order to calculate the maximum accessible free energy, it suffices to identify these states and find the lowest-energy state among them. \item Candidates for the extremal states are always reachable within $N \text{ choose } 2$ averaging operations (that is, for $N = 3$, 3 operations). \item For any given initial conditions, there are ultimately seven candidates among the accessible states which may be extremal (including the initial state). 
Depending on the energy values assigned to each of the three states, any of these seven can be extremal. \item Allowing partial relaxation operations (partial mixing, as opposed to full averaging of a pair of populations) does not change the maximal energy that can be extracted from the system. \item Allowing steps that increase the energy instead of decreasing it (so-called annealing operations) does not change the maximal energy that can be extracted. \end{enumerate} As it turns out, only result $4$ continues to hold when considering the problem of identifying the maximum-energy accessible ground state rather than the minimum-energy state. In some ways, this might seem surprising. Hay, Schiff, and Fisch's results were formulated in terms of the extremal accessible energies, not necessarily the minimum-energy states. There are two things which prevent most of their results from being directly applicable to the maximum-energy ground state problem. First: the highest-energy accessible state is not, in general, a ground state. As a result, the maximum-energy accessible \textit{ground} state is very often not one of the seven extremal states identified in Ref.~\cite{Hay2015}. Second: Hay, Schiff, and Fisch described how to calculate the set of states that are accessible when annealing operations are allowed. This made sense in the paper's original context, since, as they showed, annealing operations are never needed to reach the lowest-energy states. However, as discussed in Section~\ref{sec:definition}, it is not physically appropriate to allow annealing operations for the maximum-energy ground state problem, so the space of states to search for the maximum-energy ground state should be more restrictive than the solution space described in Ref.~\cite{Hay2015}. With that in mind, consider the question of identifying the maximum-energy accessible ground state when $N = 3$. 
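One direct, if inefficient, way to approach this question is to enumerate sequences of non-annealing full-averaging operations by brute force. The following sketch is hypothetical (it is not taken from the references; the depth cutoff and tolerance are arbitrary choices) and collects the ground states reachable within a fixed number of operations:

```python
from itertools import combinations

def reachable_ground_states(eps, n, depth=6, tol=1e-9):
    """Ground states reachable from populations n (energies eps ascending)
    via energy-non-increasing averaging operations, up to `depth` steps."""
    found = set()

    def is_ground(state):
        return all(state[i] >= state[i + 1] - tol for i in range(len(state) - 1))

    def walk(state, d):
        if is_ground(state):
            found.add(tuple(round(x, 6) for x in state))
        if d == 0:
            return
        for i, j in combinations(range(len(state)), 2):
            new = list(state)
            new[i] = new[j] = (state[i] + state[j]) / 2
            dW = sum(e * (b - a) for e, a, b in zip(eps, state, new))
            if dW <= tol and new != list(state):   # prohibit annealing
                walk(new, d - 1)

    walk(list(n), depth)
    return found
```

For instance, starting from $(n_1, n_2, n_3) = (0, 1, 0)$ with energies $(0, 1, 2)$, the only non-annealing move is to average the first two states, and the search returns the single ground state $(1/2, 1/2, 0)$.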
In fact, it is possible to find a fairly compact solution to this problem by considering which operations are allowed for which starting states. This is probably easiest to understand graphically. Figure~\ref{fig:stateSpace} shows the space of possible populations $(n_1, n_2, n_3)$. As was noted in Ref.~\cite{Hay2015}, it is possible to represent this as a two-dimensional space by picking a normalization such that $n_1 + n_2 + n_3 = 1$ (in which case $n_3$ can be determined from the values of $n_1$ and $n_2$). Depending on the relative ordering of $n_1$, $n_2$, and $n_3$, different averaging operations are allowed in different regions of state space (according to the requirement that each averaging operation must decrease the energy of the system). The different orderings are shown in the left panel of Figure~\ref{fig:stateSpace}; the allowed trajectories for each region are shown in the right panel. An averaging operation consists of following one of the indicated trajectories to the $n_1 = n_2$, $n_2 = n_3$, or $n_1 = n_3$ line, depending on the averaging operation. The maximum-energy ground states can be read off of Figure~\ref{fig:stateSpace} region-by-region. If the initial state has $n_1 \geq n_2 \geq n_3$, then the system is already in a ground state, and the problem is trivial. If the initial state has $n_2 \geq n_1 \geq n_3$, then the only allowed operation is to average the first and second populations. This immediately results in the (only) accessible ground state:
\begin{gather*}
\def\arraystretch{1.2}
\begin{array}{|c|c|c|}
\hline n_1 & n_2 & n_3\\ \hline
\end{array}
\rightarrow
\begin{array}{|c|c|c|}
\hline \frac{1}{2}(n_1+n_2) & \frac{1}{2}(n_1 + n_2) & n_3\\ \hline
\end{array} \; .
\end{gather*}
Similarly, if $n_1 \geq n_3 \geq n_2$, then the only allowed operation is to average the second and third populations, which again immediately brings the system to a ground state:
\begin{gather*}
\def\arraystretch{1.2}
\begin{array}{|c|c|c|}
\hline n_1 & n_2 & n_3\\ \hline
\end{array}
\rightarrow
\begin{array}{|c|c|c|}
\hline n_1 & \frac{1}{2}(n_2+n_3) & \frac{1}{2}(n_2+n_3) \\ \hline
\end{array} \; .
\end{gather*}
If $n_3 \geq n_2 \geq n_1$, then it is always possible to reach the ground state $(1/3, 1/3, 1/3)$, or at least to approach it arbitrarily closely. This can be done by alternating between averaging the populations of states 1 and 2 and averaging the populations of states 2 and 3. These two averaging operations, performed one after the other $k$ times, are equivalent to the mapping \begin{gather} \begin{pmatrix} n_1 \\ n_2 \\ n_3 \end{pmatrix} \rightarrow \mathcal{A}^k \begin{pmatrix} n_1 \\ n_2 \\ n_3 \end{pmatrix} \end{gather} with \begin{gather} \mathcal{A} \doteq \begin{pmatrix} 1/2 & 1/2 & 0 \\ 1/4 & 1/4 & 1/2 \\ 1/4 & 1/4 & 1/2 \end{pmatrix} , \end{gather} and for any $k \in \mathbb{N}$ it can be shown that \begin{gather} \mathcal{A}^k = \frac{1}{3} \begin{pmatrix} 1 & 1 & 1 \\ 1 & 1 & 1 \\ 1 & 1 & 1 \end{pmatrix} + \frac{4^{-k}}{3} \begin{pmatrix} 2 & 2 & -4 \\ -1 & -1 & 2 \\ -1 & -1 & 2 \end{pmatrix} . \end{gather} As such, the system eventually converges to $(1/3, 1/3, 1/3)$ as $k \rightarrow \infty$. Moreover, each of these operations releases energy. To see this, note that if the system starts with $n_1 \leq n_2 \leq n_3$, averaging either the first and second or the second and third populations must release energy (or do nothing), and that neither of these operations will change the ordering of the three states' populations. This leaves two remaining cases: $n_2 > n_3 > n_1$ and $n_3 > n_1 > n_2$. Consider the former of these two.
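As a numerical aside, the convergence of the alternating averaging sequence, and the fact that it never increases the energy, can be checked directly. In the sketch below, the starting state and the ascending energies (0, 1, 2) are illustrative assumptions; the matrix is the $\mathcal{A}$ defined above.

```python
import numpy as np

# One cycle of "average states 1 and 2, then average states 2 and 3".
A = np.array([[0.50, 0.50, 0.0],
              [0.25, 0.25, 0.5],
              [0.25, 0.25, 0.5]])

eps = np.array([0.0, 1.0, 2.0])    # illustrative ascending energies
n = np.array([0.0, 0.25, 0.75])    # an n_3 >= n_2 >= n_1 starting state

energies = [eps @ n]
for _ in range(30):
    n = A @ n                      # apply one full cycle
    energies.append(eps @ n)
```

After 30 cycles the deviation from the uniform state has shrunk by a factor of $4^{-30}$, and the recorded energies decrease monotonically toward the uniform-state value.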
If $n_2 > n_3 > n_1$ and $(n_1 + n_2) / 2 \leq n_3$, then averaging the first and second populations gets the system to the boundary of the region discussed in the previous case, in which an alternating sequence of averaging operations between the first and second and second and third populations leads the system arbitrarily close to $(1/3,1/3,1/3)$. This is the highest-energy possible ground state, so it must be the optimal choice. On the other hand, if $(n_1 + n_2) / 2 > n_3$, then there are two possible allowed sequences of moves. Either the first and second can be averaged, leading to a ground state:
\begin{gather*}
\def\arraystretch{1.2}
\begin{array}{|c|c|c|}
\hline n_1 & n_2 & n_3\\ \hline
\end{array}
\rightarrow
\begin{array}{|c|c|c|}
\hline \frac{1}{2}(n_1+n_2) & \frac{1}{2}(n_1 + n_2) & n_3\\ \hline
\end{array}
\end{gather*}
or the first and third can be averaged, after which the only allowed operation is to average the first and second, leading to a ground state:
\begin{align*}
\def\arraystretch{1.2}
&\begin{array}{|c|c|c|}
\hline n_1 & n_2 & n_3\\ \hline
\end{array} \\
&\hspace{5 pt}\rightarrow
\begin{array}{|c|c|c|}
\hline \frac{1}{2}(n_1+n_3) & n_2 & \frac{1}{2}(n_1 + n_3) \\ \hline
\end{array} \\
&\hspace{5 pt}\rightarrow
\begin{array}{|c|c|c|}
\hline \frac{1}{4} n_1 + \frac{1}{2} n_2 + \frac{1}{4} n_3 & \frac{1}{4} n_1 + \frac{1}{2} n_2 + \frac{1}{4} n_3 & \frac{1}{2} n_1 + \frac{1}{2} n_3 \\ \hline
\end{array} \; .
\end{align*}
The first of these two possible sequences always leads to a higher final energy, so it is the optimal choice. Intuitively, one can see this from Figure~\ref{fig:stateSpace}: moving horizontally in state space before moving diagonally down leads to a final ground state with a lower energy than would be reached by moving diagonally down from the initial position.
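The claim that the first sequence wins can be spot-checked numerically. In the sketch below, the starting populations and the energies (0, 1, 2) are arbitrary choices satisfying $n_2 > n_3 > n_1$ and $(n_1 + n_2)/2 > n_3$:

```python
def avg(n, i, j):
    """Average the populations of states i and j."""
    m = list(n)
    m[i] = m[j] = (n[i] + n[j]) / 2
    return m

eps = [0.0, 1.0, 2.0]
W = lambda n: sum(e * p for e, p in zip(eps, n))

n0 = [0.1, 0.8, 0.3]              # n_2 > n_3 > n_1 with (n_1 + n_2)/2 > n_3
seq1 = avg(n0, 0, 1)              # average first and second
seq2 = avg(avg(n0, 0, 2), 0, 1)   # average first and third, then first and second
```

Here the first sequence ends at $(0.45, 0.45, 0.3)$ with energy $1.05$, while the second ends at $(0.5, 0.5, 0.2)$ with energy $0.9$; both are ground states, and the first has the higher energy, as claimed.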
The argument for the final region of initial state-space, in which $n_3 > n_1 > n_2$, is essentially the same. If $(n_2 + n_3) / 2 \geq n_1$, then averaging the second and third populations leads to the boundary of the region in which the highest-energy ground state, $(1/3, 1/3, 1/3)$, is reachable. If, on the other hand, $(n_2 + n_3) / 2 < n_1$, then it turns out always to be favorable to average the second and third populations, which immediately leads to a ground state: \begin{gather*} \def\arraystretch{1.2} \begin{array}{|c|c|c|} \hline n_1 & n_2 & n_3\\ \hline \end{array} \rightarrow \begin{array}{|c|c|c|} \hline n_1 & \frac{1}{2}(n_2+n_3) & \frac{1}{2}(n_2 + n_3) \\ \hline \end{array} \; . \end{gather*} This is sufficient to specify, for any given initial state, the sequence of allowed operations that leads to the highest-energy possible ground state. At this point, it is possible to see in which ways the maximum-energy ground state problem differs from the minimum-energy ground state problem. Consider the five conclusions on the latter problem in Ref.~\cite{Hay2015}, listed at the beginning of this section. The first and third appear not to apply in the same way to the maximum-energy ground state problem; rather than identifying a finite set of candidate sequences and checking each, the solution presented here simply specifies directly which trajectory through state space is optimal, depending on the initial populations. However, although this was not the approach taken by Hay, Schiff, and Fisch, this kind of explicit case-by-case solution is also possible for the minimum-energy ground state problem. This is described in Appendix~\ref{appendix:threeBox}. The second conclusion from Ref.~\cite{Hay2015} (that the optimal ground state is always accessible within three operations) is entirely untrue for the maximum-energy ground state problem.
In cases where the highest-energy accessible ground state is $(1/3,1/3,1/3)$, this optimal state is sometimes accessible only in the limit of an infinite number of operations. For example, the initial state $(0,1/4,3/4)$ can only lead to populations of the form $A/2^B$ for nonnegative integers $A$ and $B$ for any finite number of steps; therefore it cannot reach $(1/3,1/3,1/3)$ in finite steps, but it is shown above that it can approach that ground state arbitrarily closely. The fifth conclusion (that allowing or prohibiting annealing operations does not change the optimal accessible state) also does not continue to hold for the present problem; this is discussed in Section~\ref{sec:definition}. The remaining, fourth conclusion -- that the optimal ground state is the same whether or not partial mixing operations are allowed -- is the only one that continues to hold for the $N=3$ maximum-energy ground state problem. ``Partial mixing" refers to any operation of the form \begin{gather} n_i \rightarrow (1-\gamma) n_i + \gamma n_j \\ n_j \rightarrow \gamma n_i + (1-\gamma) n_j \end{gather} for $0 < \gamma < 1/2$ (rather than ``full mixing," where $\gamma = 1/2$). It is easiest to see this by inspecting the trajectories in Figure~\ref{fig:stateSpace}. A partial mixing operation would still have to follow one of the marked trajectories, but unlike a full mixing operation, it would not have to follow a given trajectory all the way to one of the $n_i = n_j$ lines. For initial conditions where $n_2 > n_1 > n_3$ or $n_1 > n_3 > n_2$, there is only one allowed pair of populations to average anyway. For initial conditions where $n_3 > n_2 > n_1$, full mixing operations can already reach the highest-possible-energy ground state $(1/3,1/3,1/3)$, so it is clear that no other operations could do better.
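The claims above about the alternating averaging sequence can be checked with exact rational arithmetic. The sketch below iterates the two averaging operations from the initial state $(0,1/4,3/4)$ and verifies, at every full pass, the closed form for $\mathcal{A}^k$, the dyadic-rational property, and the fact that each averaging releases energy (the specific energies $0 < 1 < 2$ are an arbitrary choice for illustration).

```python
from fractions import Fraction

# Starting from (0, 1/4, 3/4), alternately average populations (1,2) and
# (2,3). We check that (i) the result matches the closed form for A^k,
# (ii) every population stays a dyadic rational A/2^B, so (1/3, 1/3, 1/3)
# is only reached in the limit, and (iii) each averaging releases energy,
# here with energies eps = (0, 1, 2).
third = Fraction(1, 3)
eps = [Fraction(0), Fraction(1), Fraction(2)]
energy = lambda n: sum(ni * ei for ni, ei in zip(n, eps))

n0 = [Fraction(0), Fraction(1, 4), Fraction(3, 4)]
n, E = list(n0), energy(n0)
for k in range(1, 16):
    for (i, j) in [(0, 1), (1, 2)]:
        n[i] = n[j] = (n[i] + n[j]) / 2
        assert energy(n) <= E          # (iii) each averaging releases energy
        E = energy(n)
    corr = [2*n0[0] + 2*n0[1] - 4*n0[2],
            -n0[0] - n0[1] + 2*n0[2],
            -n0[0] - n0[1] + 2*n0[2]]
    expected = [third * sum(n0) + Fraction(1, 4)**k * c / 3 for c in corr]
    assert n == expected               # (i) agrees with the A^k closed form
    for ni in n:                       # (ii) denominators are powers of two
        assert ni.denominator & (ni.denominator - 1) == 0
        assert ni != third
print(n[0], "->", float(n[0]))  # approaching 1/3 from below, never reaching it
```

After 15 passes the deviation from $(1/3,1/3,1/3)$ has shrunk by a factor of $4^{-15}$, consistent with the subdominant term of $\mathcal{A}^k$.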
The cases in which $n_2 > n_3 > n_1$ or $n_3 > n_1 > n_2$ are less trivial, but still clear from the figure: the operations which average the first and third populations are never favorable, whether they are complete or partial, so the optimal first move is always to fully mix the first and second populations (if $n_2 > n_3 > n_1$) or the second and third (if $n_3 > n_1 > n_2$). Some of these conclusions (for instance, the role of annealing operations) hold for all $N$. Others (like the effects of partial mixing) seem likely to continue to hold when $N > 3$, but we have not proved them here for general $N$. Note that there is no case in which reaching the optimal ground state requires mixing the first and third populations. The optimal sequences only ever require that neighboring states be mixed. We have proven this for the three-state discrete case, but we conjecture that it is true for all $N$. \section{Continuous Example: The Bump-on-Tail Distribution} \label{sec:continuous} Consider a bump-on-tail distribution $f(v)$. $f(v)$ is monotonically decreasing until it hits a local minimum, then monotonically increasing until it arrives at a local maximum, and thenceforth monotonically decreasing. We will assume that $f(v=0)$ exceeds this local maximum, and that the global minimum happens as $v \rightarrow \infty$; these assumptions are not necessary for what follows, but they are convenient. Let the energy of a particle with velocity $v$ be given by $\varepsilon(v) = m v^2 / 2$ for some mass $m$. The ``quasilinear plateau" is constructed by finding velocities $v_1$ and $v_2$ that define a flattened function $\bar{f}(v)$ as follows: \begin{gather} \bar{f}(v) \doteq \begin{cases} f(v) & v < v_1 \text{ or } v > v_2 \\ h & v_1 \leq v \leq v_2, \end{cases} \end{gather} with \begin{gather} h \doteq \frac{1}{v_2 - v_1} \int_{v_1}^{v_2} f(v) \, \mathrm{d} v .
\end{gather} For a bump-on-tail distribution, $h$ is chosen as the unique value for which $f(v_1) = f(v_2) = h$. This section will demonstrate that $\bar{f}(v)$ is the maximum-energy accessible ground state for the bump-on-tail distribution. In addition to $v_1$ and $v_2$, there are two other important values of $v$ to note: first, $v_0$, the minimal value of $v$ at which $f(v_0)$ attains the same value as the bump's local maximum of $f$; and second, $v_3$, the maximal value of $v$ at which $f(v_3)$ attains the same value as the bump's local minimum of $f$. Note that for this starting distribution, $v_0 < v_1 < v_2 < v_3$. The plateau distribution $\bar{f}(v)$ is accessible through diffusive operations. One can pair intervals within $[v_1, v_2]$ over which $f(v) > h$ with those over which $f(v) < h$, successively exchanging particles between them until they converge to $f(v) = h$. This would not require annealing, since the intervals within $[v_1, v_2]$ for which $f(v) > h$ all occur at lower $v$ than those for which $f(v) < h$. There can be no exchange involving $v < v_0$ or $v > v_3$; any exchange involving these intervals would require annealing. Moreover, it is clear that within $[v_1, v_2]$, it is impossible to do better than a flat distribution. As such, the only scenario in which one might imagine accessing a higher-energy ground state than the plateau is if there were some exchanges involving the intervals $[v_0,v_1]$ or $[v_2,v_3]$. Any non-annealing exchange involving $[v_0, v_1]$ must transfer population into this region. Any non-annealing exchange involving $[v_2, v_3]$ must transfer population out of this region. This follows from the fact that $f(v)$ is monotonically decreasing for $v < v_1$ and $v > v_2$.
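The plateau construction can be carried out numerically for a concrete example. In the sketch below, the specific $f(v)$ (a Gaussian bulk plus a Gaussian bump) and the search windows for the dip and the bump are assumptions for illustration, not taken from the text; $v_1$, $v_2$, and $h$ are found by enforcing $f(v_1) = f(v_2) = h$ together with particle conservation, $h = \frac{1}{v_2 - v_1}\int_{v_1}^{v_2} f \, \mathrm{d}v$.

```python
import numpy as np

# Assumed example bump-on-tail distribution: Gaussian bulk plus a bump.
def f(v):
    return np.exp(-v**2 / 2) + 0.5 * np.exp(-(v - 2.5)**2 / (2 * 0.3**2))

grid = np.linspace(0.0, 6.0, 60001)
# Locate the local minimum (dip) and the bump's local maximum of f;
# the search windows are chosen by hand for this particular f.
dip_win = (grid > 1.0) & (grid < 2.4)
bump_win = (grid > 2.0) & (grid < 3.5)
v_dip = grid[dip_win][np.argmin(f(grid[dip_win]))]
v_bump = grid[bump_win][np.argmax(f(grid[bump_win]))]

def crossing(h, a, b):
    """Bisect for f(v) = h on [a, b], where f is monotone on [a, b]."""
    sa = np.sign(f(a) - h)
    for _ in range(80):
        m = 0.5 * (a + b)
        if np.sign(f(m) - h) == sa:
            a = m
        else:
            b = m
    return 0.5 * (a + b)

def conservation_residual(h):
    v1 = crossing(h, 0.0, v_dip)      # decreasing branch before the dip
    v2 = crossing(h, v_bump, 6.0)     # decreasing branch past the bump
    vv = np.linspace(v1, v2, 20001)
    fv = f(vv)
    area = np.sum(0.5 * (fv[1:] + fv[:-1])) * (vv[1] - vv[0])  # trapezoid rule
    return area / (v2 - v1) - h, v1, v2

# Bisect on the plateau height h: the residual is positive near the dip
# value of f and negative near the bump maximum.
lo, hi = f(v_dip) + 1e-9, f(v_bump) - 1e-9
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if conservation_residual(mid)[0] > 0:
        lo = mid
    else:
        hi = mid
h = 0.5 * (lo + hi)
_, v1, v2 = conservation_residual(h)
print(f"plateau: h = {h:.4f} on [{v1:.3f}, {v2:.3f}]")
```

The resulting $\bar{f}(v)$ conserves particles by construction, and $v_1 < v_2$ straddle the dip and the bump as in the discussion above.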
Therefore, a higher-energy ground state would have to involve exchanges that move some total population $F_L \geq 0$ to $[v_0, v_1]$ from $[v_1,v_2]$ and some total population $F_R \geq 0$ to $[v_1, v_2]$ from $[v_2, v_3]$ (with at least one of $F_L$ and $F_R$ being nonzero). The resulting ground state would have an energy bounded above by the case in which $f(v)$ could still be flattened between $v_1$ and $v_2$. But then any $F_L > 0$ must lower the distribution's total energy in $[v_1, v_2]$ by more than it increased the energy in $[v_0, v_1]$, and any $F_R > 0$ must lower the distribution's energy in $[v_2, v_3]$ by more than it increased the energy in $[v_1, v_2]$. In other words, the exchanges involving the regions $[v_0, v_1]$ and $[v_2, v_3]$ never lead to a ground state with an energy higher than that of the quasilinear plateau. \begin{figure} \centering \includegraphics[width=.99\linewidth]{twoBumpCartoon} \caption{This cartoon shows how a starting distribution with more than one bump on its tail can lead to multiple ``plateau-like" ground states, if the bumps are not sufficiently separated (in particular, if the local minimum of $f$ associated with one bump does not exceed the local maximum associated with the other). The one-bump case is the paradigmatic ``bump-on-tail" distribution with the classic, textbook plateau solution that is adjusted in height to conserve particles by matching the area below the plateau line with that above it. } \label{fig:twoBumpCartoon} \end{figure} This is enough to determine the maximum-energy accessible ground state for one class of distribution functions (albeit an important one). A logical next case to consider is a distribution with multiple bumps on its tail. The generalization is straightforward in cases where the two bumps are sufficiently separated. 
In particular, if the local minimum of $f(v)$ for the lower-$v$ bump exceeds the local maximum of $f(v)$ for the higher-$v$ bump, then the two plateau regions cannot interact without annealing operations and there is a unique two-plateau solution which must be the maximum-energy accessible ground state. Things are more complicated if the two bumps are not separated in this way. This is illustrated in Figure~\ref{fig:twoBumpCartoon}; it is possible for there to be multiple ``plateau-like" ground states. If it were true that the maximum-energy ground state is always plateau-like, then computing it would still be a nontrivial search problem over some set of plateau-like candidate distributions. We conjecture that the maximum-energy ground state is, in fact, plateau-like for any sufficiently well-behaved smooth function, but we have not proved that this must be the case. If so, then it would follow that continuous maximum-energy ground states are accessible through local diffusion. \section{Conclusion} \label{sec:conclusion} Diffusive operations are known to be able to map a given initial system to a spectrum of different ground states. Previous work has always focused on bounding the final energy of that spectrum from below -- that is, determining the maximum energy that could be released \cite{Fisch1992, Hay2015, Hay2017, Kolmes2020ConstrainedDiffusion, Kolmes2020Gardner}. This paper has argued that the upper bound of the ground-state energy spectrum (which sets the lower bound for how much energy could be released) is comparably important, especially for the purposes of understanding uncontrolled instabilities. Moreover, this paper has identified the maximum-energy ground state in certain cases. In a discrete phase space with $N = 3$ elements, the maximum-energy ground state can be determined by carefully considering the allowable mixing operations at every point in state space.
In a continuous phase space, it turns out that the quasilinear plateau is the maximum-energy diffusively accessible ground state for the bump-on-tail distribution. This means that the maximum-energy ground state can be understood as a natural generalization of the quasilinear plateau for general distributions. The quasilinear plateau is a paradigm of recurring interest in this literature because it is the best-known and most intuitive example of what a diffusively accessible ground state could look like; here we identify precisely where on the spectrum of ground state energies it falls and find that it is extremal. These examples lead us to make two conjectures: \begin{enumerate} \item For a large class of smooth initial distributions, the maximum-energy ground states are plateau-like, in the sense that they consist of segments of distribution that are fully flattened and segments which are not modified from their initial forms. \item In both the discrete and continuous cases, only local mixing operations are necessary in order to reach the maximum-energy ground states. \end{enumerate} It would certainly be interesting to know whether either of these conjectures is true, but neither of them is proved here. However, based upon known examples, neither is disproved and both appear to be plausible. More generally, this paper serves to introduce a previously overlooked problem that helps to characterize the entire spectrum of possible ground state energies, rather than focusing on their lower bound alone. \acknowledgements This work was supported by US DOE DE-SC0016072.
\section{Introduction} This paper considers stability and network capacity in discrete time queueing systems. These issues arise in the analysis and control of modern data networks, including wireless and ad-hoc mobile networks, and are also important in many other application areas. We have found that many researchers have questions about the relationships between the different types of stability that can be used for network analysis. This paper is written to address those questions by providing details on stability that are mentioned in other papers but are not proven due to lack of space. We consider the four most common types of stability from the literature: rate stability, mean rate stability, steady state stability, and strong stability. We first show that, under mild technical assumptions, strong stability implies the other three, and hence can be viewed as the strongest definition among the four. Conversely, we show that mean rate stability is the weakest definition, in that (under mild technical assumptions) it is implied by the other three. We also briefly describe additional stability definitions, such as existence of a steady state workload distribution as in \cite{baccelli-book}\cite{queue-mazumdar}\cite{bertsekas-data-nets} (often analyzed with Markov chain theory and Lyapunov drift theory \cite{asmussen-prob}\cite{foss-stability}\cite{kumar-meyn-stability}\cite{now} and/or fluid models \cite{dai-fluid}\cite{dai-balaji-lyap}), and discuss their relationships to the main four. We then consider control for a general stochastic multi-queue network. The network operates in discrete time with timeslots $t \in\{0, 1, 2, \ldots\}$. Control actions are made on each slot in reaction to random network events, such as random traffic arrivals or channel conditions. 
The control actions and network events affect arrivals and service in the queues, and also generate a vector of \emph{network attributes}, being additional penalties or rewards associated with the network (such as power expenditures, packet drops, etc.). We assume the system satisfies the mild technical assumptions needed for the above stability results. We further assume that network events have a \emph{decaying memory property}, a property typically exhibited by finite state ergodic systems as well as more general systems. As in \cite{now}, we define the \emph{network capacity region} $\Lambda$ as the closure of the set of all traffic rate vectors that can be supported subject to network stability and to an additional set of time average attribute constraints. We show that if traffic rates are outside of the set $\Lambda$, then under any algorithm there must be at least one queue that is not mean rate stable. Because mean rate stability is the \emph{weakest} definition, it follows that the network cannot be stable under any of the four definitions if traffic rates are outside of $\Lambda$. Conversely, we show that if the traffic rate vector is an interior point of the set $\Lambda$, then it is possible to design an algorithm that makes all queues strongly stable (and hence it is also possible to achieve stability for the other three stability definitions). Because the capacity region is defined as a closure, it follows that it is invariant under any of these four stability definitions. As an example, consider a simple discrete time $GI/GI/1$ queue with fixed size packets and arrivals $a(t)$ that are i.i.d. over slots with $\expect{a(t)} = \lambda$ packets/slot, and independent time varying service rates $b(t)$ that are i.i.d. over slots with $\expect{b(t)} = 1/2$ packets/slot. The ``mild technical assumptions'' that we impose here are that the second moments of the $a(t)$ and $b(t)$ processes are finite. 
In this setting, the capacity region is the set of all arrival rates $\lambda$ such that $0 \leq \lambda \leq 1/2$. It turns out that the queue is rate stable (and mean rate stable) if and only if $\lambda \leq 1/2$. However, for steady state stability and strong stability we typically require $\lambda < 1/2$ (with an exception in certain deterministic special cases). Thus, the set of all rates for which the queue is stable differs only by one point (the point $\lambda = 1/2$) under the four different definitions of stability, and the closure of this set is identical for all four. There are indeed alternative (problematic) definitions of ``stability'' that would give rise to a different capacity region, although these are typically not used for networks and are not considered here (see an example in \cite{neely-thesis} of a problematic definition that says a queue is ``empty-stable'' if it empties infinitely often, and see Section \ref{section:problematic} for another problematic example). The above 1-queue example considers $a(t)$ and $b(t)$ processes that are i.i.d. over slots. However, our stability and capacity region analysis is more general and is presented in terms of processes that are possibly non-i.i.d. over slots, assuming only that they have well defined time averages with a mild ``decaying memory'' property. We show that the capacity region is achievable using a strong drift-plus-penalty method that we derived in previous papers \cite{neely-thesis}\cite{now}\cite{neely-energy-it}\cite{neely-fairness-infocom05}. This method treats joint stability and performance optimization, and extends the pioneering work on network stability in \cite{tass-radio-nets}\cite{tass-server-allocation}. These results are easier to derive in a context where network arrival and channel vectors are i.i.d. over slots. The prior work on network stability in \cite{tass-one-hop} treats general Markov-modulated channels.
Arrivals and channels with a ``decaying memory property'' are treated for stability in \cite{neely-power-network-jsac}\cite{neely-thesis}. Joint stability and utility optimization are considered for non-i.i.d. models in \cite{rahul-cognitive-tmc}\cite{longbo-profit-allerton07}\cite{neely-maximal-bursty-ton}\cite{neely-mesh} for different types of networks. This paper provides the full details for the non-i.i.d. case of joint stability and utility optimization for general networks with general time average constraints. A more general ``universal scheduling'' model is treated in \cite{neely-universal-scheduling}, which uses sample path analysis and considers possibly non-ergodic systems with no probability model, although the fundamental concept of a ``capacity region'' does not make sense in such a context. \section{Queues} Let $Q(t)$ represent the contents of a single discrete time queueing system defined over integer timeslots $t \in \{0, 1, 2, \ldots\}$. Specifically, the initial state $Q(0)$ is assumed to be a non-negative real valued random variable. Future states are driven by stochastic arrival and server processes $a(t)$ and $b(t)$ according to the following dynamic equation: \begin{equation} \label{eq:q-dynamics} Q(t+1) = \max[Q(t) - b(t), 0] + a(t) \: \: \mbox{ for $t \in \{0, 1, 2, \ldots\}$} \end{equation} We call $Q(t)$ the \emph{backlog} on slot $t$, as it can represent an amount of work that needs to be done. The stochastic processes $\{a(t)\}_{t=0}^{\infty}$ and $\{b(t)\}_{t=0}^{\infty}$ are sequences of real valued random variables defined over slots $t \in \{0, 1, 2, \ldots\}$. The value of $a(t)$ represents the amount of new work that arrives on slot $t$, and is assumed to be non-negative. The value of $b(t)$ represents the amount of work the server of the queue can process on slot $t$. For most physical queueing systems, $b(t)$ is assumed to be non-negative, although it is sometimes convenient to allow $b(t)$ to take negative values. 
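The single-queue example from this introduction can be simulated directly. The sketch below uses Bernoulli arrivals at rate $\lambda$ and i.i.d. Bernoulli service with mean $1/2$ (an assumed special case of the $GI/GI/1$ setup above) and iterates the recursion $Q(t+1) = \max[Q(t) - b(t), 0] + a(t)$; empirically, $Q(t)/t$ tends to $0$ when $\lambda \leq 1/2$ and to $\lambda - 1/2$ when $\lambda > 1/2$.

```python
import random

# Simulate the discrete time queue Q(t+1) = max[Q(t) - b(t), 0] + a(t)
# with Bernoulli(lam) arrivals and Bernoulli(1/2) service, and return the
# empirical slope Q(T)/T after T slots.
def backlog_slope(lam, T=200_000, seed=1):
    rng = random.Random(seed)
    Q = 0.0
    for _ in range(T):
        a = 1.0 if rng.random() < lam else 0.0   # arrivals a(t)
        b = 1.0 if rng.random() < 0.5 else 0.0   # offered service b(t)
        Q = max(Q - b, 0.0) + a
    return Q / T

print(backlog_slope(0.3))  # ~ 0 : rate stable, since lambda < 1/2
print(backlog_slope(0.7))  # ~ 0.2 = lambda - 1/2 : not rate stable
```

This matches the behavior described by the Rate Stability Theorem of Section III: the backlog grows linearly at rate $a_{av} - b_{av}$ whenever $a_{av} > b_{av}$.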
This is useful for the \emph{virtual queues} defined in future sections, where $b(t)$ can be interpreted as a (possibly negative) attribute.\footnote{Assuming that the $b(t)$ value in (\ref{eq:q-dynamics}) is possibly negative also allows treatment of modified queueing models that place new arrivals inside the $\max[\cdot, 0]$ operator. For example, a queue with dynamics $\hat{Q}(t+1) = \max[\hat{Q}(t) - \beta(t) + \alpha(t), 0]$ is the same as (\ref{eq:q-dynamics}) with $a(t) = 0$ and $b(t) = \beta(t) - \alpha(t)$ for all $t$. Leaving $a(t)$ outside the $\max[\cdot, 0]$ is crucial for treatment of multi-hop networks, where $a(t)$ can be a sum of exogenous and endogenous arrivals.} Because we assume $Q(0) \geq 0$ and $a(t) \geq 0$ for all slots $t$, it is clear from (\ref{eq:q-dynamics}) that $Q(t) \geq 0$ for all slots $t$. The units of $Q(t)$, $a(t)$, and $b(t)$ depend on the context of the system. For example, in a communication system with fixed size data units, these quantities might be integers with units of \emph{packets}. Alternatively, they might be real numbers with units of \emph{bits}, \emph{kilobits}, or some other unit of unfinished work relevant to the system. We can equivalently re-write the dynamics (\ref{eq:q-dynamics}) without the non-linear $\max[\cdot, 0]$ operator as follows: \begin{equation} \label{eq:q-dynamics-tilde} Q(t+1) = Q(t) - \tilde{b}(t) + a(t) \: \: \mbox{ for $t \in \{0, 1, 2, \ldots\}$} \end{equation} where $\tilde{b}(t)$ is the actual work processed on slot $t$ (which may be less than the offered amount $b(t)$ if there is little or no backlog in the system at slot $t$). Specifically, $\tilde{b}(t)$ is mathematically defined: \[ \tilde{b}(t) \mbox{\raisebox{-.3ex}{$\overset{\vartriangle}{=}$}} \min[b(t), Q(t)] \] Note by definition that $\tilde{b}(t) \leq b(t)$ for all $t$. The dynamic equation (\ref{eq:q-dynamics-tilde}) yields a simple but important property for all sample paths, described in the following lemma. 
\begin{lem} (Sample Path Property) For any discrete time queueing system described by (\ref{eq:q-dynamics}), and for any two slots $t_1$ and $t_2$ such that $0 \leq t_1 < t_2$, we have: \begin{eqnarray} Q(t_2) - Q(t_1) &=& \sum_{\tau=t_1}^{t_2-1} a(\tau) - \sum_{\tau=t_1}^{t_2-1} \tilde{b}(\tau) \label{eq:io-tilde} \\ Q(t_2) - Q(t_1) &\geq& \sum_{\tau=t_1}^{t_2-1} a(\tau) - \sum_{\tau=t_1}^{t_2-1} b(\tau) \label{eq:io-b2} \end{eqnarray} Therefore, for any $t>0$ we have: \begin{eqnarray} \frac{Q(t)}{t} - \frac{Q(0)}{t} &=& \frac{1}{t} \sum_{\tau=0}^{t-1} a(\tau) - \frac{1}{t}\sum_{\tau=0}^{t-1} \tilde{b}(\tau) \label{eq:illuminate} \\ \frac{Q(t)}{t} - \frac{Q(0)}{t} &\geq& \frac{1}{t} \sum_{\tau=0}^{t-1} a(\tau) - \frac{1}{t}\sum_{\tau=0}^{t-1} b(\tau) \label{eq:illuminate2} \end{eqnarray} \end{lem} \begin{proof} By (\ref{eq:q-dynamics-tilde}) we have for any slot $\tau\geq 0$: \[ Q(\tau+1) - Q(\tau) = a(\tau) - \tilde{b}(\tau) \] Summing the above over $\tau \in \{t_1, \ldots, t_2 -1\}$ and using telescoping sums yields: \[ Q(t_2) - Q(t_1) =\sum_{\tau=t_1}^{t_2-1} a(\tau) - \sum_{\tau=t_1}^{t_2-1} \tilde{b}(\tau) \] This proves (\ref{eq:io-tilde}). Inequality (\ref{eq:io-b2}) follows because $\tilde{b}(\tau) \leq b(\tau)$ for all $\tau$. Inequalities (\ref{eq:illuminate}) and (\ref{eq:illuminate2}) follow by substituting $t_1 =0$, $t_2 = t$, and dividing by $t$. \end{proof} The equality (\ref{eq:illuminate}) is illuminating. It shows that $Q(t)/t\rightarrow0$ as $t \rightarrow \infty$ if and only if the time average of the process $a(t) - \tilde{b}(t)$ is zero (where the time average of $a(t) - \tilde{b}(t)$ is the limit of the right hand side of (\ref{eq:illuminate})). This happens when the time average rate of arrivals $a(t)$ is equal to the time average rate of actual departures $\tilde{b}(t)$. This motivates the definitions of \emph{rate stability} and \emph{mean rate stability}, defined in the next section. 
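The identity (\ref{eq:io-tilde}) can also be verified numerically. The sketch below (our own illustration) uses integer-valued arrivals and services, chosen arbitrarily, so that the equality holds exactly with no floating-point concerns.

```python
import random

# Check the sample path identity: Q(t2) - Q(t1) equals the sum of arrivals
# a(tau) minus the sum of actual departures b~(tau) = min[b(tau), Q(tau)]
# over tau in {t1, ..., t2 - 1}.
rng = random.Random(42)
Q = 3
Qs, a_hist, bt_hist = [Q], [], []
for _ in range(1000):
    a = rng.randint(0, 4)          # arrivals a(t) >= 0
    b = rng.randint(0, 3)          # offered service b(t)
    bt_hist.append(min(b, Q))      # actual departures b~(t)
    Q = max(Q - b, 0) + a          # queue update
    Qs.append(Q)
    a_hist.append(a)

t1, t2 = 100, 900
assert Qs[t2] - Qs[t1] == sum(a_hist[t1:t2]) - sum(bt_hist[t1:t2])
print("identity holds; Q(t2) - Q(t1) =", Qs[t2] - Qs[t1])
```

The equality is exact for every choice of $t_1 < t_2$, since it is just the telescoped form of $Q(\tau+1) - Q(\tau) = a(\tau) - \tilde{b}(\tau)$.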
\section{Rate Stability} Let $Q(t)$ be the backlog process in a discrete time queue. We assume only that $Q(t)$ is non-negative and evolves over slots $t \in \{0, 1, 2, \ldots\}$ according to some probability law.\footnote{All of our stability definitions can be extended to treat discrete time stochastic processes $Q(t)$ that can possibly be negative by substituting $|Q(t)|$ into the definitions, which is sometimes useful in contexts (not treated here) where virtual queues can be possibly negative, as in \cite{neely-universal-scheduling}\cite{neely-mwl-ita}.} \begin{defn} \label{def:rate-stable} A discrete time queue $Q(t)$ is \emph{rate stable} if: \[ \lim_{t\rightarrow\infty} \frac{Q(t)}{t} = 0 \: \: \mbox{ with probability 1} \] \end{defn} \begin{defn} \label{def:mean-rate-stable} A discrete time queue $Q(t)$ is \emph{mean rate stable} if: \[ \lim_{t\rightarrow\infty} \frac{\expect{Q(t)}}{t} = 0 \] \end{defn} Neither rate stability nor mean rate stability implies the other (see counter-examples given in Section \ref{section:counterexamples}). However, rate stability implies mean rate stability under the following mild technical assumptions. \begin{thm} \label{thm:rs-implies-mrs} (Rate Stability \& Bounding Assumptions Implies Mean Rate Stability) Consider a queue $Q(t)$ with dynamics (\ref{eq:q-dynamics}), with $b(t)$ real valued (possibly negative) and $a(t)$ non-negative. Suppose that $Q(t)$ is rate stable. a) Suppose there are finite constants $\epsilon>0$ and $C>0$ such that $\expect{(a(t)+b^{-}(t))^{1+\epsilon}} \leq C$ for all $t$, where $b^-(t)$ is defined: \begin{equation} \label{eq:b-minus} b^-(t) \mbox{\raisebox{-.3ex}{$\overset{\vartriangle}{=}$}} -\min[b(t), 0] \end{equation} Then $Q(t)$ is mean rate stable. 
b) Suppose there is a non-negative random variable $Y$ with $\expect{Y} < \infty$ and such that for all $t \in \{0, 1, 2, \ldots\}$ we have: \begin{eqnarray} \expect{a(t)+b^-(t)|a(t)+b^-(t)>y}Pr[a(t)+b^-(t)>y] \nonumber \\ \leq \expect{Y|Y>y}Pr[Y>y] \: \: \forall y \in \mathbb{R} \label{eq:stoch-greater} \end{eqnarray} Then $Q(t)$ is mean rate stable. \end{thm} \begin{proof} See Appendix A. \end{proof} We note that the condition (\ref{eq:stoch-greater}) holds whenever the random variable $Y$ is \emph{stochastically greater than or equal to} the random variable $a(t) + b^-(t)$ for all $t$ \cite{ross}. This condition also trivially holds whenever $a(t) + b^-(t)$ is stationary, having the same probability distribution for all slots $t$ but not necessarily being i.i.d. over slots, and satisfies $\expect{a(0) + b^-(0)} < \infty$. This is because, in this stationary case, we can use $Y = a(0) + b^-(0)$. The condition in part (a) does not require stationarity, but requires a uniform bound on the ``$(1+\epsilon)$'' moment for some $\epsilon>0$. This certainly holds whenever the second moments of $a(t) + b^{-}(t)$ are bounded by some finite constant $C$ for all $t$ (so that $\epsilon=1$), as assumed in our network analysis of Section \ref{section:network}. The next theorem gives intuition on rate stability and mean rate stability for queues with well defined time average arrival and server rates. \begin{thm} \label{thm:rate-stability} (Rate Stability Theorem) Suppose $Q(t)$ evolves according to (\ref{eq:q-dynamics}), with $a(t) \geq 0$ for all $t$, and with $b(t)$ real valued (and possibly negative) for all $t$. 
Suppose that the time averages of the processes $a(t)$ and $b(t)$ converge with probability $1$ to finite constants $a_{av}$ and $b_{av}$, so that: \begin{eqnarray} \lim_{t\rightarrow\infty} \frac{1}{t}\sum_{\tau=0}^{t-1} a(\tau) = a_{av} & \mbox{ with probability $1$} \label{eq:Aav} \\ \lim_{t\rightarrow\infty} \frac{1}{t}\sum_{\tau=0}^{t-1} b(\tau) = b_{av} & \mbox{ with probability $1$} \label{eq:muav} \end{eqnarray} Then: (a) $Q(t)$ is rate stable if and only if $a_{av} \leq b_{av}$. (b) If $a_{av} > b_{av}$, then: \[ \lim_{t\rightarrow\infty} \frac{Q(t)}{t} = a_{av} - b_{av} \: \: \mbox{ with probability 1} \] (c) Suppose there are finite constants $\epsilon>0$ and $C>0$ such that $\expect{(a(t)+b^{-}(t))^{1+\epsilon}} \leq C$ for all $t$, where $b^-(t)$ is defined in (\ref{eq:b-minus}). Then $Q(t)$ is mean rate stable if and only if $a_{av} \leq b_{av}$. (d) Suppose there is a non-negative random variable $Y$ with $\expect{Y} < \infty$ and such that condition (\ref{eq:stoch-greater}) holds for all $t \in \{0, 1, 2, \ldots\}$. Then $Q(t)$ is mean rate stable if and only if $a_{av} \leq b_{av}$. \end{thm} \begin{proof} (Theorem \ref{thm:rate-stability}) Suppose that $Q(t)$ is rate stable, so that $Q(t)/t \rightarrow 0$ with probability $1$. Because (\ref{eq:illuminate2}) holds for all slots $t>0$, we can take limits in (\ref{eq:illuminate2}) as $t \rightarrow \infty$ and use (\ref{eq:Aav})-(\ref{eq:muav}) to conclude that $0 \geq a_{av} - b_{av}$. Thus, $a_{av} \leq b_{av}$ is \emph{necessary} for rate stability. The proof for sufficiency in part (a) and the proof of part (b) are not obvious and are developed in Exercises \ref{ex:rate-stable} and \ref{ex:rate-stability-b} of Section \ref{section:exercise}. To prove parts (c) and (d), suppose that $a_{av} \leq b_{av}$. We thus know by part (a) that $Q(t)$ is rate stable. 
The conditions in parts (c) and (d) of this theorem correspond to the conditions given in Theorem \ref{thm:rs-implies-mrs}, and hence $Q(t)$ is mean rate stable. Now suppose that $a_{av} > b_{av}$. It follows by part (b) that: \[ \lim_{t\rightarrow\infty} \frac{Q(t)}{t} = a_{av} - b_{av} \: \: \mbox{ with prob. 1} \] Define $\delta \mbox{\raisebox{-.3ex}{$\overset{\vartriangle}{=}$}} (a_{av} - b_{av})/2$. Note that: \[ \lim_{t\rightarrow\infty} Pr[Q(t)/t > \delta] = 1 \] Therefore: \begin{eqnarray*} \expect{\frac{Q(t)}{t}} &\geq& \expect{\frac{Q(t)}{t}|\frac{Q(t)}{t} > \delta} Pr[Q(t)/t>\delta] \\ &\geq& \delta Pr[Q(t)/t > \delta] \end{eqnarray*} Taking a limit yields: \[ \limsup_{t\rightarrow\infty} \expect{\frac{Q(t)}{t}} \geq \delta \] and hence $Q(t)$ is not mean rate stable. \end{proof} Prior sample path investigations of constant service rate queues are provided in \cite{sample-path-queue}\cite{taha-paper}\cite{path-mazumdar}\cite{queue-mazumdar}, where it is shown that rate stability holds whenever the arrival rate is strictly less than the service rate. Our proof of Theorem \ref{thm:rate-stability}(a) uses a different chain of reasoning (developed in Exercises \ref{ex:rate-stable} and \ref{ex:rate-stability-b} of Section \ref{section:exercise}), applies to queues with more general time varying (and possibly negative) service rates, and also shows that the case $a_{av} = b_{av}$ ensures rate stability, which establishes the simple necessary and sufficient condition $a_{av} \leq b_{av}$. The assumption that $a(t)$ and $b(t)$ have well defined time averages $a_{av}$ and $b_{av}$ is crucial for the result of Theorem \ref{thm:rate-stability}. One might intuitively suspect that if $a(t)$ has a well defined time average $a_{av}$, but $b(t)$ has $\liminf$ and $\limsup$ time averages $b_{av}^{inf}$ and $b_{av}^{sup}$ such that $a_{av} < b_{av}^{inf} < b_{av}^{sup}$, then $Q(t)$ is also rate stable. This is not always true.
Thus, the existence of well defined time averages provides enough structure to ensure queue sample paths are well behaved. The following theorem presents a more general necessary condition for rate stability that does not require the arrival and server processes to have well defined time averages. \begin{thm} \label{thm:gen-nec-rate} (Necessary Condition for Rate Stability) Suppose $Q(t)$ evolves according to (\ref{eq:q-dynamics}), with any general processes $a(t)$ and $b(t)$ such that $a(t) \geq 0$ for all $t$. Then: (a) If $Q(t)$ is rate stable, then: \begin{equation} \label{eq:limsup-rs} \limsup_{t\rightarrow\infty} \frac{1}{t}\sum_{\tau=0}^{t-1} [a(\tau) - b(\tau)] \leq 0 \: \: \mbox{ with probability 1} \end{equation} (b) If $Q(t)$ is mean rate stable and if $\expect{Q(0)} < \infty$, then: \begin{equation} \label{eq:limsup-mrs} \limsup_{t\rightarrow\infty} \frac{1}{t}\sum_{\tau=0}^{t-1} \expect{a(\tau) - b(\tau)} \leq 0 \end{equation} \end{thm} \begin{proof} The proof of (a) follows immediately by taking a $\limsup$ of both sides of (\ref{eq:illuminate2}) and noting that $Q(t)/t \rightarrow 0$ because $Q(t)$ is rate stable. The proof of (b) follows by first taking an expectation of (\ref{eq:illuminate2}) and then taking limits. \end{proof} \section{Stronger Forms of Stability} Rate stability and mean rate stability only describe the long term average rate of arrivals and departures from the queue, and do not say anything about the fraction of time the queue backlog exceeds a certain value, or about the time average expected backlog. The stronger stability definitions given below are thus useful. 
\begin{defn} A discrete time queue $Q(t)$ is \emph{steady state stable} if: \[ \lim_{M\rightarrow\infty} g(M) = 0 \] where for each $M\geq 0$, $g(M)$ is defined: \begin{equation} \label{eq:gm} g(M) \mbox{\raisebox{-.3ex}{$\overset{\vartriangle}{=}$}} \limsup_{t\rightarrow\infty} \frac{1}{t}\sum_{\tau=0}^{t-1} Pr[Q(\tau) > M] \end{equation} \end{defn} \begin{defn} A discrete time queue $Q(t)$ is \emph{strongly stable} if: \begin{equation} \label{eq:strongly-stable} \limsup_{t\rightarrow\infty} \frac{1}{t}\sum_{\tau=0}^{t-1} \expect{Q(\tau)} < \infty \end{equation} \end{defn} For discrete time ergodic Markov chains with countably infinite state space and with the property that, for each real value $M$, the event $\{Q(t) \leq M\}$ corresponds to only a finite number of states, steady state stability implies the existence of a steady state distribution, and strong stability implies finite average backlog and (by Little's theorem \cite{bertsekas-data-nets}) finite average delay. Under mild boundedness assumptions, strong stability implies all of the other forms of stability, as specified in Theorem \ref{thm:strong-stability} below. \begin{thm} \label{thm:strong-stability} (Strong Stability Theorem) Suppose $Q(t)$ evolves according to (\ref{eq:q-dynamics}) for some general stochastic processes $\{a(t)\}_{t=0}^{\infty}$ and $\{b(t)\}_{t=0}^{\infty}$, where $a(t) \geq 0$ for all $t$, and $b(t)$ is real valued for all $t$. Suppose $Q(t)$ is strongly stable. Then: (a) $Q(t)$ is steady state stable. (b) If there is a finite constant $C$ such that either $a(t) + b^{-}(t) \leq C$ with probability 1 for all $t$ (where $b^-(t)$ is defined in (\ref{eq:b-minus})), or $b(t) - a(t) \leq C$ with probability 1 for all $t$, then $Q(t)$ is rate stable, so that $Q(t)/t \rightarrow 0$ with probability $1$. (c) If there is a finite constant $C$ such that either $\expect{a(t) + b^{-}(t)} \leq C$ for all $t$, or $\expect{b(t) - a(t)} \leq C$ for all $t$, then $Q(t)$ is mean rate stable. 
\end{thm} \begin{proof} Part (a) is given in Exercise \ref{ex:strong-stability-implies-steady-state}. Part (c) is given in Appendix B, and part (b) is given in Appendix C. \end{proof} The above theorem shows that, under mild technical assumptions, strong stability implies all three other forms of stability. Theorem \ref{thm:rs-implies-mrs} and Theorem \ref{thm:strong-stability}(c) show that (under mild technical assumptions) rate stability and strong stability both imply mean rate stability. For completeness, the following theorem provides conditions under which steady state stability implies mean rate stability. Collectively, these results can be viewed as showing that strong stability is the \emph{strongest} definition of the four, and mean rate stability is the \emph{weakest} definition of the four. \begin{thm} \label{thm:ss-implies-mrs} Assume $Q(t)$ evolves according to (\ref{eq:q-dynamics}) with $a(t)\geq 0$ and $b(t)$ real valued for all $t$. Suppose that $Q(t)$ is steady state stable, and that there is a finite constant $C$ such that $a(t) + b^-(t) \leq C$ with probability 1 for all $t$. Then $Q(t)$ is mean rate stable. \end{thm} \begin{proof} See Appendix D. \end{proof} \subsection{Sample Path Versions of Stability} One might use a sample-path version of strong stability, saying that a queue is \emph{sample-path strongly stable} if: \[ \limsup_{t\rightarrow\infty} \frac{1}{t}\sum_{\tau=0}^{t-1} Q(\tau) < \infty \: \: \mbox{ with prob. 1} \] A sample-path version of steady-state stability would replace the function $g(M)$ in (\ref{eq:gm}) with a function $h(M)$ defined as follows: \[ h(M) \mbox{\raisebox{-.3ex}{$\overset{\vartriangle}{=}$}} \limsup_{t\rightarrow\infty} \frac{1}{t}\sum_{\tau=0}^{t-1} 1\{Q(\tau)>M\} \] where $1\{Q(\tau)>M\}$ is an indicator function that is $1$ if $Q(\tau)>M$, and zero otherwise. We might say that the queue is \emph{sample-path steady-state stable} if $\lim_{M\rightarrow\infty} h(M) = 0$ with probability 1. 
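As a small computational aside (our own sketch, not part of the formal development), the empirical fraction appearing in the definition of $h(M)$ can be evaluated on any finite sample path:

```python
def h_estimate(Q_path, M):
    """Empirical version of (1/t) * sum_{tau=0}^{t-1} 1{Q(tau) > M}
    over a finite sample path Q_path = [Q(0), ..., Q(t-1)]."""
    t = len(Q_path)
    return sum(1 for q in Q_path if q > M) / t
```

For a sample-path steady-state stable queue, these estimates should decay toward zero as $M$ grows, provided the observed path is long enough.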
These two additional stability definitions are again implied by strong stability if one assumes the system has well defined limits (which is typically the case in systems defined on Markov chains), as shown in Appendix E. \subsection{A Problematic Stability Definition} \label{section:problematic} Finally, one might define another form of stability by requiring: \begin{equation} \label{eq:stronger} \limsup_{t\rightarrow\infty} \expect{Q(t)} < \infty \end{equation} It is clear that if $\expect{Q(t)} < \infty$ for all $t$ and if the above holds, then the time average of $\expect{Q(t)}$ is also finite and so $Q(t)$ is strongly stable. Hence, the condition (\ref{eq:stronger}) can be viewed as being even ``stronger'' than strong stability. Of course, in most systems defined over Markov chains, strong stability is equivalent to (\ref{eq:stronger}). However, we do not consider (\ref{eq:stronger}) in our set of stability definitions for two reasons: \begin{enumerate} \item The strong stability definition that uses a time average is easier to work with, especially in Lyapunov drift arguments \cite{now}. \item The condition (\ref{eq:stronger}) leads to a problematic counterexample if it were used as a definition of stability, as described below. \end{enumerate} To see why the definition (\ref{eq:stronger}) may be problematic, consider a simple discrete time ``Bernoulli/Bernoulli/1'' ($B/B/1$) queue, where arrivals $a(t)$ are i.i.d. over slots with $Pr[a(t) = 1] = \lambda$ and $Pr[a(t) = 0] = 1-\lambda$, and the server process $b(t)$ is independent and i.i.d. over slots with $Pr[b(t) = 1] = \mu$, $Pr[b(t) = 0] = 1-\mu$. 
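As a quick numerical companion to this example (our own sketch, not part of the formal argument; the update rule $Q(t+1) = \max[Q(t)-b(t), 0] + a(t)$ and the parameter values below are illustrative assumptions):

```python
import random

def simulate_bb1(lam, mu, T, seed=0):
    """Simulate Q(t+1) = max[Q(t) - b(t), 0] + a(t) with Bernoulli(lam)
    arrivals a(t) and an independent Bernoulli(mu) server process b(t)."""
    rng = random.Random(seed)
    Q, backlog_sum = 0, 0
    for _ in range(T):
        a = 1 if rng.random() < lam else 0
        b = 1 if rng.random() < mu else 0
        Q = max(Q - b, 0) + a
        backlog_sum += Q
    return Q, backlog_sum / T   # final backlog Q(T), time average backlog
```

With `lam = 0.3` and `mu = 0.5`, the time average backlog settles near a constant and $Q(T)/T$ is negligible, consistent with strong stability and rate stability when $\lambda < \mu$.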
When $\lambda < \mu$, it is easy to write the ergodic birth-death Markov chain for this system, and one can easily derive that the Markov chain has a well defined steady state, steady state probabilities decay geometrically, and average queue backlog and delay satisfy: \[ \overline{Q} = \frac{\lambda(1-\lambda)}{\mu-\lambda} \: \: , \: \: \overline{W} = \frac{1-\lambda}{\mu-\lambda} \] Thus, if $\lambda < \mu$, a good definition of stability would say that this system is stable. Now suppose we take one particular sample path realization of the $B/B/1$ queue, one for which time averages converge to the steady state values (which happens with probability 1). However, treat this sample path as given, so that all events are now \emph{deterministic}. Thus, $a(t)$ and $b(t)$ are now deterministic functions. Because we have not changed the actual sample path, a good definition should also say this deterministic variant is stable. In this deterministic case, we have $\expect{Q(t)} = Q(t)$ for all $t$, and so the expectation can grow arbitrarily large (since a $B/B/1$ queue can grow arbitrarily large). Hence, for this deterministic example: \[ \limsup_{t\rightarrow\infty} \expect{Q(t)} = \infty \] Thus, if we used (\ref{eq:stronger}) as a definition of stability, the random $B/B/1$ queue would be stable, but the deterministic version would not! Another way of saying this is that the original $B/B/1$ queue is stable, but if we condition on knowing all future events, it becomes unstable! Because our definitions of rate stability, mean rate stability, sample path stability, and strong stability incorporate time averages, these four forms of stability all say that both the random $B/B/1$ queue and its deterministic counterpart are stable. Hence, the problem does not arise for this example in any of these four definitions. Another problematic stability definition says that a queue is ``empty-stable'' if it empties infinitely often. 
A discussion of why this is problematic is provided in \cite{neely-thesis}. \section{Counter-Examples} \label{section:counterexamples} Here we provide counter-examples that show what can happen if the boundedness assumptions of Theorems \ref{thm:rs-implies-mrs} and \ref{thm:strong-stability} are violated. \subsection{Rate Stability Does Not Imply Mean Rate Stability} \label{subsection:counterexamples-rate-mean} Let $T$ be an integer random variable with a geometric distribution, such that $Pr[T>t] = 1/2^t$ for $t \in \{0, 1, 2, \ldots\}$. Define $Q(t)$ over $t \in \{0, 1, 2, \ldots\}$ as follows: \[ Q(t) = \left\{ \begin{array}{ll} 2^{(2t)} &\mbox{ if $t < T$} \\ 0 & \mbox{ otherwise} \end{array} \right.\] It follows that $Q(t)/t \rightarrow 0$ with probability 1 (as eventually $t$ becomes larger than the random variable $T$). However: \[ \expect{Q(t)} = 2^{(2t)} Pr[t < T] = 2^{(2t)} 2^{-t} = 2^{t} \] Therefore: \[ \lim_{t\rightarrow\infty} \expect{Q(t)}/t = \lim_{t\rightarrow\infty} 2^t/t = \infty \] and hence $Q(t)$ is not mean rate stable. \subsection{Mean Rate Stability Does Not Imply Rate Stability} Suppose that $Q(0)=0$, and that $Q(t)$ has independent values every slot $t$ for $t \in \{1, 2, 3, \ldots\}$, so that: \[ Q(t) = \left\{ \begin{array}{ll} t &\mbox{ with probability $1/t$} \\ 0 & \mbox{ with probability $1- 1/t$} \end{array} \right.\] Thus, for any time $t>0$ we have: \[ \expect{Q(t)} = t/t = 1 \] It follows that $\expect{Q(t)}/t \rightarrow 0$, and so $Q(t)$ is mean rate stable. However, clearly $Q(t)/t = 1$ for any time $t$ such that $Q(t)>0$. Because the probabilities of the independent events at which $Q(t)>0$ decay very slowly, it can be shown that there are an infinite number of times $t_n$ for which $Q(t_n) > 0$. 
Indeed, we have for any time $t>0$: \[ Pr[\mbox{$Q(\tau) = 0$ for all $\tau\geq t$}] = \prod_{\tau=t}^{\infty} (1-1/\tau) = 0 \] The infinite product can be shown to be zero for any $t>0$ by taking a $\log(\cdot)$ and showing that the resulting infinite sum is equal to $-\infty$. Therefore: \[ \limsup_{t\rightarrow\infty} \frac{Q(t)}{t} = 1 \: \: \mbox{ with probability 1} \] and so the system is not rate stable. \subsection{Strong Stability Neither Implies Mean Rate Stability Nor Rate Stability} Suppose that $Q(t) = t$ for $t \in \{1, 2, 4, 8, \ldots, 2^n, \ldots\}$ (i.e., for all timeslots $t$ that are powers of $2$). Suppose that $Q(t) = 0$ at all slots $t$ that are not powers of 2. Because this process is deterministic, we have $\expect{Q(t)} = Q(t)$, and for all $n \in \{0, 1, 2, \ldots\}$ we have: \[ \frac{1}{2^n+ 1} \sum_{\tau=0}^{2^n} Q(\tau) = \frac{1 + 2 + \ldots + 2^n}{2^n+1} = \frac{2^{n+1} - 1}{2^n + 1} \] The right hand side of the above expression converges to $2$ as $n \rightarrow \infty$. It can be shown that $\frac{1}{t}\sum_{\tau=0}^{t-1} Q(\tau)$ is the largest when sampled at the times in the above expression, and hence: \[ \limsup_{t\rightarrow\infty} \frac{1}{t}\sum_{\tau=0}^{t-1} Q(\tau) = 2 \] Because $\expect{Q(\tau)} = Q(\tau)$ for all $\tau$, it follows that: \[ \limsup_{t\rightarrow\infty}\frac{1}{t}\sum_{\tau=0}^{t-1}\expect{Q(\tau)} = 2 \] Therefore, $Q(t)$ is strongly stable. However, $Q(2^n)/2^n = 1$ for all $n \in \{1, 2, \ldots\}$, and so $Q(t)$ is not rate stable or mean rate stable (rate stability and mean rate stability are equivalent when $Q(t)$ is deterministic). Note in this case that increases or decreases in $Q(t)$ can be arbitrarily large, and hence this example does not satisfy the boundedness assumptions required in Theorem \ref{thm:strong-stability}. \section{Network Scheduling} \label{section:network} Consider now the following multi-queue network model defined over discrete time $t\in\{0, 1, 2, \ldots\}$. 
There are $K$ queues $\bv{Q}(t) = (Q_1(t), \ldots, Q_K(t))$, with dynamics for all $k \in \{1, \ldots, K\}$: \begin{equation} \label{eq:tilde-q} Q_k(t+1) = Q_k(t) - \tilde{b}_k(t) + \tilde{y}_k(t) + a_k(t) \end{equation} where $a_k(t)$ represents new exogenous arrivals to queue $k$ on slot $t$, $\tilde{b}_k(t)$ represents the actual amount served on slot $t$, and $\tilde{y}_k(t)$ represents additional arrivals. The additional arrivals $\tilde{y}_k(t)$ may be due to flow control operations (in which case we might have $a_k(t) = 0$ so that all new arrivals are first passed through the flow control mechanism). They might also be due to endogenous arrivals from other queues, which allows treatment of multi-hop networks. We assume the queues are always non-negative, as are $\tilde{b}_k(t)$, $\tilde{y}_k(t)$, $a_k(t)$, and that the $\tilde{b}_k(t)$ and $\tilde{y}_k(t)$ values respect the amount of data that is actually in each queue (not serving more or delivering more than the amount transferred over the channel). It is also useful to assume transmission decisions can be made independently of queue backlog, and so we also define $b_k(t)$ and $y_k(t)$ for queue dynamics: \begin{equation} \label{eq:multi-q-dynamics} Q_k(t+1) \leq \max[Q_k(t) - b_k(t), 0] + y_k(t) + a_k(t) \end{equation} The inequality is due to the fact that the amount of actual new endogenous arrivals $\tilde{y}_k(t)$, being a sum of service values in other queues that transmit to queue $k$, may not be as large as $y_k(t)$ if these other queues have little or no backlog to transmit. These values satisfy: \begin{eqnarray*} 0 \leq \tilde{y}_k(t) \leq y_k(t) & \forall k \in \{1, \ldots, K\}, \forall t \\ 0 \leq \tilde{b}_k(t) \leq b_k(t) & \forall k \in \{1, \ldots, K\}, \forall t \end{eqnarray*} This model is similar to that given in \cite{now}\cite{neely-power-network-jsac}\cite{neely-thesis}, and the capacity region that we develop is also similar to prior work there. 
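The right hand side of the one-slot bound (\ref{eq:multi-q-dynamics}) is simple to compute; the following sketch (our own illustration) evaluates it componentwise for length-$K$ lists of backlogs, service, endogenous arrivals, and exogenous arrivals:

```python
def queue_bound_update(Q, b, y, a):
    """Right hand side of the one-slot bound: max[Q_k - b_k, 0] + y_k + a_k,
    applied componentwise (all inputs are length-K lists, assumed valid)."""
    return [max(Qk - bk, 0) + yk + ak for Qk, bk, yk, ak in zip(Q, b, y, a)]
```

For policies that respect queue backlog, the bound holds with equality, so this same expression gives the actual next-slot backlog.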
The network has a time varying \emph{network state} $\omega(t)$, possibly being a vector of channel conditions and/or additional random arrivals for slot $t$. A \emph{network control action} $\alpha(t)$ is chosen in reaction to the observed network state $\omega(t)$ on slot $t$ (and possibly also in reaction to other network information, such as queue backlogs), and takes values in some abstract set $\script{A}_{\omega(t)}$ that possibly depends on $\omega(t)$. The $\omega(t)$ and $\alpha(t)$ values for slot $t$ affect arrivals and service by: \begin{eqnarray*} y_k(t) &=& \hat{y}_k(\alpha(t), \omega(t)) \: \: \forall k \in \{1, \ldots, K\} \\ b_k(t) &=& \hat{b}_k(\alpha(t), \omega(t)) \: \: \forall k \in \{1, \ldots, K\} \end{eqnarray*} where $\hat{y}_k(\alpha(t), \omega(t))$ and $\hat{b}_k(\alpha(t), \omega(t))$ are general functions of $\alpha(t)$ and $\omega(t)$ (possibly non-convex and discontinuous). The $\omega(t)$ and $\alpha(t)$ values also affect an \emph{attribute vector} $\bv{x}(t) = (x_1(t), \ldots, x_M(t))$ for slot $t$, which can represent additional penalties or rewards associated with the network states and control actions (such as power expenditures, packet admissions, packet drops, etc.). The components $x_m(t)$ can possibly be negative, and are general functions of $\alpha(t)$ and $\omega(t)$: \[ x_m(t) = \hat{x}_m(\alpha(t), \omega(t)) \] \subsection{Network Assumptions} We assume that exogenous arrivals $a_k(t)$ satisfy: \begin{equation} \label{eq:lambda-k-assumption} \lim_{t\rightarrow\infty} \frac{1}{t}\sum_{\tau=0}^{t-1} \expect{a_k(\tau)} = \lambda_k \: \: \forall k \in \{1, \ldots, K\} \end{equation} where we call $\lambda_k$ the arrival rate for queue $k$. We assume the network state $\omega(t)$ is \emph{stationary} with a well defined stationary distribution $\pi(\omega)$. 
In the case when there are only a finite or countably infinite number of network states, given by a set $\Omega$, then $\pi(\omega)$ represents a probability mass function and by stationarity we have: \[ \pi(\omega) = Pr[\omega(t) = \omega] \: \: \forall \omega \in \Omega, \forall t \in \{0, 1, 2, \ldots\} \] In the case when $\Omega$ is possibly uncountably infinite, then we assume $\omega(t)$ is a random vector and $\pi(\omega)$ represents a probability density function. The simplest model is when $\omega(t)$ is i.i.d. over slots, although the stationary assumption does not require independence over slots. We further assume that the control decision $\alpha(t) \in \script{A}_{\omega(t)}$ can always be chosen to respect the backlog constraints, and that any algorithm that does not respect the backlog constraints can be modified to respect the backlog constraints without hindering performance. This can be done simply by never attempting to transmit more data than we have, so that for all $k \in \{1, \ldots, K\}$ we have: \begin{eqnarray} \hat{b}_k(\alpha(t), \omega(t)) = b_k(t) = \tilde{b}_k(t) \label{eq:qc1} \\ \hat{y}_k(\alpha(t), \omega(t)) = y_k(t) = \tilde{y}_k(t) \label{eq:qc2} \end{eqnarray} We define a policy $\alpha(t)$ that satisfies (\ref{eq:qc1})-(\ref{eq:qc2}) to be a \emph{policy that respects queue backlog}. 
It is clear that the queue dynamics (\ref{eq:multi-q-dynamics}) under such a policy become: \begin{equation} \label{eq:qc-dynamics} Q_k(t+1) = Q_k(t) - b_k(t) + y_k(t) + a_k(t) \: \: \forall k \in \{1, \ldots, K\}, \forall t \end{equation} \subsection{The Optimization Problem} Let $f(\bv{x})$ and $g_1(\bv{x}), g_2(\bv{x}), \ldots, g_L(\bv{x})$ be real-valued, continuous, and convex functions of $\bv{x} \in \mathbb{R}^M$ for some non-negative integer $L$ (if $L=0$ then there are no $g_l(\bv{x})$ functions).\footnote{These functions might be defined over only a suitable subset of $\mathbb{R}^M$, such as the set of all non-negative vectors.} Suppose we want to design a control algorithm that chooses $\alpha(t) \in \script{A}_{\omega(t)}$ over slots $t$ that solves the following general stochastic network optimization problem: \begin{eqnarray} \hspace{-.3in}\mbox{Minimize:} &&\limsup_{t\rightarrow\infty} f(\overline{\bv{x}}(t)) \label{eq:opt1} \\ \hspace{-.3in}\mbox{Subject to:} &1)& \limsup_{t\rightarrow\infty} g_l(\overline{\bv{x}}(t)) \leq 0 \: \: \forall l \in \{1, \ldots, L\} \label{eq:opt2} \\ \hspace{-.3in}&2)& \alpha(t) \in \script{A}_{\omega(t)} \: \: \forall t \in \{0, 1, 2, \ldots\} \label{eq:opt2-point-5} \\ \hspace{-.3in}&3)& \mbox{All queues $Q_k(t)$ are mean rate stable} \label{eq:opt3} \end{eqnarray} where $\overline{\bv{x}}(t)$ is defined for $t>0$ by: \[ \overline{\bv{x}}(t) \mbox{\raisebox{-.3ex}{$\overset{\vartriangle}{=}$}} \frac{1}{t}\sum_{\tau=0}^{t-1} \expect{\bv{x}(\tau)} \] We say that the problem is \emph{feasible} if there exists a control algorithm that satisfies the constraints (\ref{eq:opt2})-(\ref{eq:opt3}). Assuming the problem is feasible, we define $f^{opt}$ as the infimum cost in (\ref{eq:opt1}) over all possible feasible policies that respect queue backlog. 
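For intuition, the time average vector $\overline{\bv{x}}(t)$ and the values $f(\overline{\bv{x}}(t))$ and $g_l(\overline{\bv{x}}(t))$ can be estimated from a finite record of attribute vectors as in the sketch below (our own illustration; in the problem statement the average is over expectations $\expect{\bv{x}(\tau)}$ rather than a single realization):

```python
def empirical_objective(x_samples, f, g_list):
    """Compute x_bar(t) = (1/t) * sum of the attribute vectors observed so
    far, then evaluate the objective f and each constraint g_l at x_bar(t)."""
    t = len(x_samples)
    M = len(x_samples[0])
    x_bar = [sum(x[m] for x in x_samples) / t for m in range(M)]
    return f(x_bar), [g(x_bar) for g in g_list]
```

The constraints (\ref{eq:opt2}) then amount to asking that each returned $g_l$ value be (asymptotically) non-positive as $t$ grows.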
We define an \emph{$\omega$-only policy} as a policy that observes $\omega(t)$ and makes a decision $\alpha(t) \in \script{A}_{\omega(t)}$ as a stationary and random function only of $\omega(t)$ (regardless of queue backlog, and hence not necessarily respecting the backlog constraints (\ref{eq:qc1})-(\ref{eq:qc2})). By stationarity of $\omega(t)$, it follows that the expected values of $b_k(t)$, $y_k(t)$, and $x_m(t)$ are the same on each slot under a particular $\omega$-only policy $\alpha^*(t) \in \script{A}_{\omega(t)}$: \begin{eqnarray*} \overline{b}_k &=& \expect{\hat{b}_k(\alpha^*(t), \omega(t))} \\ \overline{y}_k &=& \expect{\hat{y}_k(\alpha^*(t), \omega(t))} \\ \overline{x}_m &=& \expect{\hat{x}_m(\alpha^*(t), \omega(t))} \end{eqnarray*} where the expectation above is with respect to the stationary distribution $\pi(\omega)$ and the possibly randomized actions $\alpha^*(t)$. The next theorem characterizes all possible feasible algorithms (including algorithms that are not $\omega$-only) in terms of $\omega$-only algorithms. \begin{thm} \label{thm:omega-only} Suppose the problem (\ref{eq:opt1})-(\ref{eq:opt3}) is feasible with infimum cost $f^{opt}$, assumed to be achievable arbitrarily closely by policies that respect the backlog constraints (\ref{eq:qc1})-(\ref{eq:qc2}). Then for all $\epsilon>0$ there exists an $\omega$-only algorithm $\alpha^*(t)$ that satisfies the following for all $k \in \{1, \ldots, K\}$ and $l \in \{1, \ldots, L\}$: \begin{eqnarray} g_l(\expect{\hat{\bv{x}}(\alpha^*(t), \omega(t))}) \leq \epsilon \label{eq:oo-1} \\ \lambda_k + \expect{\hat{y}_k(\alpha^*(t), \omega(t)) - \hat{b}_k(\alpha^*(t), \omega(t))} \leq \epsilon \label{eq:oo-2} \\ f(\expect{\hat{\bv{x}}(\alpha^*(t), \omega(t))}) \leq f^{opt} + \epsilon \label{eq:oo-3} \end{eqnarray} \end{thm} Before proving Theorem \ref{thm:omega-only}, we present a preliminary lemma. 
\begin{lem} \label{lem:omega-only} For any algorithm that chooses $\alpha(\tau) \in \script{A}_{\omega(\tau)}$ over slots $\tau \in \{0, 1, 2, \ldots\}$, and for any slot $t>0$, there exists an $\omega$-only policy $\alpha^*(t)$ that yields the following for all $k \in \{1, \ldots, K\}$, $m \in \{1, \ldots, M\}$: \begin{eqnarray*} \frac{1}{t}\sum_{\tau=0}^{t-1} \expect{\hat{b}_k(\alpha(\tau), \omega(\tau))} &=& \expect{\hat{b}_k(\alpha^*(t), \omega(t))} \\ \frac{1}{t}\sum_{\tau=0}^{t-1} \expect{\hat{y}_k(\alpha(\tau), \omega(\tau))} &=& \expect{\hat{y}_k(\alpha^*(t), \omega(t))} \\ \frac{1}{t}\sum_{\tau=0}^{t-1} \expect{\hat{x}_m(\alpha(\tau), \omega(\tau))} &=& \expect{\hat{x}_m(\alpha^*(t), \omega(t))} \end{eqnarray*} \end{lem} \begin{proof} (Lemma \ref{lem:omega-only}) For a given slot $t>0$, run the $\alpha(\tau)$ policy and generate random quantities $[\tilde{\omega}, \tilde{\alpha}]$ as follows: Uniformly pick a time $T \in \{0, 1, \ldots, t-1\}$, and define $[\tilde{\omega}, \tilde{\alpha}] \mbox{\raisebox{-.3ex}{$\overset{\vartriangle}{=}$}} [\omega(T), \alpha(T)]$, being the network state observed at the randomly chosen time $T$ and the corresponding network action $\alpha(T)$. Because $\omega(\tau)$ is stationary, it follows that $\tilde{\omega}$ has the stationary distribution $\pi(\omega)$. Now define the $\omega$-only policy $\alpha^*(t)$ to choose $\alpha^*(t) \in \script{A}_{\omega(t)}$ according to the conditional distribution of $\tilde{\alpha}$ given $\tilde{\omega}$ (generated from the joint distribution of $[\tilde{\omega}, \tilde{\alpha}]$). It follows that the expectations of $\hat{b}_k(\cdot)$, $\hat{y}_k(\cdot)$, $\hat{x}_m(\cdot)$ under this $\omega$-only policy $\alpha^*(t)$ are as given in the statement of the lemma. 
\end{proof} \begin{proof} (Theorem \ref{thm:omega-only}) Fix $\epsilon>0$, and suppose $\alpha(t)$ is a policy that respects the queue backlog constraints (\ref{eq:qc1})-(\ref{eq:qc2}), satisfies the feasibility constraints (\ref{eq:opt2})-(\ref{eq:opt3}), and such that: \[ \limsup_{t\rightarrow\infty} f(\overline{\bv{x}}(t)) \leq f^{opt} + \epsilon/2 \] Then $\expect{Q_k(t)/t}\rightarrow 0$ for all $k \in \{1, \ldots, K\}$, and there is a slot $t^*>0$ such that: \begin{eqnarray} \expect{Q_k(t^*)/t^*} \leq \epsilon & \forall k \in \{1, \ldots, K\} \label{eq:omega-only-foo} \\ g_l(\overline{\bv{x}}(t^*)) \leq \epsilon & \forall l \in \{1, \ldots, L\} \label{eq:omega-only-g} \\ \frac{1}{t^*}\sum_{\tau=0}^{t^*-1}\expect{a_k(\tau)} \geq \lambda_k - \epsilon & \forall k \in \{1, \ldots, K\} \label{eq:omega-only-lambda} \\ f(\overline{\bv{x}}(t^*)) \leq f^{opt} + \epsilon \label{eq:omega-only-f} \end{eqnarray} where (\ref{eq:omega-only-lambda}) holds by (\ref{eq:lambda-k-assumption}). By (\ref{eq:qc-dynamics}) we have for all $k \in \{1, \ldots, K\}$: \begin{eqnarray*} \sum_{\tau=0}^{t^*-1} [a_k(\tau) + y_k(\tau) - b_k(\tau)] &=& Q_k(t^*) - Q_k(0) \\ &\leq& Q_k(t^*) \end{eqnarray*} Taking expectations, dividing by $t^*$, and using (\ref{eq:omega-only-foo}) yields: \begin{eqnarray} \frac{1}{t^*}\sum_{\tau=0}^{t^*-1}\expect{a_k(\tau) + y_k(\tau) - b_k(\tau)} \leq \epsilon \label{eq:plug-t-star} \end{eqnarray} The above holds for all $k \in \{1, \ldots, K\}$. Using Lemma \ref{lem:omega-only}, we know there must be an $\omega$-only policy $\alpha^*(t)$ that satisfies for all $l \in \{1, \ldots, L\}$ and all $k \in \{1, \ldots, K\}$: \begin{eqnarray*} g_l(\expect{\hat{\bv{x}}(\alpha^*(t), \omega(t))}) &\leq& \epsilon \\ \lambda_k + \expect{\hat{y}_k(\alpha^*(t), \omega(t)) - \hat{b}_k(\alpha^*(t), \omega(t))} &\leq& 2\epsilon \\ f(\expect{\hat{\bv{x}}(\alpha^*(t), \omega(t))}) &\leq& f^{opt} + \epsilon \end{eqnarray*} Redefining $\epsilon' = 2\epsilon$ proves the result. 
\end{proof} We note that if $\omega(t)$ is defined by a periodic Markov chain, then it can be made stationary by randomizing over the period. In the case when $\omega(t)$ takes values in a finite set $\Omega$, we can prove the same result without the stationary assumption \cite{neely-power-network-jsac}\cite{neely-thesis}. \section{The Capacity Region} Now define $\Lambda$ as the set of all non-negative rate vectors $\bv{\lambda} = (\lambda_1, \ldots, \lambda_K)$ such that for all $\epsilon>0$, there exists an $\omega$-only policy such that the constraints (\ref{eq:oo-1})-(\ref{eq:oo-2}) of Theorem \ref{thm:omega-only} hold. It can be shown that this set $\Lambda$ is a closed set. By Theorem \ref{thm:omega-only}, we know that $\bv{\lambda} \in \Lambda$ is \emph{necessary} for the existence of an algorithm that makes all queues $Q_k(t)$ mean rate stable and that satisfies the $g_l(\cdot)$ constraints (\ref{eq:opt2}). Because, under some mild technical assumptions, mean rate stability is the \emph{weakest} form of stability, it follows that the constraint $\bv{\lambda} \in \Lambda$ is \emph{also} a necessary condition for stabilizing the network (subject to the $g_l(\cdot)$ constraints) under either rate stability, steady state stability, or strong stability. We now show that the set $\Lambda$ is the \emph{network capacity region}, in the sense that, under some mild additional assumptions on the processes $a_k(t)$ and $\omega(t)$, it is possible to make all queues $Q_k(t)$ \emph{strongly stable} whenever $\bv{\lambda}$ is an \emph{interior point} of $\Lambda$. Because the technical assumptions we introduce will also imply that strong stability is the strongest stability definition, it follows that the same algorithm that makes all queues $Q_k(t)$ strongly stable also makes them rate stable, mean rate stable, and steady state stable. 
For simplicity of exposition, we treat the case when the functions $f(\bv{x})$, $g_l(\bv{x})$ are \emph{linear or affine} (the case of convex functions is treated in \cite{now}\cite{neely-thesis}\cite{neely-fairness-infocom05}\cite{neely-universal-scheduling}). Note that $\expect{f(\bv{X})} = f(\expect{\bv{X}})$ for any linear or affine function and any random vector $\bv{X}$. \subsection{The Decaying Memory Property} \label{section:decaying-memory} Suppose that $\omega(t)$ has stationary distribution $\pi(\omega)$ as before, and that arrival processes $a_k(t)$ have rates $\lambda_k$ that satisfy (\ref{eq:lambda-k-assumption}). Define $H(t)$ as the \emph{history} of the system over slots $\tau \in \{0, 1, \ldots, t-1\}$, consisting of the initial queue states $Q_k(0)$ and all $\omega(\tau)$, $\alpha(\tau)$ values over this interval. We say that the processes $a_k(t)$ and $\omega(t)$ together with the functions $\hat{b}_k(\cdot)$, $\hat{y}_k(\cdot)$, $g_l(\hat{\bv{x}}(\cdot))$, have the \emph{decaying memory property} if for any $\omega$-only policy $\alpha^*(t)$ and any $\delta>0$, there is an integer $T>0$ (which may depend on $\delta$ and $\alpha^*(t)$) such that for all slots $t_0\geq 0$, all possible values of $H(t_0)$, and all $k \in \{1, \ldots, K\}$, $l \in \{1, \ldots, L\}$ we have: \begin{eqnarray} \frac{1}{T}\sum_{\tau=t_0}^{t_0+T-1} \expect{a_k(\tau) + \hat{y}_k(\alpha^*(\tau), \omega(\tau))|H(t_0)} \nonumber \\ - \frac{1}{T}\sum_{\tau=t_0}^{t_0+T-1} \expect{\hat{b}_k(\alpha^*(\tau), \omega(\tau))|H(t_0) } \leq \lambda_k \nonumber \\ + \mathbb{E}_{\pi}\left\{\hat{y}_k(\alpha^*(t), \omega(t)) - \hat{b}_k(\alpha^*(t), \omega(t)) \right\} + \delta \label{eq:dm1} \\ \frac{1}{T}\sum_{\tau=t_0}^{t_0+T-1}\expect{g_l(\hat{\bv{x}}(\alpha^*(\tau), \omega(\tau)))|H(t_0)} \nonumber \\ - \mathbb{E}_{\pi}\left\{ g_l(\hat{\bv{x}}(\alpha^*(t), \omega(t))) \right\}\leq \delta \label{eq:dm2} \\ \frac{1}{T}\sum_{\tau=t_0}^{t_0+T-1}\expect{f(\hat{\bv{x}}(\alpha^*(\tau), 
\omega(\tau)))|H(t_0)} \nonumber \\ - \mathbb{E}_{\pi}\left\{f(\hat{\bv{x}}(\alpha^*(t), \omega(t))) \right\}\leq \delta \label{eq:dm3} \end{eqnarray} where $\mathbb{E}_{\pi}\left\{\cdot\right\}$ represents an expectation over the stationary distribution $\pi(\omega)$ for $\omega(t)$. Intuitively, the decaying memory property says that the effects of past history decay over $T$ slots, so that all conditional time average expectations over this interval are within $\delta$ of their stationary values. This property can be shown to hold when $\omega(t)$ and $a_k(t)$ are driven by a finite state irreducible (possibly not aperiodic) Markov chain, and when the conditional expectation of all processes is finite given the current state. \subsection{Second Moment Boundedness Assumptions} \label{section:second-moment} We assume that for all $t$ and all (possibly randomized) control actions $\alpha(t) \in \script{A}_{\omega(t)}$, the second moments of the processes are bounded, so that there is a finite constant $\sigma^2$ such that: \begin{eqnarray*} \expect{\hat{y}_k(\alpha(t), \omega(t))^2} &\leq& \sigma^2 \\ \expect{\hat{b}_k(\alpha(t), \omega(t))^2} &\leq& \sigma^2 \\ \expect{g_l(\hat{\bv{x}}(\alpha(t), \omega(t)))^2} &\leq& \sigma^2 \\ \expect{a_k(t)^2} &\leq& \sigma^2 \end{eqnarray*} Note that these second moment assumptions also ensure first moments are bounded. Finally, we assume the first moment of $f(\bv{x}(t))$ is bounded by finite constants $f_{min}$ and $f_{max}$, so that for all (possibly randomized) control actions $\alpha(t) \in \script{A}_{\omega(t)}$ we have: \begin{equation} \label{eq:f-bounded} f_{min} \leq \expect{f(\hat{\bv{x}}(\alpha(t), \omega(t)))} \leq f_{max} \end{equation} \subsection{Lyapunov Drift} We use the framework of \cite{now}\cite{neely-thesis}\cite{neely-energy-it} to design a policy to solve the optimization problem (\ref{eq:opt1})-(\ref{eq:opt3}). 
To this end, for each inequality constraint (\ref{eq:opt2}) define a \emph{virtual queue} $Z_l(t)$ that is initially empty and that has update equation: \begin{equation} \label{eq:z-dynamics} Z_l(t+1) = \max[Z_l(t) + g_l(\bv{x}(t)), 0] \end{equation} where $\bv{x}(t) = \hat{\bv{x}}(\alpha(t), \omega(t))$. The actual queues $Q_k(t)$ are assumed to satisfy (\ref{eq:multi-q-dynamics}). Define $\bv{\Theta}(t) \mbox{\raisebox{-.3ex}{$\overset{\vartriangle}{=}$}} [\bv{Q}(t), \bv{Z}(t)]$ as a composite vector of all actual and virtual queues. Define a \emph{Lyapunov function} $L(\bv{\Theta}(t))$ as follows: \[ L(\bv{\Theta}(t)) \mbox{\raisebox{-.3ex}{$\overset{\vartriangle}{=}$}} \frac{1}{2}\sum_{k=1}^KQ_k(t)^2 + \frac{1}{2}\sum_{l=1}^LZ_l(t)^2 \] For a given integer $T>0$, define the \emph{$T$-step conditional Lyapunov drift} $\Delta_T(\bv{\Theta}(t))$ as follows:\footnote{Strictly speaking, better notation is $\Delta_T(\bv{\Theta}(t), t)$, although we use the simpler notation $\Delta_T(\bv{\Theta}(t))$ as a formal representation of the right hand side of (\ref{eq:drift-def}).} \begin{equation}\label{eq:drift-def} \Delta_T(\bv{\Theta}(t)) \mbox{\raisebox{-.3ex}{$\overset{\vartriangle}{=}$}} \expect{L(\bv{\Theta}(t+T)) - L(\bv{\Theta}(t))|\bv{\Theta}(t)} \end{equation} \begin{lem} \label{lem:driftyy} For any control policy and for any parameter $V\geq0$, $\Delta_T(\bv{\Theta}(t))$ satisfies: \begin{eqnarray} && \Delta_T(\bv{\Theta}(t)) + V\sum_{\tau=t}^{t+T-1}\expect{f(\bv{x}(\tau))|\bv{\Theta}(t)} \leq \nonumber \\ && T^2\expect{\hat{B}|\bv{\Theta}(t)} + V\sum_{\tau=t}^{t+T-1}\expect{f(\bv{x}(\tau))|\bv{\Theta}(t)} \nonumber \\ && + \sum_{k=1}^K\sum_{\tau=t}^{t+T-1}\expect{Q_k(t)[a_k(\tau) + y_k(\tau) - b_k(\tau)]|\bv{\Theta}(t)}\nonumber \\ && + \sum_{l=1}^L\sum_{\tau=t}^{t+T-1}\expect{Z_l(t)g_l(\bv{x}(\tau))|\bv{\Theta}(t)} \label{eq:drift} \end{eqnarray} where $\hat{B}$ is a random variable that satisfies: \[ \expect{\hat{B}} \leq B \] where $B$ is a finite constant 
that depends on the worst case second moment bounds of the $a_k(\tau)$, $b_k(\tau)$, $y_k(\tau)$, $g_l(\bv{x}(\tau))$ processes, as described in more detail in the proof. \end{lem} \begin{proof} See Appendix G. \end{proof} The parameter $V>0$ will determine a performance-backlog tradeoff, as in \cite{neely-thesis}\cite{now}\cite{neely-energy-it}\cite{neely-fairness-infocom05}. \subsection{The Drift-Plus-Penalty Algorithm} Consider now the following algorithm, defined in terms of given positive parameters $C>0$ and $V>0$. Every slot $t$, observe the current $\omega(t)$ and the current actual and virtual queues $Q_k(t)$, $Z_l(t)$, and choose a control action $\alpha(t) \in \script{A}_{\omega(t)}$ that comes within $C$ of minimizing the following expression: \begin{eqnarray*} Vf(\hat{\bv{x}}(\alpha(t), \omega(t))) + \sum_{l=1}^LZ_l(t)g_l(\hat{\bv{x}}(\alpha(t), \omega(t))) \\ + \sum_{k=1}^KQ_k(t)[\hat{y}_k(\alpha(t), \omega(t)) - \hat{b}_k(\alpha(t), \omega(t))] \end{eqnarray*} This algorithm is designed to come within an additive constant $C$ of minimizing the right hand side of (\ref{eq:drift}) over all actions $\alpha(t) \in \script{A}_{\omega(t)}$ that can be made, given the current queue states $\bv{\Theta}(t)$. We call such a policy a \emph{$C$-approximation}. Note that a $0$-approximation is one that achieves the exact infimum on the right hand side of (\ref{eq:drift}). The notion of a $C$-approximation is introduced in case the infimum cannot be achieved, or when the infimum is difficult to achieve exactly. \begin{lem} \label{lem:drift2} Suppose we use a $C$-approximation every slot. 
Then for any time $t$, any integer $T>0$, and any $\bv{\Theta}(t)$, we have: \begin{eqnarray} && \Delta_T(\bv{\Theta}(t)) + V\sum_{\tau=t}^{t+T-1}\expect{f(\bv{x}(\tau))|\bv{\Theta}(t)} \leq \nonumber \\ && CT + T^2\expect{\hat{B}|\bv{\Theta}(t)} \nonumber \\ && + T(T-1)\expect{\hat{D}|\bv{\Theta}(t)} + V\sum_{\tau=t}^{t+T-1}\expect{f(\bv{x}^*(\tau))|\bv{\Theta}(t)} \nonumber \\ && + \sum_{k=1}^KQ_k(t)\sum_{\tau=t}^{t+T-1}\expect{a_k(\tau) + y_k^*(\tau) - b_k^*(\tau)|\bv{\Theta}(t)}\nonumber \\ && + \sum_{l=1}^LZ_l(t)\sum_{\tau=t}^{t+T-1}\expect{g_l(\bv{x}^*(\tau))|\bv{\Theta}(t)} \label{eq:drift2} \end{eqnarray} where $\bv{x}^*(\tau)$, $y_k^*(\tau)$, $b_k^*(\tau)$ are values that correspond to any alternative policy for choosing $\alpha^*(\tau) \in \script{A}_{\omega(\tau)}$: \begin{eqnarray*} \bv{x}^*(\tau) &\mbox{\raisebox{-.3ex}{$\overset{\vartriangle}{=}$}}& \hat{\bv{x}}(\alpha^*(\tau), \omega(\tau)) \\ y_k^*(\tau) &\mbox{\raisebox{-.3ex}{$\overset{\vartriangle}{=}$}}& \hat{y}_k(\alpha^*(\tau), \omega(\tau)) \\ b_k^*(\tau) &\mbox{\raisebox{-.3ex}{$\overset{\vartriangle}{=}$}}& \hat{b}_k(\alpha^*(\tau), \omega(\tau)) \end{eqnarray*} and where $\hat{D}$ is a random variable that satisfies: \[ \expect{\hat{D}} \leq D \] where $D$ is a finite constant related to the worst case second moments of $a_k(t)$, $b_k(t)$, $y_k(t)$, $g_l(\bv{x}(t))$, described in more detail in the proof. \end{lem} \begin{proof} See Appendix F. \end{proof} \subsection{Algorithm Performance} In the following theorems, we assume the set $\Lambda$ is non-empty and has non-empty interior. We say that a non-negative rate vector $\bv{\lambda}$ is interior to $\Lambda$ if $\bv{\lambda} \in \Lambda$ and if there exists a value $d_{max}>0$ such that $\bv{\lambda} + \bv{d}_{max} \in \Lambda$, where $\bv{d}_{max} = (d_{max}, d_{max}, \ldots, d_{max})$ is a vector with all entries equal to $d_{max}$. 
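Before analyzing performance, we note that when the action set $\script{A}_{\omega(t)}$ is finite, the per-slot minimization defining the drift-plus-penalty algorithm can be carried out by direct enumeration, yielding a $0$-approximation. The following sketch illustrates this; the callables \texttt{f}, \texttt{g}, \texttt{y}, \texttt{b} are hypothetical placeholders standing in for $f(\hat{\bv{x}}(\alpha, \omega))$, $g_l(\hat{\bv{x}}(\alpha, \omega))$, $\hat{y}_k(\alpha, \omega)$, $\hat{b}_k(\alpha, \omega)$, and are not part of the model above.

```python
# Sketch of the per-slot drift-plus-penalty decision rule (a 0-approximation
# when the action set is finite).  The model functions f, g, y, b are
# hypothetical placeholders: f maps (alpha, omega) to the penalty, g returns
# the list [g_1, ..., g_L], and y, b return the lists of per-queue arrival
# and service values.

def dpp_action(actions, omega, Q, Z, V, f, g, y, b):
    """Return the action minimizing
       V*f + sum_l Z_l*g_l + sum_k Q_k*(y_k - b_k)."""
    def weight(alpha):
        w = V * f(alpha, omega)
        w += sum(Zl * gl for Zl, gl in zip(Z, g(alpha, omega)))
        w += sum(Qk * (yk - bk)
                 for Qk, yk, bk in zip(Q, y(alpha, omega), b(alpha, omega)))
        return w
    # Exact minimization over a finite set; any action whose weight is
    # within C of this minimum would be a C-approximation.
    return min(actions, key=weight)
```

When exact minimization is not tractable, returning any action whose weight is within $C$ of the minimum implements the $C$-approximation used in Lemma \ref{lem:drift2}.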
We also assume the initial condition of the queues satisfies $\expect{L(\bv{\Theta}(0))} < \infty$.\footnote{Note that $\expect{L(\bv{\Theta}(0))} = 0$ if all queues are initially empty with probability 1.} In the case when $L=0$, so that there are no $g_l(\cdot)$ constraints, we only require $\bv{\lambda}$ to be an interior point of $\Lambda$. However, if $L>0$ we need a stronger assumption, related to a Slater-type condition of static optimization theory \cite{bertsekas-nonlinear}. \emph{Assumption A1:} There is a constant $d_{max}>0$ and an $\omega$-only policy that yields for all $l \in \{1, \ldots, L\}$, $k \in \{1, \ldots, K\}$: \begin{eqnarray} g_l(\expect{\hat{\bv{x}}(\alpha^*(t), \omega(t))}) \leq -d_{max}/2 \label{eq:a1-1} \\ \lambda_k + \expect{\hat{y}_k(\alpha^*(t), \omega(t)) - \hat{b}_k(\alpha^*(t), \omega(t))} \leq -d_{max}/2 \label{eq:a1-2} \end{eqnarray} It is clear from Theorem \ref{thm:omega-only} that Assumption A1 holds whenever $\bv{\lambda} + \bv{d}_{max} \in \Lambda$ and when $L=0$. This is because if $\bv{\lambda} + \bv{d}_{max} \in \Lambda$, then for any $\epsilon>0$, Theorem \ref{thm:omega-only} implies the existence of an $\omega$-only policy $\alpha^*(t)$ that satisfies for all $k \in \{1, \ldots, K\}$: \[ \lambda_k + d_{max} + \expect{\hat{y}_k(\alpha^*(t), \omega(t)) - \hat{b}_k(\alpha^*(t), \omega(t))} \leq \epsilon \] and hence we can simply choose $\epsilon=d_{max}/2$ to yield (\ref{eq:a1-2}) (note that (\ref{eq:a1-1}) is irrelevant in the case $L=0$). \begin{thm} \label{thm:performance1} Suppose Assumption A1 holds. Suppose we use a fixed parameter $V\geq 0$, and we implement a $C$-approximation every slot $t$. Suppose the system satisfies the decaying memory property of Section \ref{section:decaying-memory} and the boundedness assumptions of Section \ref{section:second-moment}. 
Then all actual and virtual queues are mean rate stable and so: \begin{eqnarray} \limsup_{t\rightarrow\infty} g_l(\overline{\bv{x}}(t)) \leq 0 & \forall l \in \{1, \ldots, L\} \label{eq:achieve1} \\ \lim_{t\rightarrow\infty} \expect{Q_k(t)/t} = 0 & \forall k \in \{1, \ldots, K\} \label{eq:achieve2} \end{eqnarray} Therefore, all desired constraints of problem (\ref{eq:opt1})-(\ref{eq:opt3}) are satisfied. Further, all queues are strongly stable and: \begin{eqnarray} \limsup_{t\rightarrow\infty} \frac{1}{t}\sum_{\tau=0}^{t-1}\left[\sum_{k=1}^K\expect{Q_k(\tau)} + \sum_{l=1}^L\expect{Z_l(\tau)} \right] \leq \nonumber \\ \frac{[C + TB + (T-1)D + V(f_{max} - f_{min}) ]}{d_{max}/4} \label{eq:backlog-bound} \end{eqnarray} where $T$ is a positive integer related to $d_{max}$ and independent of $V$, and $B$ and $D$ are constants defined in Lemmas \ref{lem:driftyy} and \ref{lem:drift2}. Further, for any $\epsilon$ such that $0 < \epsilon \leq d_{max}/4$, there is a positive integer $T_{\epsilon}$, independent of $V$, such that: \begin{eqnarray} \limsup_{t\rightarrow\infty} f(\overline{\bv{x}}(t)) &\leq& f^{opt} + c_0\epsilon \nonumber \\ && + \frac{C + BT_{\epsilon} + D(T_{\epsilon}-1)}{V} \label{eq:f1} \end{eqnarray} where $c_0$ is defined: \[ c_0 \mbox{\raisebox{-.3ex}{$\overset{\vartriangle}{=}$}} 4f_{max}/d_{max} + 1 \] The constants $T$, $T_{\epsilon}$ are related to the amount of time required for the memory to decay to a suitable proximity to a stationary distribution, as defined in the proof. \end{thm} Theorem \ref{thm:performance1} shows that the algorithm makes all queues mean rate stable and satisfies all desired constraints for any $V\geq 0$ (including $V=0$). The parameter $V$ is useful because the achieved cost can be pushed to its optimal value $f^{opt}$ as $V\rightarrow \infty$, as shown by (\ref{eq:f1}). Hence, we can ensure the achieved cost is arbitrarily close to the optimum by choosing $V$ suitably large.
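The role of $V$ can be seen in a toy simulation. The instance below — a single queue ($K=1$, $L=0$) with invented Bernoulli arrivals, a two-state channel rate playing the role of $\omega(t)$, unit transmit power as the penalty $f$, and $y_1(t)=0$ — is purely illustrative and is not part of the general model. The drift-plus-penalty rule reduces here to transmitting exactly when $Q(t)\cdot\mbox{rate} > V$, and the virtual-queue-free updates follow the standard $\max[\cdot,0]$ queue dynamics.

```python
import random

# Toy single-queue drift-plus-penalty simulation illustrating the
# [O(1/V), O(V)] cost/backlog tradeoff.  All numbers here (arrival
# probability 0.4, channel rates {1, 2}, unit power) are invented.

def simulate(V, T=20000, seed=0):
    rng = random.Random(seed)
    Q, power_sum, backlog_sum = 0.0, 0.0, 0.0
    for _ in range(T):
        rate = rng.choice([1.0, 2.0])           # observed channel state omega(t)
        a = 1.0 if rng.random() < 0.4 else 0.0  # Bernoulli arrival a(t)
        # Drift-plus-penalty with f = transmit power p and y = 0 minimizes
        # p*(V - Q*rate), so transmit (p = 1) iff Q*rate > V.
        p = 1.0 if Q * rate > V else 0.0
        backlog_sum += Q
        power_sum += p
        Q = max(Q - p * rate, 0.0) + a          # queue update
    return power_sum / T, backlog_sum / T       # (avg cost, avg backlog)
```

Running this with a small and a large $V$ shows the tradeoff of the theorem: a larger $V$ yields a lower time average power (the policy waits for good channel states) at the price of a proportionally larger time average backlog.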
While a larger value of $V$ does not affect the constraints (\ref{eq:achieve1}), (\ref{eq:achieve2}), it turns out that it creates a larger \emph{convergence time} over which time averages are close to meeting their constraints. It also affects the average queue backlog sizes, so that the average backlog bound is $O(V)$, as shown in (\ref{eq:backlog-bound}). In the i.i.d. case, it is known that the difference between the achieved cost and the optimal cost $f^{opt}$ is $O(1/V)$, which establishes an $[O(1/V), O(V)]$ performance-backlog tradeoff \cite{neely-thesis}\cite{neely-fairness-infocom05}\cite{neely-energy-it}\cite{now}. For the non-i.i.d. case, for a given $\epsilon>0$, the bound (\ref{eq:f1}) shows that average cost is within $O(1/V)$ of $f^{opt} + c_0\epsilon$. However, the coefficient multiplier in the $O(1/V)$ term is linear in $T_{\epsilon}$, representing a ``mixing time'' required for time averages to be within $O(\epsilon)$ of their stationary averages. If we choose $\epsilon = 1/V$, then this mixing time itself can be a function of $V$ (we typically expect $T_{\epsilon} = O(\log(1/\epsilon)) = O(\log(V))$ if the network events are driven by a finite state Markov chain). This would present an $[O(\log(V)/V), O(V)]$ performance-delay tradeoff. Such a tradeoff is explicitly shown in \cite{rahul-cognitive-tmc}\cite{longbo-profit-allerton07} for particular types of networks, where a worst case backlog bound of $O(V)$ is also derived. Work in \cite{neely-mesh} treats a mobile network with non-ergodic mobility, defines an ``instantaneous capacity region,'' and shows (using an analysis similar to i.i.d. analysis) that achieved utility is within $O(1/V)$ of the sum of optimal utilities associated with each instantaneous region. 
Our work \cite{neely-mesh} also states an extension (without proof) that for ergodic mobility, the achieved utility is within $B_{mobile}/V$ of the optimum, where $B_{mobile}$ is a constant associated with the timescales of the mobility process, although it does not compute this constant. This statement is in Theorem 1 part (f) of \cite{neely-mesh}. The result (\ref{eq:f1}) above shows the constant $B_{mobile}$ can be defined for a given $\epsilon$ as: \[ B_{mobile} = c_0 \epsilon V + C + BT_{\epsilon} + D(T_{\epsilon}-1)\] Defining $\epsilon \mbox{\raisebox{-.3ex}{$\overset{\vartriangle}{=}$}} 1/V$ removes dependence on $V$ in the first term, but there is still dependence on $V$ in the terms that are linear in $T_{\epsilon}$. Typically, $T_{1/V}$ is $O(\log(V))$ for systems defined on finite state ergodic Markov chains. However, our statement in part (f) of \cite{neely-mesh} claims a constant $B_{mobile}$ can be found that is \emph{independent of $V$} (yielding an $O(1/V)$ distance to optimality, rather than an $O(\log(V)/V)$ distance). This is indeed the case, although it requires the $\omega(t)$ process to be either i.i.d. over slots, or to be defined by an ergodic Markov chain with a finite state space.\footnote{We note that our statement in part (f) of \cite{neely-mesh} says that for ergodic mobility, a constant $B_{mobile}$ can be found that is independent of $V$. This should have been stated more precisely for mobility defined by a ``finite state ergodic Markov chain.'' We regret this ambiguity. The full proof of this result is due to Longbo Huang and uses a variable frame drift analysis that is slightly different from the $T$-slot drift analysis suggested in the proof outline in \cite{neely-mesh}. 
The details are in preparation.} \begin{proof} (Theorem \ref{thm:performance1}) By Assumption A1 there is an $\omega$-only policy $\alpha^*(t)$ such that for all $k \in \{1, \ldots, K\}$, $l \in \{1, \ldots, L\}$: \begin{eqnarray} \expect{g_l(\hat{\bv{x}}(\alpha^*(t), \omega(t)))} \leq -d_{max}/2 \label{eq:oo-t1} \\ \lambda_k + \expect{\hat{y}_k(\alpha^*(t), \omega(t)) - \hat{b}_k(\alpha^*(t), \omega(t))} \leq -d_{max}/2 \label{eq:oo-t2} \end{eqnarray} where we have used the fact that $g_l(\cdot)$ is linear or affine to pass the expectation through this function in (\ref{eq:oo-t1}). Fix $\delta\mbox{\raisebox{-.3ex}{$\overset{\vartriangle}{=}$}} d_{max}/4$, and choose a frame size $T>0$ that satisfies the decaying memory properties (\ref{eq:dm1})-(\ref{eq:dm3}) for this $\delta$ and this $\omega$-only policy $\alpha^*(t)$. The value of $T$ depends on $\delta$, and so we could write $T_{\delta}$, although we use $T$ below for notational simplicity (we note the dependence on $\delta$ again when needed). 
By Lemma \ref{lem:drift2} we have: \begin{eqnarray} && \Delta_T(\bv{\Theta}(t)) + V\sum_{\tau=t}^{t+T-1}\expect{f(\bv{x}(\tau))|\bv{\Theta}(t)} \leq CT \nonumber \\ && + T^2\expect{\hat{B}|\bv{\Theta}(t)} + T(T-1)\expect{\hat{D}|\bv{\Theta}(t)} \nonumber \\ && + V\sum_{\tau=t}^{t+T-1}\expect{f(\bv{x}^*(\tau))|\bv{\Theta}(t)} \nonumber \\ && + \sum_{k=1}^KQ_k(t)\sum_{\tau=t}^{t+T-1}\expect{a_k(\tau) + y_k^*(\tau) - b_k^*(\tau)|\bv{\Theta}(t)}\nonumber \\ && + \sum_{l=1}^LZ_l(t)\sum_{\tau=t}^{t+T-1}\expect{g_l(\bv{x}^*(\tau))|\bv{\Theta}(t)} \label{eq:drift3} \end{eqnarray} By the decaying memory property, the conditional expectations given $\bv{\Theta}(t)$ are within $\delta = d_{max}/4$ of their stationary averages, and so (by using (\ref{eq:oo-t1})-(\ref{eq:oo-t2})): \begin{eqnarray} && \Delta_T(\bv{\Theta}(t)) + V\sum_{\tau=t}^{t+T-1}\expect{f(\bv{x}(\tau))|\bv{\Theta}(t)} \leq CT \nonumber \\ && + T^2\expect{\hat{B}|\bv{\Theta}(t)} + T(T-1)\expect{\hat{D}|\bv{\Theta}(t)} + VTf_{max} \nonumber \\ && - \sum_{k=1}^KQ_k(t)Td_{max}/4- \sum_{l=1}^LZ_l(t)Td_{max}/4 \label{eq:drift4} \end{eqnarray} Rearranging terms yields: \begin{eqnarray*} \Delta_T(\bv{\Theta}(t)) +\frac{Td_{max}}{4}\left[\sum_{k=1}^KQ_k(t)+ \sum_{l=1}^LZ_l(t)\right] \leq \\ CT + T^2\expect{\hat{B}|\bv{\Theta}(t)} + T(T-1)\expect{\hat{D}|\bv{\Theta}(t)} \\ + VT(f_{max}- f_{min}) \end{eqnarray*} Taking expectations of both sides and using the law of iterated expectations yields: \begin{eqnarray*} \expect{L(\bv{\Theta}(t+T))} - \expect{L(\bv{\Theta}(t))} \\ + \frac{Td_{max}}{4}\expect{\sum_{k=1}^KQ_k(t) + \sum_{l=1}^LZ_l(t)} \\ \leq CT + T^2B +T(T-1)D + VT(f_{max} - f_{min}) \end{eqnarray*} The above holds for all slots $t \geq 0$. Define $t_i = t_0 + iT$ for $i \in \{0, 1, 2, \ldots\}$, where $t_0\in \{0, 1, \ldots, T-1\}$. 
We thus have: \begin{eqnarray*} \expect{L(\bv{\Theta}(t_{i+1}))} - \expect{L(\bv{\Theta}(t_i))} \\ + \frac{Td_{max}}{4}\left[\sum_{k=1}^K\expect{Q_k(t_i)} + \sum_{l=1}^L\expect{Z_l(t_i)}\right] \\ \leq CT + T^2B + T(T-1)D + VT(f_{max} - f_{min}) \end{eqnarray*} Summing over $i \in \{0, \ldots, J-1\}$ yields: \begin{eqnarray*} \expect{L(\bv{\Theta}(t_{J}))} - \expect{L(\bv{\Theta}(t_0))} \\ + \frac{Td_{max}}{4}\sum_{i=0}^{J-1}\left[\sum_{k=1}^K\expect{Q_k(t_i)} + \sum_{l=1}^L\expect{Z_l(t_i)}\right] \\ \leq J[CT + T^2B + T(T-1)D+ VT(f_{max} - f_{min}) ] \end{eqnarray*} Rearranging terms and using the fact that $\expect{L(\bv{\Theta}(t_J))} \geq 0$ yields: \begin{eqnarray*} \sum_{i=0}^{J-1}\left[\sum_{k=1}^K\expect{Q_k(t_i)} + \sum_{l=1}^L\expect{Z_l(t_i)}\right] \\ \leq \frac{J[CT + BT^2 + T(T-1)D + VT(f_{max} - f_{min}) ]}{Td_{max}/4} \\ + \frac{\expect{L(\bv{\Theta}(t_0))}}{Td_{max}/4} \end{eqnarray*} Summing over all $t_0\in\{0, 1, \ldots, T-1\}$ and dividing by $JT$ yields: \begin{eqnarray*} \frac{1}{JT}\sum_{\tau=0}^{JT-1}\left[\sum_{k=1}^K\expect{Q_k(\tau)} + \sum_{l=1}^L\expect{Z_l(\tau)}\right] \\ \leq \frac{[C + TB + (T-1)D + V(f_{max} - f_{min}) ]}{d_{max}/4} \\ + \frac{1}{JT}\sum_{t_0=0}^{T-1}\frac{\expect{L(\bv{\Theta}(t_0))}}{Td_{max}/4} \end{eqnarray*} The above holds for all positive integers $J$. Taking a $\limsup$ as $J\rightarrow\infty$ yields:\footnote{While the above only samples the time average expectation over slots that are multiples of $T$, it is easy to see that the $\limsup$ time average over all slots is the same, as the queue cannot change by much over the course of $T$ slots.} \begin{eqnarray*} \limsup_{t\rightarrow\infty} \frac{1}{t}\sum_{\tau=0}^{t-1}\left[\sum_{k=1}^K\expect{Q_k(\tau)} + \sum_{l=1}^L\expect{Z_l(\tau)} \right] \leq \\ \frac{[C + TB + (T-1)D + V(f_{max} - f_{min}) ]}{d_{max}/4} \\ \end{eqnarray*} This proves (\ref{eq:backlog-bound}), and hence proves strong stability of all queues. 
By Theorem \ref{thm:strong-stability}, since the second moments of the arrival and service processes for all queues are bounded, we know mean rate stability also holds for all queues. Because queues $Z_l(t)$ are mean rate stable, and these queues have update equation (\ref{eq:z-dynamics}), we know from Theorem \ref{thm:gen-nec-rate} that: \[ \limsup_{t\rightarrow\infty} \frac{1}{t}\sum_{\tau=0}^{t-1} \expect{g_l(\bv{x}(\tau))} \leq 0 \] Passing the expectation through the linear function $g_l(\bv{x})$ proves (\ref{eq:achieve1}). It remains only to prove (\ref{eq:f1}). Fix $\epsilon>0$, and assume that $\epsilon \leq d_{max}/4$. Note that Theorem \ref{thm:omega-only} implies the existence of an $\omega$-only algorithm $\alpha'(t)$ that satisfies: \begin{eqnarray} \expect{g_l(\hat{\bv{x}}(\alpha'(t), \omega(t)))} \leq \epsilon \label{eq:a2-1} \\ \lambda_k + \expect{\hat{y}_k(\alpha'(t), \omega(t)) - \hat{b}_k(\alpha'(t), \omega(t))} \leq \epsilon \label{eq:a2-2} \\ \expect{f(\hat{\bv{x}}(\alpha'(t), \omega(t)))} \leq f^{opt} + \epsilon \label{eq:a2-3} \end{eqnarray} where we have used the fact that $f(\cdot)$, $g_l(\cdot)$ are linear or affine to pass expectations through them. Now define an $\omega$-only policy $\alpha^{\star}(t)$ as follows: \[ \alpha^{\star}(t) = \left\{ \begin{array}{ll} \alpha^*(t) &\mbox{ with probability $\theta$} \\ \alpha'(t) & \mbox{ with probability $1-\theta$} \end{array} \right.\] where $\alpha^*(t)$ is the algorithm of Assumption A1, and where $\theta$ is defined: \begin{equation} \label{eq:theta} \theta \mbox{\raisebox{-.3ex}{$\overset{\vartriangle}{=}$}} 4\epsilon/d_{max} \end{equation} Note that $\theta$ is a valid probability because $0 < \epsilon \leq d_{max}/4$.
Therefore, under policy $\alpha^{\star}(t)$ we have (combining (\ref{eq:a2-1})-(\ref{eq:a2-3}) and (\ref{eq:a1-1})-(\ref{eq:a1-2})): \begin{eqnarray} \expect{g_l(\hat{\bv{x}}(\alpha^{\star}(t), \omega(t)))} \leq \nonumber \\ (1-\theta)\epsilon -\theta d_{max}/2 \leq -\epsilon \label{eq:a3-1} \\ \lambda_k + \expect{\hat{y}_k(\alpha^{\star}(t), \omega(t)) - \hat{b}_k(\alpha^{\star}(t), \omega(t))} \nonumber \\ \leq (1-\theta)\epsilon -\theta d_{max}/2 \leq -\epsilon \label{eq:a3-2} \\ \expect{f(\hat{\bv{x}}(\alpha^{\star}(t), \omega(t)))} \leq (1-\theta)f^{opt} + \theta f_{max} \label{eq:a3-3} \end{eqnarray} where we have used the fact that $\theta = 4\epsilon/d_{max}$ to conclude that: \[ \theta d_{max}/2 = 2\epsilon \] Now fix $\delta= \epsilon$, and define $T_{\epsilon}$ as the value that satisfies the decaying memory properties (\ref{eq:dm1})-(\ref{eq:dm3}) for this $\delta$ and for the $\omega$-only policy $\alpha^{\star}(t)$. By Lemma \ref{lem:drift2}: \begin{eqnarray} && \Delta_{T_{\epsilon}}(\bv{\Theta}(t)) + V\sum_{\tau=t}^{t+T_{\epsilon}-1}\expect{f(\bv{x}(\tau))|\bv{\Theta}(t)} \leq CT_{\epsilon} \nonumber \\ && + T_{\epsilon}^2\expect{\hat{B}|\bv{\Theta}(t)} + T_{\epsilon}(T_{\epsilon}-1)\expect{\hat{D}|\bv{\Theta}(t)} \nonumber \\ && + V\sum_{\tau=t}^{t+T_{\epsilon}-1}\expect{f(\bv{x}^{\star}(\tau))|\bv{\Theta}(t)} \nonumber \\ && + \sum_{k=1}^KQ_k(t)\sum_{\tau=t}^{t+T_{\epsilon}-1}\expect{a_k(\tau) + y_k^{\star}(\tau) - b_k^{\star}(\tau)|\bv{\Theta}(t)}\nonumber \\ && + \sum_{l=1}^LZ_l(t)\sum_{\tau=t}^{t+T_{\epsilon}-1}\expect{g_l(\bv{x}^{\star}(\tau))|\bv{\Theta}(t)} \label{eq:drift5} \end{eqnarray} Noting that the decaying memory property ensures the above conditional expectations are within $\delta = \epsilon$ of their stationary averages (\ref{eq:a3-1})-(\ref{eq:a3-3}), we have: \begin{eqnarray*} && \Delta_{T_{\epsilon}}(\bv{\Theta}(t)) + V\sum_{\tau=t}^{t+T_{\epsilon}-1}\expect{f(\bv{x}(\tau))|\bv{\Theta}(t)} \leq \nonumber \\ && CT_{\epsilon} +
T_{\epsilon}^2\expect{\hat{B}|\bv{\Theta}(t)} + T_{\epsilon}(T_{\epsilon}-1)\expect{\hat{D}|\bv{\Theta}(t)} \\ && + VT_{\epsilon}(f^{opt} + \theta f_{max} + \epsilon) \end{eqnarray*} Taking expectations gives: \begin{eqnarray*} && \expect{L(\bv{\Theta}(t+T_{\epsilon}))} - \expect{L(\bv{\Theta}(t))} \nonumber \\ && + V\sum_{\tau=t}^{t+T_{\epsilon}-1}\expect{f(\bv{x}(\tau))} \leq CT_{\epsilon} + T_{\epsilon}^2B+T_{\epsilon}(T_{\epsilon}-1)D \nonumber \\ && + VT_{\epsilon}(f^{opt} + \theta f_{max} + \epsilon) \end{eqnarray*} As before, we substitute $t=t_i$ for $i \in \{0, 1, 2, \ldots\}$ for some value $t_0 \in \{0, 1, \ldots, T_{\epsilon}-1\}$, and sum over $i \in \{0, 1, \ldots, J-1\}$ and $t_0 \in \{0, 1, \ldots, T_{\epsilon}-1\}$ to get: \begin{eqnarray*} \frac{V}{JT_{\epsilon}}\sum_{\tau=0}^{JT_{\epsilon}-1} \expect{f(\bv{x}(\tau))} \leq [C + T_{\epsilon}B+(T_{\epsilon}-1)D] \\ + V(f^{opt} + \theta f_{max} + \epsilon) + \frac{1}{JT_{\epsilon}}\sum_{t_0=0}^{T_{\epsilon}-1} \expect{L(\bv{\Theta}(t_0))} \end{eqnarray*} Dividing by $V$ and taking a $\limsup$ as $J\rightarrow\infty$ yields: \begin{eqnarray*} \limsup_{t\rightarrow\infty} f(\overline{\bv{x}}(t)) \leq f^{opt} + \theta f_{max} + \epsilon \\ + \frac{C+ T_{\epsilon}B+(T_{\epsilon}-1)D}{V} \end{eqnarray*} where we have used the fact that $f(\bv{x})$ is linear or affine to pass the time average expectation through it. Using $\theta = 4\epsilon/d_{max}$ proves (\ref{eq:f1}). \end{proof} \section{Exercises} \label{section:exercise} \begin{exer} \label{ex:inequality-comparison} (Inequality comparison) Let $Q(t)$ satisfy (\ref{eq:q-dynamics}) with server process $b(t)$ and arrival process $a(t)$. Let $\tilde{Q}(t)$ be another queueing system with the same server process $b(t)$ but with an arrival process $\tilde{a}(t) = a(t) + z(t)$, where $z(t) \geq 0$ for all $t \in \{0, 1, 2, \ldots\}$. Assuming that $Q(0) = \tilde{Q}(0)$, prove that $Q(t) \leq \tilde{Q}(t)$ for all $t \in \{0, 1, 2, \ldots\}$.
\end{exer} \begin{exer} \label{ex:rate-stable} (Proving sufficiency for Theorem \ref{thm:rate-stability}a) Let $Q(t)$ satisfy (\ref{eq:q-dynamics}) with arrival and server processes with well defined time averages $a_{av}$ and $b_{av}$. Suppose that $a_{av} \leq b_{av}$. Fix $\epsilon>0$, and define $Q_{\epsilon}(t)$ as a queue with $Q_{\epsilon}(0) = Q(0)$, and with the same server process $b(t)$ but with an arrival process $\tilde{a}(t) = a(t) + (b_{av} - a_{av}) + \epsilon$ for all $t$. a) Compute the time average of $\tilde{a}(t)$. b) Assuming the result of Theorem \ref{thm:rate-stability}b, compute $\lim_{t\rightarrow\infty} Q_{\epsilon}(t)/t$. c) Use the result of part (b) and Exercise \ref{ex:inequality-comparison} to prove that $Q(t)$ is rate stable. \end{exer} \begin{exer} \label{ex:rate-stability-b} (Proof of Theorem \ref{thm:rate-stability}b) Let $Q(t)$ be a queue that satisfies (\ref{eq:q-dynamics}). Assume time averages of $a(t)$ and $b(t)$ are given by finite constants $a_{av}$ and $b_{av}$, respectively. a) Use the following equation to prove that $\lim_{t\rightarrow \infty} a(t)/t = 0$ with probability 1: \[ \frac{1}{t+1} \sum_{\tau=0}^{t} a(\tau) = \left(\frac{t}{t+1}\right)\frac{1}{t}\sum_{\tau=0}^{t-1} a(\tau) + \left(\frac{t}{t+1}\right)\frac{a(t)}{t} \] b) Suppose that $\tilde{b}(t_i) < b(t_i)$ for some slot $t_i$. Use (\ref{eq:q-dynamics}) to compute $Q(t_i+1)$. c) Use part (b) to show that if $\tilde{b}(t_i) < b(t_i)$, then: \[ a(t_i) \geq Q(0) + \sum_{\tau=0}^{t_i} [a(\tau) - b(\tau)] \] Conclude that if $\tilde{b}(t_i) < b(t_i)$ for an infinite number of slots $t_i$, then $a_{av} \leq b_{av}$. d) Use part (c) to conclude that if $a_{av} > b_{av}$, there is some slot $t^*\geq 0$ such that for all $t \geq t^*$ we have: \[ Q(t) = Q(t^*) + \sum_{\tau=t^*}^{t-1} [a(\tau) - b(\tau)] \] Use this to prove the result of Theorem \ref{thm:rate-stability}b. 
\end{exer} \begin{exer} \label{ex:strong-stability-implies-steady-state} (Strong stability implies steady state stability) Prove that strong stability implies steady state stability using the fact that $\expect{Q(\tau)} \geq MPr[Q(\tau)>M]$. \end{exer} \section*{Appendix A --- Proof of Theorem \ref{thm:rs-implies-mrs}} Here we prove Theorem \ref{thm:rs-implies-mrs}. Note that rate stability implies that for any $\delta>0$, we have:\footnote{In fact, the result of parts (a) and (b) of Theorem \ref{thm:rs-implies-mrs} hold equally if the assumption that $Q(t)$ is rate stable is replaced by the weaker assumption (\ref{eq:rs-weaker}).} \begin{eqnarray} \lim_{t\rightarrow\infty}Pr[Q(t)/t>\delta] = 0 \label{eq:rs-weaker} \end{eqnarray} For a given $\delta>0$, define the event $\script{E}_t \mbox{\raisebox{-.3ex}{$\overset{\vartriangle}{=}$}} \{Q(t)/t > \delta\}$, so that $\lim_{t\rightarrow\infty} Pr[\script{E}_t]=0$. Define $\script{E}_t^c \mbox{\raisebox{-.3ex}{$\overset{\vartriangle}{=}$}} \{Q(t)/t \leq \delta\}$. \begin{lem} \label{lem:prelim-limit} If $Q(t)$ is non-negative and satisfies (\ref{eq:rs-weaker}), and if: \begin{equation} \label{eq:tail-cond} \lim_{t\rightarrow\infty} \expect{Q(t)/t|\script{E}_t}Pr[\script{E}_t] = 0 \end{equation} then $Q(t)$ is mean rate stable. \end{lem} \begin{proof} We have for a given $\delta>0$: \begin{eqnarray*} 0 \leq \expect{Q(t)/t} &=& \expect{Q(t)/t|\script{E}_t^c}Pr[\script{E}_t^c] \\ && + \expect{Q(t)/t|\script{E}_t}Pr[\script{E}_t] \\ &\leq& \delta + \expect{Q(t)/t|\script{E}_t}Pr[\script{E}_t] \end{eqnarray*} Taking a $\limsup$ of both sides and using (\ref{eq:tail-cond}) yields: \[ 0 \leq \limsup_{t\rightarrow\infty} \expect{Q(t)/t} \leq \delta \] This holds for all $\delta>0$. Thus, $\lim_{t\rightarrow\infty} \expect{Q(t)/t} = 0$, proving mean rate stability. 
\end{proof} Thus, to prove Theorem \ref{thm:rs-implies-mrs}, it suffices to prove that (\ref{eq:tail-cond}) holds under the assumptions of parts (a) and (b) of the theorem. To this end, we have a preliminary lemma. \begin{lem} \label{lem:X-RV} If $X$ is a non-negative random variable such that $\expect{X^{1+\epsilon}} < \infty$ for some value $\epsilon>0$, then for any event $\script{E}$ with a well defined probability $Pr[\script{E}]$, we have: \[ \expect{X|\script{E}}Pr[\script{E}] \leq \expect{X^{1+\epsilon}}^{1/(1+\epsilon)}Pr[\script{E}]^{\epsilon/(1+\epsilon)} \] \end{lem} \begin{proof} If $Pr[\script{E}]=0$, then the result is obvious. Suppose now that $Pr[\script{E}]>0$. We have: \begin{eqnarray*} \expect{X^{1+\epsilon}} &=& \expect{X^{1+\epsilon}|\script{E}}Pr[\script{E}] + \expect{X^{1+\epsilon}|\script{E}^c}Pr[\script{E}^c] \\ &\geq& \expect{X^{1+\epsilon}|\script{E}}Pr[\script{E}] \end{eqnarray*} Therefore: \[ \expect{X^{1+\epsilon}|\script{E}} \leq \frac{\expect{X^{1+\epsilon}}}{Pr[\script{E}]} \] However, by Jensen's inequality for the convex function $f(x) = x^{1+\epsilon}$ for $x \geq 0$, we have: \[ \expect{X|\script{E}}^{1+\epsilon} \leq \expect{X^{1+\epsilon}|\script{E}} \] Thus: \[ \expect{X|\script{E}}^{1+\epsilon} \leq \frac{\expect{X^{1+\epsilon}}}{Pr[\script{E}]} \] Hence: \[ \expect{X|\script{E}} \leq \left(\frac{\expect{X^{1+\epsilon}}}{Pr[\script{E}]}\right)^{1/(1+\epsilon)}\] Multiplying both sides by $Pr[\script{E}]$ proves the result. \end{proof} We now prove part (a) of Theorem \ref{thm:rs-implies-mrs}. \begin{proof} (Theorem \ref{thm:rs-implies-mrs}(a)) For simplicity assume that $Q(0) = 0$. Note that: \begin{equation*} \frac{Q(t)}{t} \leq \frac{1}{t}\sum_{\tau=0}^{t-1} [a(\tau) + b^-(\tau)] \end{equation*} Define $X(\tau) \mbox{\raisebox{-.3ex}{$\overset{\vartriangle}{=}$}} a(\tau) + b^-(\tau)$. 
Thus: \begin{equation} \label{eq:rs-to-mrs} \frac{Q(t)}{t} \leq \frac{1}{t}\sum_{\tau=0}^{t-1}X(\tau) \end{equation} Now suppose there are constants $\epsilon>0$, $C>0$ such that: \begin{equation} \label{eq:c-bound} \expect{X(\tau)^{1+\epsilon}} \leq C \: \: \mbox{ for all $\tau$} \end{equation} Fix $\delta>0$ and define the event $\script{E}_t \mbox{\raisebox{-.3ex}{$\overset{\vartriangle}{=}$}} \{Q(t)/t >\delta\}$. Thus: \begin{eqnarray} \expect{\frac{Q(t)}{t}|\script{E}_t}Pr[\script{E}_t] &\leq& \frac{1}{t}\sum_{\tau=0}^{t-1}\expect{X(\tau)|\script{E}_t}Pr[\script{E}_t] \label{eq:reuse-rs} \\ &\leq& \frac{1}{t}\sum_{\tau=0}^{t-1} C^{1/(1+\epsilon)} Pr[\script{E}_t]^{\epsilon/(1+\epsilon)} \nonumber \\ &=& C^{1/(1+\epsilon)}Pr[\script{E}_t]^{\epsilon/(1+\epsilon)} \nonumber \end{eqnarray} where the first inequality follows by (\ref{eq:rs-to-mrs}) and the second inequality uses Lemma \ref{lem:X-RV} together with (\ref{eq:c-bound}). Taking a limit of the above and using the fact that $Pr[\script{E}_t]\rightarrow 0$ yields: \[ \lim_{t\rightarrow\infty} \expect{Q(t)/t|\script{E}_t}Pr[\script{E}_t] = 0 \] and therefore $Q(t)$ is mean rate stable by Lemma \ref{lem:prelim-limit}. \end{proof} To prove part (b) of Theorem \ref{thm:rs-implies-mrs}, we need another preliminary lemma. \begin{lem} \label{lem:prelim2} If $X$ is a non-negative random variable and $\script{E}$ is any event with a well defined probability $Pr[\script{E}]$, then for any $x > 0$ we have: \[ \expect{X|\script{E}}Pr[\script{E}] \leq \expect{X|X>x}Pr[X>x] + xPr[\script{E}] \] \end{lem} \begin{proof} Fix a value $x>0$. Define indicator functions $1_{\script{E}}$ and $1_{\{X > x\}}$ as follows: \begin{eqnarray*} 1_{\script{E}} &\mbox{\raisebox{-.3ex}{$\overset{\vartriangle}{=}$}}& \left\{ \begin{array}{ll} 1&\mbox{ if event $\script{E}$ is true} \\ 0 & \mbox{ otherwise} \end{array} \right. 
\\ 1_{\{X>x\}} &\mbox{\raisebox{-.3ex}{$\overset{\vartriangle}{=}$}}& \left\{ \begin{array}{ll} 1 &\mbox{ if $X > x$} \\ 0 & \mbox{ otherwise} \end{array} \right. \end{eqnarray*} Define $1_{\{X\leq x\}} \mbox{\raisebox{-.3ex}{$\overset{\vartriangle}{=}$}} 1 - 1_{\{X>x\}}$. Then: \begin{eqnarray*} X1_{\script{E}} &=& X1_{\script{E}}(1_{\{X> x\}} + 1_{\{X\leq x\}}) \\ &=& X1_{\script{E}}1_{\{X> x\}} + X1_{\script{E}}1_{\{X\leq x\}} \\ &\leq& X1_{\{X > x\}} + x1_{\script{E}} \end{eqnarray*} Therefore: \[ \expect{X 1_{\script{E}}} \leq \expect{X 1_{\{X>x\}}} + xPr[\script{E}] \] Thus: \[ \expect{X|\script{E}}Pr[\script{E}] \leq \expect{X|X>x}Pr[X>x] + xPr[\script{E}] \] \end{proof} \begin{proof} (Theorem \ref{thm:rs-implies-mrs}(b)) Fix $\delta>0$ and define the event $\script{E}_t \mbox{\raisebox{-.3ex}{$\overset{\vartriangle}{=}$}} \{Q(t)/t > \delta\}$. From (\ref{eq:reuse-rs}) we have: \begin{eqnarray} \expect{\frac{Q(t)}{t}|\script{E}_t}Pr[\script{E}_t] &\leq& \frac{1}{t}\sum_{\tau=0}^{t-1}\expect{X(\tau)|\script{E}_t}Pr[\script{E}_t] \label{eq:thm1b} \end{eqnarray} where we recall that $X(\tau) \mbox{\raisebox{-.3ex}{$\overset{\vartriangle}{=}$}} a(\tau) + b^-(\tau)$. Now fix any (arbitrarily large) $x>0$. From Lemma \ref{lem:prelim2} we have: \begin{eqnarray*} && \hspace{-.3in} \expect{X(\tau)|\script{E}_t} Pr[\script{E}_t] \\ &\leq& \expect{X(\tau)|X(\tau)>x}Pr[X(\tau)>x] + xPr[\script{E}_t] \\ &\leq& \expect{Y|Y>x}Pr[Y>x] + xPr[\script{E}_t] \end{eqnarray*} where the final inequality has used the assumption about the random variable $Y$ in Theorem \ref{thm:rs-implies-mrs} part (b). Plugging this into (\ref{eq:thm1b}) yields: \begin{eqnarray*} \expect{\frac{Q(t)}{t}|\script{E}_t}Pr[\script{E}_t] \leq \expect{Y|Y>x}Pr[Y>x] \\ + \frac{x}{t}\sum_{\tau=0}^{t-1}Pr[\script{E}_t] \end{eqnarray*} Because $Pr[\script{E}_t]\rightarrow 0$ as $t\rightarrow \infty$, the final term (which is simply $xPr[\script{E}_t]$, as the summand does not depend on $\tau$) also converges to $0$.
Thus, taking a $\limsup$ of both sides of the above inequality as $t\rightarrow\infty$ yields: \begin{eqnarray*} \limsup_{t\rightarrow\infty} \expect{\frac{Q(t)}{t}|\script{E}_t} Pr[\script{E}_t] \leq \expect{Y|Y>x}Pr[Y>x] \end{eqnarray*} The above holds for all $x>0$. The fact that $\expect{Y}<\infty$ ensures that the right hand side of the above inequality vanishes as $x\rightarrow\infty$. Taking a limit as $x\rightarrow\infty$ thus proves: \[ \limsup_{t\rightarrow\infty} \expect{\frac{Q(t)}{t}|\script{E}_t}Pr[\script{E}_t] = 0 \] and hence: \[ \lim_{t\rightarrow\infty} \expect{\frac{Q(t)}{t}|\script{E}_t}Pr[\script{E}_t] = 0 \] This together with Lemma \ref{lem:prelim-limit} proves that $Q(t)$ is mean rate stable. \end{proof} \section*{Appendix B --- Proof of Theorem \ref{thm:strong-stability}(c)} Here we prove part (c) of Theorem \ref{thm:strong-stability}. The proof is similar to our previous proof in \cite{neely-downlink-ton}. \begin{proof} (Theorem \ref{thm:strong-stability} part (c)) Suppose there is a finite constant $C>0$ such that $\expect{b(t) - a(t)} \leq C$ for all $t$, and that $Q(t)$ is \emph{not} mean rate stable. It follows that there is an $\epsilon>0$ such that $\expect{Q(t_k)/t_k} \geq \epsilon$ for an infinite collection of times $t_k$. For any $t_k$ and any $t> t_k$ we have by (\ref{eq:io-b2}): \[ Q(t) \geq Q(t_k) - \sum_{\tau=t_k}^{t-1} [b(\tau) - a(\tau)] \] Thus, for any $t \geq t_k$ we have: \[ \expect{Q(t)} \geq \epsilon t_k - (t-t_k)C \] Now fix any (arbitrarily large) value $M>0$. Then for sufficiently large $k$ we have $\epsilon t_k > M$, and $\epsilon t_k - (t-t_k)C \geq M$ whenever: \begin{equation} \label{eq:t-cond} t_k \leq t \leq \frac{(C+\epsilon)t_k - M}{C} = (1+\epsilon/C)t_k - M/C \end{equation} Hence, $\expect{Q(t)} \geq M$ whenever (\ref{eq:t-cond}) holds. 
Define $\hat{t}_k$ as: \[ \hat{t}_k \mbox{\raisebox{-.3ex}{$\overset{\vartriangle}{=}$}} \lfloor (1+\epsilon/C)t_k - M/C \rfloor \] The number of slots in the interval $t_k \leq t \leq \hat{t}_k$ given by (\ref{eq:t-cond}) is at least: \[ \hat{t}_k - t_k + 1 \geq \epsilon t_k/C - M/C \] It follows that: \[ \frac{1}{\hat{t}_k+1}\sum_{\tau=0}^{\hat{t}_k} \expect{Q(\tau)} \geq M\frac{\epsilon t_k/C - M/C}{(1+\epsilon/C)t_k - M/C+1} \] Taking a $\limsup$ as $k\rightarrow \infty$ and noting that $\lim_{k\rightarrow\infty} t_k = \infty$ yields: \[ \limsup_{k\rightarrow\infty} \frac{1}{\hat{t}_k+1}\sum_{\tau=0}^{\hat{t}_k} \expect{Q(\tau)} \geq \frac{M \epsilon/C}{1+\epsilon/C} \] This holds for arbitrarily large $M$. Hence, taking a limit as $M\rightarrow \infty$ yields: \[ \limsup_{k\rightarrow\infty} \frac{1}{\hat{t}_k+1}\sum_{\tau=0}^{\hat{t}_k} \expect{Q(\tau)} = \infty \] and thus $Q(t)$ is not strongly stable. It follows that strong stability implies mean rate stability. A similar proof can be done for the case when $\expect{a(t) + b^{-}(t)} \leq C$ for all $t$. This can be shown by observing that for any $t<t_k$: \[ Q(t) \geq Q(t_k) - \sum_{\tau=t}^{t_k-1} [a(\tau) + b^{-}(\tau)] \] and hence: \[ \expect{Q(t)} \geq \epsilon t_k - C(t_k-t) \] \end{proof} \section*{Appendix C --- Proof of Theorem \ref{thm:strong-stability}(b)} Here we prove Theorem \ref{thm:strong-stability}(b), which shows that strong stability implies rate stability if certain boundedness assumptions are satisfied. Suppose $Q(t)$ has dynamics given by (\ref{eq:q-dynamics}), and that there is a finite constant $C>0$ such that with probability $1$, we have: \begin{equation} \label{eq:c-cond} b(t) - a(t) \leq C \: \: \forall t \in \{0, 1, 2, \ldots \} \end{equation} For simplicity, we assume the condition (\ref{eq:c-cond}) holds deterministically (so that we can neglect writing ``with probability 1.'') Suppose $Q(t)$ is strongly stable.
We want to show that $\lim_{t\rightarrow\infty}Q(t)/t = 0$ with probability $1$. We prove the result through several preliminary lemmas, presented below. \begin{lem} \label{lem:square-root} If $Q(t)$ is strongly stable and if there is a finite constant $C>0$ such that (\ref{eq:c-cond}) holds for all $t$, then $\expect{Q(t)/t} \leq O(1/\sqrt{t})$. Specifically, there exists a finite constant $D>0$ and a positive timeslot $t_D$ such that \begin{equation} \label{eq:srthm1} \expect{Q(t)/t} \leq D/\sqrt{t} \: \: \mbox{ for all $t \geq t_D$} \end{equation} Hence, for all $t \geq t_D$ and all $\epsilon>0$ we have: \begin{equation} \label{eq:srthm2} Pr[Q(t)/t \geq \epsilon/4] \leq 4D/(\epsilon\sqrt{t}) \end{equation} \end{lem} \begin{proof} Because $Q(t)$ is strongly stable, there is a finite constant $B>0$ such that: \[ \limsup_{t\rightarrow\infty} \frac{1}{t} \sum_{\tau=0}^{t-1} \expect{Q(\tau)} < B < \infty \] Then for large $t$ (all $t \geq t^*$ for some $t^*$), we have: \[ \frac{1}{t}\sum_{\tau=0}^{t-1} \expect{Q(\tau)} \leq B \] Suppose now that for any finite $D>0$, there exist arbitrarily large times $t_i$ such that $\expect{Q(t_i)/t_i} > D/\sqrt{t_i}$. We shall reach a contradiction. For $t_i \geq t^*$ we have: \begin{eqnarray} B \geq \frac{1}{2t_i}\sum_{\tau=0}^{2t_i-1} \expect{Q(\tau)} &\geq& \frac{1}{2}\sum_{\tau=t_i}^{2t_i-1} \expect{\frac{Q(\tau)}{t_i}} \nonumber \\ &\geq& \frac{1}{2}\sum_{\tau=t_i}^{2t_i-1} \expect{\frac{Q(\tau)}{\tau}} \label{eq:foop} \end{eqnarray} Because (\ref{eq:c-cond}) holds, we know that for all $\tau \geq t_i$: \[ Q(\tau) \geq Q(t_i) - C(\tau - t_i) \] Further, $\expect{Q(t_i)/t_i} > D/\sqrt{t_i}$ and so $\expect{Q(t_i)} > D\sqrt{t_i}$. We thus have for all $\tau \in \{t_i, \ldots, 2t_i - 1\}$: \[ \frac{\expect{Q(\tau)}}{\tau} > \frac{D\sqrt{t_i} -C(\tau-t_i)}{\tau} \] Now assume that $t_i$ is large, so that $2t_i - 1 \geq t_i + \lfloor D\sqrt{t_i}/(2C) \rfloor \geq t_i + D\sqrt{t_i}/(4C)$. 
Note that if: \[ \tau \in\{t_i, \ldots, t_i + \lfloor D\sqrt{t_i}/(2C)\rfloor\} \] then $\tau - t_i \leq D\sqrt{t_i}/(2C)$ and we know: \begin{eqnarray*} \frac{\expect{Q(\tau)}}{\tau} &>& \frac{D\sqrt{t_i} - C(\tau-t_i)}{\tau} \\ &\geq& \frac{D\sqrt{t_i} - C(\tau-t_i)}{2t_i} \\ &\geq& D/(4\sqrt{t_i}) \end{eqnarray*} It follows that: \[ \sum_{\tau=t_i}^{2t_i-1} \expect{Q(\tau)/\tau} \geq \left(\frac{D}{4\sqrt{t_i}}\right) \frac{D\sqrt{t_i}}{4C} = \frac{D^2}{16C} \] Therefore, for large $t_i$, from (\ref{eq:foop}) we have: \[ B \geq \frac{D^2}{32C} \] The above inequality must hold for all $D>0$. This clearly does not hold for $D> \sqrt{32 BC}$, yielding a contradiction and hence proving the result (\ref{eq:srthm1}). To prove (\ref{eq:srthm2}), we use the fact that (\ref{eq:srthm1}) holds to get for any time $t > t_D$: \begin{eqnarray*} D/\sqrt{t} \geq \expect{Q(t)/t} \geq (\epsilon/4)Pr[Q(t)/t \geq \epsilon/4] \end{eqnarray*} Dividing the above by $\epsilon/4$ yields (\ref{eq:srthm2}). \end{proof} Now again suppose $a(t)$ and $b(t)$ satisfy (\ref{eq:c-cond}) for all $t$. Fix $\epsilon>0$ and define a constant $\alpha >0$ as follows: \[ \alpha \mbox{\raisebox{-.3ex}{$\overset{\vartriangle}{=}$}} \min\left[\frac{\epsilon}{2C}, \frac{1}{2} \right] \] Fix an integer time $t_0$ such that $t_0 \geq 1/\alpha$, and define the following sequence for $i \in \{0, 1, 2, \ldots\}$: \[ t_{i+1} = t_i + \lfloor \alpha t_i \rfloor \] Note that for all $i$ we have $t_{i+1} > t_i$ (because $\alpha t_i \geq 1$). Call the set of times in the interval $\{t_i, \ldots, t_i + \lfloor \alpha t_i \rfloor - 1\}$ the $i$th frame. \begin{lem} If $Q(t)/t \geq \epsilon$ for any time $t\geq t_0$ in the $i$th frame (for some $i \in \{0, 1, 2, \ldots\}$), then $Q(t_{i+1})/t_{i+1} \geq \epsilon/4$. \end{lem} \begin{proof} Fix $\epsilon>0$, and suppose $Q(t)/t \geq \epsilon$, where $t\geq t_0$ and $t$ is in the $i$th frame for some $i \in \{0, 1, 2, \ldots\}$. 
Then: \begin{equation} \label{eq:ti} t_i \leq t < t_{i+1} \leq t_i +\alpha t_i \end{equation} Hence: \begin{equation} \label{eq:lemgeom} t_{i+1} -t \leq \alpha t_i \end{equation} However: \[ Q(t_{i+1}) \geq Q(t) -C (t_{i+1} - t) \] Therefore (because $Q(t)/t \geq \epsilon$) we have: \begin{eqnarray} Q(t_{i+1}) &\geq& \epsilon t - C (t_{i+1} - t) \nonumber \\ &\geq& \epsilon t_i - C(t_{i+1} - t) \label{eq:ti1} \\ &\geq& \epsilon t_i - C \alpha t_i \label{eq:ti2} \\ &\geq& t_i\epsilon/2 \label{eq:ti3} \end{eqnarray} where (\ref{eq:ti1}) follows because $t \geq t_i$, (\ref{eq:ti2}) follows from (\ref{eq:lemgeom}), and (\ref{eq:ti3}) follows because $\alpha \leq \epsilon/(2C)$ (by definition of $\alpha$). Thus: \[ \frac{Q(t_{i+1})}{t_{i+1}} \geq \frac{\epsilon}{2}\left(\frac{t_i}{t_{i+1}}\right) \] However, from (\ref{eq:ti}) we have: \[ t_{i+1} - t_i \leq \alpha t_i \] and so: \[ 1 - (t_i/t_{i+1}) \leq \alpha (t_i/t_{i+1}) \leq \alpha \] Thus: \[ \frac{t_i}{t_{i+1}} \geq 1 - \alpha \geq 1/2 \] where the last inequality follows because $\alpha \leq 1/2$ (by definition of $\alpha$). Using this in (\ref{eq:ti3}) yields: \[ \frac{Q(t_{i+1})}{t_{i+1}} \geq \left(\frac{t_i}{t_{i+1}}\right)\epsilon/2 \geq \epsilon/4 \] This proves the result. \end{proof} \begin{lem} For each frame $i \in \{0, 1, 2, \ldots\}$ we have: \begin{eqnarray*} Pr[\mbox{$Q(t)/t \geq \epsilon$ for some $t$ in the $i$th frame}] \\ \leq Pr[Q(t_{i+1})/t_{i+1} \geq \epsilon/4] \end{eqnarray*} \end{lem} \begin{proof} This follows immediately from the previous Lemma. \end{proof} The remainder of the proof is similar to the standard proof of the strong law of large numbers (see, for example, \cite{billingsley}). 
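As an aside (not part of the proof), the frame construction above is easy to check numerically: the start times $t_{i+1} = t_i + \lfloor \alpha t_i\rfloor$ grow at least geometrically, which is what makes the $O(1/\sqrt{t_{i+1}})$ probability bounds summable. The parameter values below ($\epsilon$, $C$, $t_0$, the number of frames) are arbitrary illustrative choices.

```python
# Sanity check of the frame construction t_{i+1} = t_i + floor(alpha * t_i):
# the frame start times grow geometrically, so sum_i 1/sqrt(t_{i+1}) is finite.
import math

def frame_starts(epsilon, C, t0, num_frames):
    """Generate the frame start times t_0, t_1, ..., t_{num_frames}."""
    alpha = min(epsilon / (2 * C), 0.5)
    assert t0 >= 1 / alpha, "need t0 >= 1/alpha so every frame is nonempty"
    times = [t0]
    for _ in range(num_frames):
        t = times[-1]
        times.append(t + math.floor(alpha * t))
    return times

times = frame_starts(epsilon=1.0, C=2.0, t0=10, num_frames=50)
# Strictly increasing start times, i.e. nonempty frames:
assert all(t2 > t1 for t1, t2 in zip(times, times[1:]))
tail = sum(1.0 / math.sqrt(t) for t in times[1:])
print(f"t_50 = {times[-1]}, partial sum of 1/sqrt(t_i) = {tail:.3f}")
```

Sampling at the frame boundaries is thus exponentially sparse, mirroring the subsequence trick used in the classical proof of the strong law of large numbers.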
We have for any $t_0$ that starts the frames: \begin{eqnarray} && \hspace{-.3in} Pr\left[\limsup_{t\rightarrow\infty} Q(t)/t \geq \epsilon\right] \nonumber \\ &\leq& Pr\left[\sup_{t\geq t_0} Q(t)/t \geq \epsilon\right] \nonumber \\ &\leq& \sum_{i=0}^{\infty} Pr[\mbox{$Q(\tau)/\tau \geq \epsilon$ for some $\tau$ in the $i$th frame}]\nonumber \\ &\leq& \sum_{i=0}^{\infty} Pr[Q(t_{i+1})/t_{i+1} \geq \epsilon/4] \label{eq:final-summ} \end{eqnarray} Note that $t_i$ is an exponentially growing sequence (observe that $t_{i+1} \geq (1+\alpha)t_i - 1$), and so we are sampling $Q(t)/t$ at exponentially increasing times. Moreover, if $Q(t)$ is strongly stable, from Lemma \ref{lem:square-root} we know that $Pr[Q(t)/t \geq \epsilon/4] \leq O(1/\sqrt{t})$. Hence, the final sum in (\ref{eq:final-summ}) is finite and vanishes as $t_0 \rightarrow \infty$. This proves that if $Q(t)$ is strongly stable, then for any $\epsilon>0$ we know: \[ Pr\left[\limsup_{t\rightarrow\infty} Q(t)/t \geq \epsilon\right] = 0\] Because this holds for all $\epsilon>0$, it must be the case that $Q(t)/t \rightarrow 0$ with probability 1 (and so the queue is rate stable). This proof considers the case when $b(t) - a(t) \leq C$ for all $t$. The other case when $a(t) + b^-(t) \leq C$ for all $t$ is proven similarly and is omitted for brevity. \section*{Appendix D --- Proof of Theorem \ref{thm:ss-implies-mrs}} Here we prove Theorem \ref{thm:ss-implies-mrs}. Suppose there is a finite constant $C$ such that $a(t) + b^-(t) \leq C$ with probability $1$ for all $t$. For simplicity, we assume that $Q(0) = 0$, and that $a(t) + b^-(t) \leq C$ deterministically (so that we do not need to repeat the phrase ``with probability 1''). Suppose that $Q(t)$ is \emph{not} mean rate stable. We show that it is not steady state stable. 
Because $Q(t)$ is not mean rate stable, there must be an $\epsilon>0$ and an infinite collection of increasing times $t_k$ such that $\expect{Q(t_k)/t_k} \geq \epsilon$ for all $k\in\{1,2, \ldots\}$ and $\lim_{k\rightarrow\infty} t_k = \infty$. Now fix an (arbitrarily large) value $M$. We have for any time $t \leq t_k$: \[ Q(t_k) \leq Q(t) + \sum_{\tau=t}^{t_k-1} [a(\tau) + b^-(\tau)] \leq Q(t) + C(t_k-t) \] Thus: \begin{equation} \expect{Q(t)} \geq \expect{Q(t_k)} - C(t_k - t) \geq \epsilon t_k - C(t_k-t) \label{eq:tm5} \end{equation} On the other hand, for $t \leq t_k$ we have: \begin{eqnarray*} \expect{Q(t)} &\leq& MPr[Q(t)\leq M] \\ && + \expect{Q(t)|Q(t)>M}Pr[Q(t)>M] \\ &\leq& M + CtPr[Q(t)>M] \\ &\leq& M + Ct_kPr[Q(t)>M] \end{eqnarray*} where we have used the fact that $Q(t) \leq Ct \leq Ct_k$ (because $Q(0) = 0$, the queue increases by at most $C$ on each slot, and $t \leq t_k$). Thus: \[ \expect{Q(t)} \leq M + Ct_kPr[Q(t)>M] \] Combining this with (\ref{eq:tm5}) yields: \[ M + Ct_kPr[Q(t)>M] \geq \epsilon t_k - C(t_k-t) \] Therefore, for all $t \leq t_k$ we have: \begin{equation} \label{eq:t5a} Pr[Q(t)>M] \geq \frac{\epsilon t_k - C(t_k-t) - M}{Ct_k} \end{equation} Now suppose that: \begin{equation} \label{eq:t-cond2} 0 \leq (t_k-t) \leq (\epsilon/2C)t_k \end{equation} It follows from (\ref{eq:t-cond2}) that: \[ C(t_k-t) \leq (\epsilon/2) t_k \] Using this in (\ref{eq:t5a}) gives: \[ Pr[Q(t)>M] \geq \frac{(\epsilon/2)t_k - M}{Ct_k} \] The above holds for all $t$ that satisfy (\ref{eq:t-cond2}). Now assume that $k$ is large enough to ensure that $(\epsilon/2)t_k - M \geq (\epsilon/4)t_k$ (this is true for sufficiently large $k$ because $\lim_{k\rightarrow\infty} t_k = \infty$). 
Thus, for sufficiently large $k$, and if (\ref{eq:t-cond2}) holds, the above bound becomes: \begin{equation} \label{eq:t5b} Pr[Q(t)>M] \geq \frac{\epsilon}{4C} \end{equation} From (\ref{eq:t-cond2}), we see that the number of slots $t\leq t_k$ for which (\ref{eq:t5b}) holds is at least $[\epsilon/(2C)]t_k$. Therefore: \[ \frac{1}{t_k+1} \sum_{\tau=0}^{t_k} Pr[Q(\tau)>M] \geq \frac{1}{t_k+1}\left(\frac{\epsilon}{4C}\right)\left(\frac{t_k\epsilon}{2C}\right) \] It follows that: \[ \limsup_{k\rightarrow\infty}\frac{1}{t_k+1}\sum_{\tau=0}^{t_k} Pr[Q(\tau)>M] \geq \frac{\epsilon^2}{8C^2} \] Therefore, the function $g(M)$ defined by (\ref{eq:gm}) satisfies for all $M>0$: \[ g(M) \geq \frac{\epsilon^2}{8C^2} \] It follows that: \[ \lim_{M\rightarrow\infty} g(M) \geq \frac{\epsilon^2}{8C^2} > 0 \] and hence $Q(t)$ is not steady state stable. \section*{Appendix E --- Additional Sample Path Results} We begin by reviewing some general results concerning expectations of limits of non-negative random processes. These can be viewed as probabilistic interpretations of Fatou's Lemma and the Lebesgue Dominated Convergence Theorem from measure theory (where an expectation can be viewed as an integral over an appropriate probability measure). We state these without proof (see, for example, \cite{billingsley}). Both lemmas below are stated for a general non-negative stochastic process $X(t)$ defined over $t \geq 0$ (where $t$ can either be an integer index or a value in the real number line). \begin{lem} \label{lem:fatou} (Fatou's Lemma) For any non-negative stochastic process $X(t)$ for $t\geq 0$, we have: \[ \liminf_{t\rightarrow\infty} \expect{X(t)} \geq \expect{\liminf_{t\rightarrow\infty} X(t)} \] where both sides of the inequality can be infinite. 
\end{lem} \begin{lem} \label{lem:lebesgue} (Lebesgue Dominated Convergence Theorem) Suppose that $X(t)$ is a non-negative stochastic process for $t \geq 0$, and that there is a non-negative random variable $Y$, defined on the same probability space as the process $X(t)$, such that $X(t) \leq Y$ (with probability $1$) for all $t$. Further assume that $\expect{Y} < \infty$. Then: (a) $\limsup_{t\rightarrow\infty} \expect{X(t)} \leq \expect{\limsup_{t\rightarrow\infty} X(t)} \leq \expect{Y}$ (b) In addition to the assumption that $X(t) \leq Y$ with probability 1, if $X(t)$ converges to a non-negative random variable $X$ with probability 1, then the limit of $\expect{X(t)}$ is well defined, and: \[ \lim_{t\rightarrow\infty} \expect{X(t)} = \expect{X} \] \end{lem} \subsection{Applications to Queue Sample Path Analysis} \begin{lem} Let $Q(t)$ be a general non-negative random process defined over $t \in \{0, 1, 2, \ldots\}$. Suppose that the following limit is well defined as a (possibly infinite) random variable $Q_{av}$ with probability 1: \begin{equation} \lim_{t\rightarrow\infty} \frac{1}{t}\sum_{\tau=0}^{t-1} Q(\tau) = Q_{av} \: \: \mbox{ with probability 1} \label{eq:sample-path-strong} \end{equation} Then: \begin{equation*} \expect{Q_{av}} \leq \liminf_{t\rightarrow\infty} \frac{1}{t}\sum_{\tau=0}^{t-1} \expect{Q(\tau)} \end{equation*} Therefore, if $Q(t)$ is strongly stable, it must be that $Q_{av} < \infty$ with probability 1. \end{lem} \begin{proof} Define $X(t) = \frac{1}{t}\sum_{\tau=0}^{t-1}Q(\tau)$, and note that $\liminf_{t\rightarrow\infty} X(t) = Q_{av}$ with probability 1. The result then follows as an immediate consequence of Lemma \ref{lem:fatou}. 
\end{proof} Using standard Markov chain theory and renewal theory, it can be shown that the limit in (\ref{eq:sample-path-strong}) exists as a (possibly infinite) random variable $Q_{av}$ with probability 1 whenever $Q(t)$ evolves according to a Markov chain such that, with probability 1, there is at least one state that is visited infinitely often with finite mean recurrence times (not necessarily the same state on each sample path realization). This holds even if the Markov chain is not irreducible, not aperiodic, and/or has an uncountably infinite state space. The converse statement does not hold: Note that if the limit (\ref{eq:sample-path-strong}) holds with $Q_{av} < \infty$ with probability 1, this does \emph{not} always imply that $Q(t)$ is strongly stable. The same counter-example from Subsection \ref{subsection:counterexamples-rate-mean} can be used to illustrate this. \begin{lem} Let $Q(t)$ be a general non-negative random process defined over $t \in \{0, 1, 2, \ldots\}$. Suppose that for all $M>0$, the following limit exists as a random variable $h(M)$ with probability $1$: \begin{equation} \label{eq:sample-path-steady-state} \lim_{t\rightarrow\infty} \frac{1}{t}\sum_{\tau=0}^{t-1} 1\{Q(\tau)> M\} = h(M) \: \: \mbox{ with probability 1} \end{equation} where $1\{Q(\tau)>M\}$ is an indicator function that is $1$ whenever $Q(\tau)>M$, and zero otherwise. Then the following limit for $g(M)$ is well defined: \[ \lim_{t\rightarrow\infty} \frac{1}{t}\sum_{\tau=0}^{t-1} Pr[Q(\tau) > M] = g(M) \] Furthermore: \[ g(M) = \expect{h(M)} \] It follows that if $Q(t)$ is steady state stable (so that $g(M) \rightarrow 0$ as $M \rightarrow \infty$), then: \[ \lim_{t\rightarrow\infty} h(M) = 0 \: \: \mbox{ with probability 1} \] \end{lem} \begin{proof} The result can be shown by application of Lemma \ref{lem:lebesgue}, using $X(t) = \frac{1}{t}\sum_{\tau=0}^{t-1} 1\{Q(\tau)>M\}$ and $Y=1$. 
\end{proof} By basic renewal theory and Markov chain theory, it can be shown that the limit in (\ref{eq:sample-path-steady-state}) holds whenever $Q(t)$ evolves according to a Markov chain (possibly non-irreducible, non-aperiodic) and such that either the event $\{Q(t)>M\}$ or its complement $\{Q(t) \leq M\}$ can be written as the union of a finite number of states. \section*{Appendix F --- Proof of Lemma \ref{lem:drift2}} \begin{proof} (Lemma \ref{lem:drift2}) By definition of a $C$-approximate decision, at every slot $\tau \in \{t, \ldots, t+T-1\}$ we have: \begin{eqnarray*} V\expect{f(\hat{\bv{x}}(\alpha(\tau), \omega(\tau)))|\bv{\Theta}(t)} \\ + \sum_{l=1}^L\expect{Z_l(\tau)g_l(\hat{\bv{x}}(\alpha(\tau), \omega(\tau))) |\bv{\Theta}(t)}\\ + \sum_{k=1}^K\expect{Q_k(\tau)a_k(\tau)|\bv{\Theta}(t)} \\ + \sum_{k=1}^K\expect{Q_k(\tau)[\hat{y}_k(\alpha(\tau), \omega(\tau)) - \hat{b}_k(\alpha(\tau), \omega(\tau))]|\bv{\Theta}(t)} \\ \leq C + V\expect{f(\hat{\bv{x}}(\alpha^*(\tau), \omega(\tau)))|\bv{\Theta}(t)} \\ + \sum_{l=1}^L\expect{Z_l(\tau)g_l(\hat{\bv{x}}(\alpha^*(\tau), \omega(\tau))) |\bv{\Theta}(t)}\\ + \sum_{k=1}^K\expect{Q_k(\tau)a_k(\tau)|\bv{\Theta}(t)} \\ + \sum_{k=1}^K\expect{Q_k(\tau)[\hat{y}_k(\alpha^*(\tau), \omega(\tau)) - \hat{b}_k(\alpha^*(\tau), \omega(\tau))]|\bv{\Theta}(t)} \end{eqnarray*} where $\alpha^*(\tau)$ is any other (possibly randomized) decision in $\script{A}_{\omega(\tau)}$. 
However, we have for all $\tau \in \{t, \ldots, t+T-1\}$: \begin{eqnarray*} |Z_l(\tau) - Z_l(t)| &\leq& \sum_{v=t}^{\tau-1} |g_l(\bv{x}(v))| \\ |Q_k(\tau) - Q_k(t)| &\leq& \sum_{v=t}^{\tau-1} [y_k(v) + a_k(v) + b_k(v)] \end{eqnarray*} Thus: \begin{eqnarray*} V\expect{f(\hat{\bv{x}}(\alpha(\tau), \omega(\tau)))|\bv{\Theta}(t)} \\ + \sum_{l=1}^L\expect{Z_l(t)g_l(\hat{\bv{x}}(\alpha(\tau), \omega(\tau))) |\bv{\Theta}(t)}\\ + \sum_{k=1}^K\expect{Q_k(t)a_k(\tau)|\bv{\Theta}(t)} \\ + \sum_{k=1}^K\expect{Q_k(t)[\hat{y}_k(\alpha(\tau), \omega(\tau)) - \hat{b}_k(\alpha(\tau), \omega(\tau))]|\bv{\Theta}(t)} \\ \leq C + (\tau-t)2\expect{\hat{D} |\bv{\Theta}(t)} \\ + V\expect{f(\hat{\bv{x}}(\alpha^*(\tau), \omega(\tau)))|\bv{\Theta}(t)} \\ + \sum_{l=1}^L\expect{Z_l(t)g_l(\hat{\bv{x}}(\alpha^*(\tau), \omega(\tau))) |\bv{\Theta}(t)}\\ + \sum_{k=1}^K\expect{Q_k(t)a_k(\tau)|\bv{\Theta}(t)} \\ + \sum_{k=1}^K\expect{Q_k(t)[\hat{y}_k(\alpha^*(\tau), \omega(\tau)) - \hat{b}_k(\alpha^*(\tau), \omega(\tau))]|\bv{\Theta}(t)} \end{eqnarray*} where $2\hat{D}$ is a random variable, with $\expect{\hat{D}} \leq D$, where $D$ is related (via Cauchy-Schwartz) to the worst case second moments of $g_l(\bv{x}(t))$, $a_k(t)$, $b_k(t)$, $y_k(t)$. A more detailed description of $D$ is given at the end of this subsection. Summing over $\tau \in \{t, \ldots, t+T-1\}$ yields: \begin{eqnarray*} V\expect{\sum_{\tau=t}^{t+T-1}f(\hat{\bv{x}}(\alpha(\tau), \omega(\tau)))|\bv{\Theta}(t)} \\ + \sum_{l=1}^L\sum_{\tau=t}^{t+T-1}\expect{Z_l(t)g_l(\hat{\bv{x}}(\alpha(\tau), \omega(\tau))) |\bv{\Theta}(t)}\\ + \sum_{k=1}^K\sum_{\tau=t}^{t+T-1}\mathbb{E}\left\{Q_k(t)[\hat{y}_k(\alpha(\tau), \omega(\tau)) - \right. \\ \left. 
\hat{b}_k(\alpha(\tau), \omega(\tau))]|\bv{\Theta}(t) \right\} \\ \leq CT + T(T-1)\expect{\hat{D}|\bv{\Theta}(t)} \\ + V\sum_{\tau=t}^{t+T-1}\expect{f(\hat{\bv{x}}(\alpha^*(\tau), \omega(\tau)))|\bv{\Theta}(t)} \\ + \sum_{l=1}^L\sum_{\tau=t}^{t+T-1}\expect{Z_l(t)g_l(\hat{\bv{x}}(\alpha^*(\tau), \omega(\tau))) |\bv{\Theta}(t)}\\ + \sum_{k=1}^K\sum_{\tau=t}^{t+T-1}\mathbb{E}\left\{Q_k(t)[\hat{y}_k(\alpha^*(\tau), \omega(\tau)) - \right. \\\left.\hat{b}_k(\alpha^*(\tau), \omega(\tau))]|\bv{\Theta}(t)\right\} \end{eqnarray*} The above inequality is an upper bound for the right hand side of (\ref{eq:drift}), which proves the result of (\ref{eq:drift2}). \end{proof} For more details on $\hat{D}$ and $D$, we note that $2\hat{D}$ can be defined to be $0$ if $\tau = t$, and if $\tau > t$ it is defined by: \begin{eqnarray*} 2\hat{D} \mbox{\raisebox{-.3ex}{$\overset{\vartriangle}{=}$}} \sum_{k=1}^K[\hat{y}_k(\alpha^*(\tau), \omega(\tau)) + \hat{b}_k(\alpha^*(\tau), \omega(\tau))]\times \\ \left[ \frac{1}{\tau -t}\sum_{v=t}^{\tau-1}[y_k(v)+a_k(v)+b_k(v)]\right] \\ + \sum_{k=1}^K[\hat{y}_k(\alpha(\tau), \omega(\tau)) + \hat{b}_k(\alpha(\tau), \omega(\tau))]\times \\ \left[ \frac{1}{\tau-t}\sum_{v=t}^{\tau-1}[y_k(v)+a_k(v)+b_k(v)]\right] \\ + \sum_{l=1}^L|g_l(\hat{\bv{x}}(\alpha^*(\tau), \omega(\tau)))|\frac{1}{\tau-t}\sum_{v=t}^{\tau-1}|g_l(\bv{x}(v))| \\ + \sum_{l=1}^L|g_l(\hat{\bv{x}}(\alpha(\tau), \omega(\tau)))|\frac{1}{\tau-t}\sum_{v=t}^{\tau-1}|g_l(\bv{x}(v))| \end{eqnarray*} By the Cauchy-Schwartz inequality and Jensen's inequality, it follows that $2\expect{\hat{D}} \leq 2D$, where $D$ is a finite constant that satisfies: \begin{eqnarray*} D \geq \sum_{k=1}^K \expect{(\hat{y}_k(\alpha', \omega) + a_k(0) + \hat{b}_k(\alpha',\omega))^2} \\ + \sum_{l=1}^L\expect{g_l(\hat{\bv{x}}(\alpha'', \omega))^2} \end{eqnarray*} where the first term represents the worst case second moment of $\hat{y}_k(\cdot)+ a_k(0) + \hat{b}_k(\cdot)$ over any decision $\alpha'$ (where the expectation 
is with respect to the stationary distribution $\pi(\omega)$ for $\omega$), and the second term is the worst case second moment of $g_l(\hat{\bv{x}}(\cdot))$. \section*{Appendix G --- Proof of Lemma \ref{lem:driftyy}} \begin{proof} (Lemma \ref{lem:driftyy}) From (\ref{eq:multi-q-dynamics}), it can be shown that (see \cite{now}): \begin{eqnarray*} Q_k(t+T) \leq \max\left[Q_k(t) - \sum_{\tau=t}^{t+T-1} b_k(\tau), 0\right] \\ + \sum_{\tau=t}^{t+T-1}[a_k(\tau) + y_k(\tau)] \end{eqnarray*} Using the fact that for $Q\geq0$, $\mu\geq 0$, $a\geq 0$: \[ \frac{1}{2}(\max[Q-\mu,0] + a)^2 \leq \frac{Q^2 + \mu^2 + a^2}{2} + Q(a - \mu) \] yields: \begin{eqnarray*} \frac{Q_k(t+T)^2}{2} \leq \frac{Q_k(t)^2}{2} + \frac{1}{2}\left(\sum_{\tau=t}^{t+T-1} b_k(\tau)\right)^2 \\ + \frac{1}{2}\left( \sum_{\tau=t}^{t+T-1}[a_k(\tau) + y_k(\tau)] \right)^2 \\ + Q_k(t)\left( \sum_{\tau=t}^{t+T-1}[a_k(\tau) + y_k(\tau) - b_k(\tau)] \right) \end{eqnarray*} Similarly, from (\ref{eq:z-dynamics}) it can be shown that: \[ Z_l(t+T) \leq \max\left[Z_l(t) + \sum_{\tau=t}^{t+T-1}g_l(\bv{x}(\tau)) , \sum_{\tau=t}^{t+T-1}|g_l(\bv{x}(\tau))|\right] \] and so: \begin{eqnarray*} \frac{Z_l(t+T)^2}{2} \leq \frac{Z_l(t)^2}{2} + \left(\sum_{\tau=t}^{t+T-1}|g_l(\bv{x}(\tau))|\right)^2 \\ + Z_l(t)\sum_{\tau=t}^{t+T-1}g_l(\bv{x}(\tau)) \end{eqnarray*} Combining these, summing, and taking conditional expectations yields: \begin{eqnarray*} \Delta_T(\bv{\Theta}(t)) \leq T^2\expect{\hat{B}|\bv{\Theta}(t)} \\ + \sum_{k=1}^KQ_k(t)\sum_{\tau=t}^{t+T-1}\expect{a_k(\tau) + y_k(\tau) - b_k(\tau)|\bv{\Theta}(t)}\\ + \sum_{l=1}^LZ_l(t)\sum_{\tau=t}^{t+T-1} \expect{g_l(\bv{x}(\tau))|\bv{\Theta}(t)} \end{eqnarray*} where $\hat{B}$ is a random variable that satisfies: \begin{eqnarray*} \expect{\hat{B}|\bv{\Theta}(t)} = \frac{1}{2}\sum_{k=1}^K\expect{\left(\frac{1}{T}\sum_{\tau=t}^{t+T-1}b_k(\tau)\right)^2|\bv{\Theta}(t)} \\ + 
\frac{1}{2}\sum_{k=1}^K\expect{\left(\frac{1}{T}\sum_{\tau=t}^{t+T-1}[a_k(\tau)+y_k(\tau)]\right)^2|\bv{\Theta}(t)} \\ + \sum_{l=1}^L\expect{\left(\frac{1}{T}\sum_{\tau=t}^{t+T-1}|g_l(\bv{x}(\tau))|\right)^2|\bv{\Theta}(t)} \end{eqnarray*} By Jensen's inequality, we have that $\expect{\hat{B}} \leq B$, where $B$ is a constant that satisfies for all $t$: \begin{eqnarray*} B \geq \frac{1}{2}\sum_{k=1}^K\expect{b_k(t)^2} + \frac{1}{2}\sum_{k=1}^K\expect{(a_k(t)+y_k(t))^2} \\ + \sum_{l=1}^L\expect{g_l(\bv{x}(t))^2} \end{eqnarray*} Such a finite constant exists by the second moment boundedness assumptions. The result of (\ref{eq:drift}) follows by adding the following term to both sides: \[ V\sum_{\tau=t}^{t+T-1} \expect{f(\bv{x}(\tau))|\bv{\Theta}(t)} \] \end{proof} \bibliographystyle{unsrt}
\section{Introduction} \label{intro} This note is concerned with the numerical representation of preference relations induced by a special class of set-valued maps. Recall that a {\em preference (relation)} over the elements of a set $L$ is a reflexive and transitive binary relation on $L$. A preference is said to be {\em complete} if any two elements $x,y\in L$ are comparable in the sense that it is always possible to determine whether $x$ is preferred to $y$ or viceversa. Following the terminology of Dubra et al.~\cite{DubraMaccheroniOk2004}, a family $\cU$ of maps $u:L\to[-\infty,\infty]$ is a {\em multi-utility representation} of a preference $\succeq$ if for all $x,y\in L$ we have \[ x\succeq y \ \iff \ u(x)\geq u(y) \ \ \mbox{for every} \ u\in\cU. \] In words, a multi-utility representation provides a numerical representation for the given preference relation via a family of ``utility functionals''. In view of their greater tractability, multi-utility representations play a fundamental role in applications. A standard problem in this context is to find representations that are at the same time parsimonious (the family of representing functionals is indexed by a small set of parameters) and well behaved (the representing functionals satisfy nice regularity properties with respect to the structure of the underlying set). This is especially important for incomplete preferences, which cannot be represented by a unique functional. \medskip In the broad field of economics and finance, incomplete preferences arise naturally in the presence of multi-criteria decision making. 
We refer to Aumann~\cite{Aumann1962} and Bewley~\cite{Bewley2002} for two classical references and to Ok~\cite{Ok2002}, Dubra et al.~\cite{DubraMaccheroniOk2004}, Mandler~\cite{Mandler2005}, Eliaz and Ok~\cite{EliazOk2006}, Kaminski~\cite{Kaminski2007}, Evren~\cite{Evren2008,Evren2014}, Evren and Ok~\cite{EvrenOk2011}, Bosi and Herden~\cite{Bosiherden2012}, Galaabaatar and Karni~\cite{GalaabaatarKarni2013}, Nishimura and Ok~\cite{NishimuraOk2016}, Bosi et al.~\cite{BosiEstevanZuanon2018}, and Bevilacqua et al.~\cite{BevilacquaBosiKaucicZuanon2019} for an overview of contributions to the theory of incomplete preferences and their multi-utility representations in the last twenty years. \medskip The goal of this note is to establish numerical representations of preference relations induced by a special class of set-valued maps that have been the subject of intense research in the recent mathematical finance literature. To introduce the underlying economic problem, consider an economic agent who is confronted with the problem of ranking a number of different alternatives represented by the elements of a set $L$. The agent has specified a target set of acceptable or attractive alternatives $A\subseteq L$. We assume that, if an alternative is not acceptable, it can be made acceptable upon implementation of a suitable admissible action. We represent the results of admissible actions by the elements of a set $M\subseteq L$ and assume that a given alternative $x\in L$ can be transformed through a given $m\in M$ into the new alternative $x+m$. The objective of the agent is then to identify, for each alternative, all the admissible actions that can be implemented to move said alternative inside the target set by way of translations. This naturally leads to the set-valued map $R:L\rightrightarrows M$ defined by \[ R(x) := \{m\in M \,: \ x+m\in A\} = M\cap(A-x). 
\] The map $R$ can be seen as a generalization of the set-valued risk measures studied by Jouini et al.~\cite{JouiniMeddebTouzi2004}, Kulikov~\cite{Kulikov2008}, Hamel and Heyde~\cite{HamelHeyde2010}, Hamel et al.~\cite{HamelHeydeRudloff2011}, and Molchanov and Cascos~\cite{MolchanovCascos2016} in the context of markets with transaction costs; by Haier et al.~\cite{HaierMolchanovSchmutz2016} in the context of intragroup transfers; by Feinstein et al.~\cite{FeinsteinRudloffWeber2017}, Armenti et al.~\cite{ArmentiCrepeyDrapeauPapapantoleon2018}, and Ararat and Rudloff~\cite{AraratRudloff} in the context of systemic risk. We refer to these contributions for a discussion about the financial interpretation of set-valued risk measures in the respective fields of application and to Section~\ref{sect: applications} for some concrete examples in the context of multi-currency markets with transaction costs and systemic risk. \medskip The set-valued map $R$ defined above induces a preference relation on $L$ by setting \[ x\succeq_R y \ :\iff \ R(x)\supseteq R(y). \] According to this preference, the agent prefers $x$ to $y$ if every admissible action through which we can move $y$ into the target set will also allow us to transport $x$ there. In other terms, $x$ is preferred to $y$ if it is easier to make $x$ acceptable compared to $y$. The goal of this note is to establish numerical representations of the preference $\succeq_R$. Since this preference, as shown below, is not complete in general, we have to deal with multi-utility representations. In particular, we look for representations consisting of (semi)continuous utility functionals. We achieve this by establishing suitable (dual) representations of the set-valued map $R$. 
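As an illustration (not part of the formal development), the map $R$ and the induced preference can be computed explicitly after discretizing $M$. The sketch below assumes the toy data $L=\mathbb{R}^3$, $A=K=\mathbb{R}^3_+$, $M=\mathbb{R}^2\times\{0\}$, with admissible actions restricted to a small integer grid; these are illustrative choices only.

```python
# Toy sketch of R(x) = M n (A - x) on a grid: A = K = R^3_+, M = R^2 x {0}.
# The grid bounds are an arbitrary discretization choice.
import itertools

GRID = range(-3, 4)  # admissible actions m = (m1, m2, 0) with m1, m2 in GRID

def in_A(z):
    """Acceptance set A = R^3_+ (componentwise nonnegative)."""
    return all(c >= 0 for c in z)

def R(x):
    """All grid actions m in M that move x into A."""
    return {(m1, m2) for m1, m2 in itertools.product(GRID, GRID)
            if in_A((x[0] + m1, x[1] + m2, x[2]))}

def preferred(x, y):
    """x is preferred to y under the induced relation iff R(x) contains R(y)."""
    return R(x) >= R(y)

x, y = (0, 0, 0), (1, -1, 0)
print(preferred(x, y), preferred(y, x))  # -> False False
```

With these toy data, neither alternative is preferred to the other, so the induced preference is incomplete already in this small example.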
\medskip Our results provide a unifying perspective on the existing dual representations of set-valued risk measures and on the corresponding multi-utility representations, which, to the best of our knowledge, have never been explicitly investigated in the literature. We illustrate the advantages of such a unifying approach by discussing applications to multi-currency markets with transaction costs and systemic risk. In addition, we highlight where our strategy for establishing dual representations differs from the standard arguments used in the literature. The note is structured as follows. The necessary mathematical background is collected in Section~\ref{sect: math background}. The standing assumptions on the space of alternatives and the main properties of the set-valued map under investigation are presented in Section~\ref{sect: setting}. The main results on dual and multi-utility representations are established in Section~\ref{sect: multi utility representations} and are applied to a number of concrete situations in Section~\ref{sect: applications}. \section{Mathematical background} \label{sect: math background} In this section we collect the necessary mathematical background and fix the notation and terminology used throughout the paper. We refer to Rockafellar~\cite{Rockafellar1974} and Z\u{a}linescu~\cite{Zalinescu2002} for a thorough presentation of duality for topological vector spaces. Moreover, we refer to Aubin and Ekeland~\cite{AubinEkeland1984} for a variety of results on support functions and barrier cones. \medskip Let $L$ be a real locally convex Hausdorff topological vector space. The topological dual of $L$ is denoted by $L'$. Any linear subspace $M\subseteq L$ is canonically equipped with the relative topology inherited from $L$. The corresponding dual space is denoted by $M'$. For every set $A\subseteq L$ we denote by $\Interior(A)$ and $\Closure(A)$ the interior and the closure of $A$, respectively. 
We say that $A$ is convex if $\lambda A+(1-\lambda) A\subseteq A$ for every $\lambda\in[0,1]$ and that $A$ is a cone if $\lambda A\subseteq A$ for every $\lambda\in[0,\infty)$. The (lower) support function of $A$ is the map $\sigma_A:L'\to[-\infty,\infty]$ defined by \[ \sigma_A(\psi) := \inf_{x\in A}\psi(x). \] The effective domain of $\sigma_A$ is called the barrier cone of $A$ and is denoted by \[ \barr(A) := \{\,\psi\in L' \,: \ \sigma_A(\psi)>-\infty\}. \] It follows from the Hahn-Banach Theorem that, if $A$ is closed and convex, then it can be represented as the intersection of all the halfspaces containing it or equivalently \begin{equation} \label{eq: external representation} A = \bigcap_{\psi\in L'}\{x\in L \,: \ \psi(x)\geq\sigma_A(\psi)\} = \bigcap_{\psi\in\barr(A)}\{x\in L \,: \ \psi(x)\geq\sigma_A(\psi)\}. \end{equation} If $A$ is a cone, then $\barr(A)$ coincides with the polar or dual cone of $A$, i.e. \[ \barr(A) = A^+ := \{\psi\in L' \,: \ \psi(x)\geq0, \ \forall x\in A\}. \] If $A$ is a vector space, then $\barr(A)$ coincides with the annihilator of $A$, i.e. \[ \barr(A) = A^\bot := \{\psi\in L' \,: \ \psi(x)=0, \ \forall x\in A\}. \] Finally, if $A+K\subseteq A$ for some cone $K\subseteq L$, then $\barr(A)\subseteq K^+$. \section{The setting} \label{sect: setting} Throughout the remainder of the note, we assume that $L$ is a real locally convex Hausdorff topological vector space. We also fix a closed convex cone $K\subseteq L$ satisfying $K-K=L$ and consider the induced partial order defined by \[ x\succeq_Ky \ :\iff \ x-y\in K. \] The above partial order is meant to capture an ``objective'' preference relation shared by all agents. This is akin to the ``better for sure'' preference in Drapeau and Kupper~\cite{DrapeauKupper2013}. \begin{assumption} We stipulate the following assumptions on $A$ and $M$: \begin{enumerate} \item[(A1)] $A$ is closed, convex, and satisfies $A+K\subseteq A$. 
\item[(A2)] $M$ is a closed linear subspace of $L$ such that $M\cap K\neq\{0\}$. \item[(A3)] $R(x)\notin\{\emptyset,M\}$ for some $x\in L$. \end{enumerate} \end{assumption} \smallskip The next proposition collects a number of basic properties of the set-valued map $R$ and its associated preference $\succeq_R$. The properties of $R$ are aligned with those discussed in Hamel and Heyde~\cite{HamelHeyde2010} and Hamel et al.~\cite{HamelHeydeRudloff2011}. \begin{proposition} \label{general properties} \begin{enumerate}[(i)] \item $\succeq_R$ is monotone with respect to $K$, i.e.\ for all $x,y\in L$ \[ x\succeq_Ky \ \implies \ x\succeq_Ry. \] \item $\succeq_R$ is convex, i.e.\ for all $x,y\in L$ and $\lambda\in[0,1]$ \[ x\succeq_Ry \ \implies \ \lambda x+(1-\lambda)y\succeq_Ry. \] \item $R(x)+K\cap M\subseteq R(x)$ for every $x\in L$. \item $R(x+m)=R(x)-m$ for all $x\in L$ and $m\in M$. \item $R(\lambda x+(1-\lambda)y)\supseteq\lambda R(x)+(1-\lambda)R(y)$ for all $x,y\in L$ and $\lambda\in[0,1]$. \item $R(x)$ is convex and closed for every $x\in L$. \item $R(x)\neq M$ for every $x\in L$. \end{enumerate} \end{proposition} \begin{proof} To establish (i), assume that $x\succeq_Ky$ for $x,y\in L$. For every $m\in R(y)$ we have \[ x+m=y+m+x-y\in A+K\subseteq A. \] This shows that $m\in R(x)$ as well, so that $x\succeq_Ry$. To establish (ii), take $\lambda\in[0,1]$ and assume that $x\succeq_Ry$. For every $m\in R(y)$ we have $y+m\in A$ and, hence, $x+m\in A$. This yields \[ \lambda x+(1-\lambda)y+m=\lambda(x+m)+(1-\lambda)(y+m)\in \lambda A+(1-\lambda)A \subseteq A, \] showing that $m\in R(\lambda x+(1-\lambda)y)$. In sum, $\lambda x+(1-\lambda)y\succeq_R y$. To see that properties (iii) to (vi) hold, it suffices to recall that $R(x)=M\cap(A-x)$ for every $x\in L$. Finally, to establish (vii), assume that $R(x)=M$ for some $x\in L$. Take any $y\in L$ and assume that $R(y)$ is nonempty so that $y+m\in A$ for some $m\in M$. 
For all $n\in M$ and $\lambda\in(0,1]$ we have \[ \lambda\bigg(x+\frac{1}{\lambda}(n-m)\bigg)+(1-\lambda)(y+m) \in \lambda A+(1-\lambda)A \subseteq A \] by convexity. Hence, letting $\lambda\to0$, we obtain $y+n\in A$ by closedness. Since $n$ was arbitrary, we infer that $R(y)=M$. This contradicts assumption (A3), showing that $R(x)\neq M$ must hold for every $x\in L$. \end{proof} \smallskip \begin{remark} (i) If $M$ is spanned by a single element, then $\succeq_R$ is complete. Indeed, in this case, we can always assume that $M$ is spanned by a nonzero element $m\in M\cap K$ by our standing assumption. Then, for every $x\in L$ such that $R(x)\neq\emptyset$ we see that \[ R(x) = \{\lambda m \,: \ \lambda\in[\lambda_x,\infty)\} \] for a suitable $\lambda_x\in\R$. This shows that $\succeq_R$ is complete. \smallskip (ii) In general, the preference $\succeq_R$ is not complete when $M$ is spanned by more than one element. For instance, let $L=\R^3$ and assume that $K=A=\R^3_+$ and $M=\R^2\times\{0\}$. For $x=0$ and $y=(1,-1,0)$ we respectively have \[ R(x)=\{m\in M \,: \ m_1\geq0, \ m_2\geq0\} \ \ \ \mbox{and} \ \ \ R(y)=\{m\in M \,: \ m_1\geq-1, \ m_2\geq1\}. \] Clearly, neither $x\succeq_Ry$ nor $y\succeq_Rx$ holds, showing that $\succeq_R$ is not complete. \smallskip (iii) Sometimes the preference $\succeq_R$ is complete even if $M$ is spanned by more than one element. For instance, let $L=\R^3$ and assume that $K=A=\R^2_+\times\R$ and $M=\{0\}\times\R^2$. For every $x\in L$ such that $R(x)\neq\emptyset$ we have \[ R(x) = \{m\in M \,: \ m_2\geq-x_2\}. \] This shows that $\succeq_R$ is complete. \end{remark} \section{Multi-utility representations} \label{sect: multi utility representations} In this section we establish a variety of multi-utility representations of the preference induced by $R$, which are derived from suitable representations of the sets $R(x)$. 
As highlighted below, both representations have a strong link with (scalar) risk measures and their dual representations. We refer to the appendix for the necessary mathematical background and notation. \medskip The first multi-utility representation is based on the following scalarizations of $R$. Here, we set \[ K_M^+ := \{\pi\in M' \,: \ \pi(m)\geq0, \ \forall m\in K\cap M\}. \] \smallskip \begin{definition} For every $\pi\in K_M^+$ we define a map $\rho_\pi:L\to[-\infty,\infty]$ by setting \[ \rho_\pi(x) := \inf\{\pi(m) \,: \ m\in M, \ x+m\in A\}. \] Moreover, we define a map $u_\pi:L\to[-\infty,\infty]$ by setting \[ u_\pi(x) := -\rho_\pi(x). \] \end{definition} \smallskip The functionals $\rho_\pi$ are examples of the risk measures introduced in F\"{o}llmer and Schied~\cite{FoellmerSchied2002} and generalized in Frittelli and Scandolo~\cite{FrittelliScandolo2006}. We refer to Farkas et al.~\cite{FarkasKochMunari2014,FarkasKochMunari2015} for a thorough investigation of such functionals at our level of generality. The next proposition features some of their standard properties, which follow immediately from Proposition~\ref{general properties}. Since the announced multi-utility representation will be expressed in terms of the negatives of the functionals $\rho_\pi$, the proposition is stated in terms of the utility functionals $u_\pi$. \begin{proposition} For every $\pi\in K_M^+$ the functional $u_\pi$ satisfies the following properties: \begin{enumerate}[(i)] \item $u_\pi$ is translative along $M$, i.e.\ for all $x\in L$ and $m\in M$ \[ u_\pi(x+m) = u_\pi(x)+\pi(m). \] \item $u_\pi$ is nondecreasing with respect to $\succeq_K$, i.e.\ for all $x,y\in L$ \[ x\succeq_K y \ \implies \ u_\pi(x)\geq u_\pi(y). \] \item $u_\pi$ is concave, i.e.\ for all $x,y\in L$ and $\lambda\in[0,1]$ \[ u_\pi(\lambda x+(1-\lambda)y) \geq \lambda u_\pi(x)+(1-\lambda)u_\pi(y). 
\] \end{enumerate} \end{proposition} \smallskip \begin{remark} Note that, unless $M$ is spanned by one element, the closedness of the set $A$ is not sufficient to ensure that the functionals $\rho_\pi$ are lower semicontinuous; see Example 1 in Farkas et al.~\cite{FarkasKochMunari2015}. We refer to Hamel et al.~\cite{HamelHeydeLoehneRudloffSchrage2015} for a discussion on general sufficient conditions ensuring the lower semicontinuity of scalarizations of set-valued maps and to Farkas et al.~\cite{FarkasKochMunari2015} and Baes et al.~\cite{BaesKochMunari2020} for a variety of sufficient conditions in a risk measure setting. \end{remark} \smallskip The first multi-utility representation of the preference induced by $R$ rests on the intimate link between the risk measures $\rho_\pi$ and the support functions corresponding to $R$. \begin{lemma} \label{theo: dual representation 1} For every $x\in L$ the set $R(x)$ can be represented as \[ R(x) = \bigcap_{\pi\in K_M^+\setminus\{0\}}\{m\in M \,: \ \pi(m)\geq\rho_\pi(x)\}. \] \end{lemma} \begin{proof} The result is clear if $R(x)=\emptyset$. Otherwise, recall that $R(x)$ is closed and convex by Proposition~\ref{general properties} and observe that $\rho_\pi(x)=\sigma_{R(x)}(\pi)$ for every $\pi\in M'$. We can apply the dual representation~\eqref{eq: external representation} in the context of the space $M$ to obtain \[ R(x) = \bigcap_{\pi\in\barr(R(x))\setminus\{0\}}\{m\in M \,: \ \pi(m)\geq\rho_\pi(x)\}. \] As $R(x)+K\cap M\subseteq R(x)$ again by Proposition~\ref{general properties}, we conclude by noting that the barrier cone of $R(x)$ must be contained in $K_M^+$. \end{proof} \smallskip \begin{theorem} \label{cor: first multi utility representation} The preference $\succeq_R$ can be represented by the multi-utility family \[ \cU=\{u_\pi \,: \ \pi\in K_M^+\setminus\{0\}\}. \] \end{theorem} \begin{proof} We rely on Lemma~\ref{theo: dual representation 1}. Take any $x,y\in L$. 
If $x\succeq_Ry$, then $R(x)\supseteq R(y)$ and \[ \rho_\pi(x) = \sigma_{R(x)}(\pi) \leq \sigma_{R(y)}(\pi) = \rho_\pi(y) \] for every $\pi\in K_M^+\setminus\{0\}$. Conversely, if $\rho_\pi(x)\leq\rho_\pi(y)$ for every $\pi\in K_M^+\setminus\{0\}$, then for each $m\in R(y)$ we have $\pi(m) \geq \rho_\pi(y) \geq \rho_\pi(x)$ for every $\pi\in K_M^+\setminus\{0\}$, so that $m\in R(x)$. This yields $x\succeq_R y$ and concludes the proof. \end{proof} \smallskip \begin{remark} The simple representation in Lemma~\ref{theo: dual representation 1} shows that the set-valued map $R$ is completely characterized by the family of functionals $\rho_\pi$. In the context of risk measures, one could say that a set-valued risk measure is completely characterized by the corresponding family of scalar risk measures. This corresponds to the ``setification'' formula in Section 4.2 in Hamel et al.~\cite{HamelHeydeLoehneRudloffSchrage2015}. \end{remark} \smallskip We aim to improve the above representation in a twofold way. First, we want to find a multi-utility representation consisting of a smaller number of representing functionals. This is important to ensure a more parsimonious, hence tractable, representation. Second, we want to establish a multi-utility representation consisting of (semi)continuous representing functionals. This is important in applications, e.g.\ in optimization problems where the preference appears in the optimization domain. \medskip The second multi-utility representation will be expressed in terms of the following utility functionals. Here, for any functional $\pi\in M'$ we denote by $\ext(\pi)$ the set of all linear continuous extensions of $\pi$ to the whole space $L$, i.e. \[ \ext(\pi) := \{\psi\in L' \,: \ \psi(m)=\pi(m), \ \forall m\in M\}. 
\] \smallskip \begin{definition} For every $\pi\in K_M^+$ we define a map $\rho_\pi^\ast:L\to[-\infty,\infty]$ by setting \[ \rho_\pi^\ast(x) = \sup_{\psi\in \ext(\pi)}\{\sigma_A(\psi)-\psi(x)\} = \sup_{\psi\in\ext(\pi)\cap\barr(A)}\{\sigma_A(\psi)-\psi(x)\}. \] Moreover, we define a map $u_\pi^\ast:L\to[-\infty,\infty]$ by setting \[ u_\pi^\ast(x) = -\rho_\pi^\ast(x). \] (If $A$ is a cone, then $\sigma_A=0$ on $\barr(A)$ and the above maps simplify accordingly). \end{definition} \smallskip The functionals $\rho_\pi^\ast$ are inspired by the dual representation of the risk measures $\rho_\pi$, see e.g.\ Frittelli and Scandolo~\cite{FrittelliScandolo2006} or Farkas et al.~\cite{FarkasKochMunari2015}. The precise link is shown in Proposition~\ref{prop: lsc hull} below. For the time being, we are interested in highlighting some properties of the functionals $\rho_\pi^\ast$, or equivalently $u_\pi^\ast$, and proceeding to our desired multi-utility representation. \begin{proposition} For every $\pi\in K_M^+$ the functional $u_\pi^\ast$ satisfies the following properties: \begin{enumerate}[(i)] \item $u_\pi^\ast$ is translative along $M$, i.e.\ for all $x\in L$ and $m\in M$ \[ u^\ast_\pi(x+m) = u^\ast_\pi(x)+\pi(m). \] \item $u^\ast_\pi$ is nondecreasing with respect to $\succeq_K$, i.e.\ for all $x,y\in L$ \[ x\succeq_K y \ \implies \ u^\ast_\pi(x)\geq u^\ast_\pi(y). \] \item $u^\ast_\pi$ is concave, i.e.\ for all $x,y\in L$ and $\lambda\in[0,1]$ \[ u^\ast_\pi(\lambda x+(1-\lambda)y) \geq \lambda u^\ast_\pi(x)+(1-\lambda)u^\ast_\pi(y). \] \item $u^\ast_\pi$ is upper semicontinuous, i.e.\ for every net $(x_\alpha)\subseteq L$ and every $x\in L$ \[ x_\alpha\to x \ \implies \ \limsup u^\ast_\pi(x_\alpha)\geq u^\ast_\pi(x). \] \end{enumerate} \end{proposition} \begin{proof} Translativity follows from the definition of $\rho^\ast_\pi$. Being a supremum of affine maps, it is clear that $\rho^\ast_\pi$ is convex and lower semicontinuous. 
To show monotonicity, it suffices to observe that $\barr(A)\subseteq K^+$ by (A1) and therefore \[ \rho_\pi^\ast(x) = \sup_{\psi\in\ext(\pi)\cap K^+}\{\sigma_A(\psi)-\psi(x)\} \leq \sup_{\psi\in\ext(\pi)\cap K^+}\{\sigma_A(\psi)-\psi(y)\} = \rho_\pi^\ast(y) \] for all $x,y\in L$ with $x\succeq_Ky$, where we used that $\psi(x-y)\geq0$ for every $\psi\in K^+$. \end{proof} \smallskip To streamline the proof of the announced multi-utility representation, we start with the following lemma. We denote by $\ker(\pi)$ the kernel of $\pi\in M'$, i.e. \[ \ker(\pi) := \{m\in M \,: \ \pi(m)=0\}. \] In the sequel, we will repeatedly use the fact that $\ker(\pi)$ has codimension $1$ in $M$ (provided $\pi$ is nonzero). \begin{lemma} \label{lem: properties augmented set} The set $A$ can be represented as \begin{equation} \label{eq: properties augmented set} A = \bigcap_{\pi\in K_M^+\setminus\{0\}}\Closure(A+\ker(\pi)). \end{equation} Moreover, for every $\pi\in K_M^+$ we have $\barr(\Closure(A+\ker(\pi)))=\barr(A)\cap\ker(\pi)^\bot$ and \[ \sigma_{\Closure(A+\ker(\pi))}(\psi)= \begin{cases} \sigma_A(\psi) & \mbox{if $\psi\in\ker(\pi)^\bot$},\\ -\infty & \mbox{otherwise}. \end{cases} \] \end{lemma} \begin{proof} We only prove the inclusion ``$\supseteq$'' in~\eqref{eq: properties augmented set} because the other assertions are clear. Assume that $x\in\Closure(A+\ker(\pi))$ for every nonzero $\pi\in K_M^+$ and take any $\psi\in\barr(A)$. In light of~\eqref{eq: external representation}, to conclude the proof we have to show that $\psi(x)\geq\sigma_A(\psi)$. To this effect, let $\pi_\psi$ be the restriction of $\psi$ to the space $M$. Since $\barr(A)\subseteq K^+$, it follows that $\pi_\psi\in K_M^+$. Moreover, note that $\psi\in\ker(\pi_\psi)^\bot$. As a result, we have \[ \psi \in \barr(A)\cap\ker(\pi_\psi)^\bot = \barr(A+\ker(\pi_\psi)) = \barr(\Closure(A+\ker(\pi_\psi))). 
\] Since $x\in\Closure(A+\ker(\pi_\psi))$ by our assumption, we can use~\eqref{eq: external representation} again to get \[ \psi(x) \geq \sigma_{\Closure(A+\ker(\pi_\psi))}(\psi) = \sigma_{A+\ker(\pi_\psi)}(\psi) = \sigma_A(\psi), \] where the last equality holds because $\psi\in\ker(\pi_\psi)^\bot$. This concludes the proof. \end{proof} \smallskip The next lemma records a representation of the map $R$ that will immediately yield our desired multi-utility representation with (upper) semicontinuous functionals. \begin{lemma}[{\bf Dual representation of $R$}] \label{theo: dual representation 2} For every $x\in L$ the set $R(x)$ can be represented as \begin{align*} R(x) &= \bigcap_{\pi\in K_M^+\setminus\{0\}}\{m\in M \,: \ \pi(m)\geq\rho_\pi^\ast(x)\} \\ &= \bigcap_{\pi\in K_M^+\setminus\{0\}}\bigcap_{\psi\in \ext(\pi)}\{m\in M \,: \ \pi(m)\geq\sigma_A(\psi)-\psi(x)\}. \end{align*} (If $A$ is a cone, then $\sigma_A=0$ on $\barr(A)$ and the representation simplifies accordingly). \end{lemma} \begin{proof} Fix $x\in L$. It follows from the representation in~\eqref{eq: external representation} and Lemma~\ref{lem: properties augmented set} that \begin{equation} \label{eq: dual set valued A closed} R(x) = \bigcap_{\pi\in K_M^+\setminus\{0\}}\bigcap_{\psi\in\ker(\pi)^\bot}\{m\in M \,: \ \psi(m)\geq\sigma_A(\psi)-\psi(x)\}. \end{equation} To establish the desired representation of $R(x)$ it then suffices to show that the set $\ker(\pi)^\bot$ in the right-hand side of \eqref{eq: dual set valued A closed} can be replaced by $\ext(\pi)$. To this effect, let $m\in M$ satisfy $\pi(m)\geq\sigma_A(\psi)-\psi(x)$ for all nonzero $\pi\in K_M^+$ and $\psi\in\ext(\pi)$. Moreover, take an arbitrary nonzero $\pi\in K_M^+$ and an arbitrary $\psi\in\ker(\pi)^\bot$. To conclude the proof, we have to show that \begin{equation} \label{eq: dual set valued A closed 3} \psi(m) \geq \sigma_A(\psi)-\psi(x). \end{equation} This is clear if $\psi\notin\barr(A)$ or $\psi\in\ext(\pi)$. 
Hence, assume that $\psi\in\barr(A)\setminus\ext(\pi)$. Note that, since $\pi$ is nonzero and $K-K=L$, we find $n\in K\cap M$ such that $\pi(n)>0$. Since $\barr(A)\subseteq K^+$, two situations are possible. On the one hand, if $\psi(n)>0$, then $\psi$ belongs to $\ext(\pi)$ up to a strictly positive multiple and therefore~\eqref{eq: dual set valued A closed 3} holds. On the other hand, if $\psi(n)=0$, then we must have $\psi\in M^\bot$. To deal with this case, note first that we always find a nonzero $\pi^\ast\in K_M^+$ satisfying $\ext(\pi^\ast)\cap\barr(A)\neq\emptyset$; for otherwise, for every nonzero $\pi^\ast\in K_M^+$, every functional in $\barr(A)\cap\ker(\pi^\ast)^\bot$ would annihilate $M$, and it would follow from~\eqref{eq: external representation} and~\eqref{eq: dual set valued A closed} that $R(y)=M$ for every $y\in A$, which contradicts Proposition~\ref{general properties}. Now, take $\varphi\in\ext(\pi^\ast)\cap\barr(A)$ and set $\varphi_k=\varphi+k\psi\in\ext(\pi^\ast)$ for each $k\in\N$. It follows that
\begin{align*}
\pi^\ast(m) &= \sup_{k\in\N}\varphi_k(m) \geq \sup_{k\in\N}\{\sigma_A(\varphi_k)-\varphi_k(x)\} \\
&\geq \sigma_A(\varphi)-\varphi(x)+\sup_{k\in\N}\{k(\sigma_A(\psi)-\psi(x))\}.
\end{align*}
This implies that $\psi(m)=0\geq\sigma_A(\psi)-\psi(x)$ must hold, establishing~\eqref{eq: dual set valued A closed 3}.
\end{proof}

\smallskip

\begin{theorem}
\label{cor: second multi utility representation}
The preference $\succeq_R$ can be represented by the multi-utility family
\[
\cU^\ast=\{u^\ast_\pi \,: \ \pi\in K_M^+\setminus\{0\}, \ \ext(\pi)\cap\barr(A)\neq\emptyset\}.
\]
\end{theorem}
\begin{proof}
Note that, for every $\pi\in K_M^+$ with $\ext(\pi)\cap\barr(A)=\emptyset$, we have $\rho^\ast_\pi(x)=-\infty$ for every $x\in L$. Hence, the desired assertion follows immediately from Lemma~\ref{theo: dual representation 2}; see also the proof of Theorem~\ref{cor: first multi utility representation}.
\end{proof} \smallskip \begin{remark} The statement of Lemma~\ref{theo: dual representation 2} provides a unifying formulation for the dual representations of set-valued risk measures in the literature. This is further illustrated in Section~\ref{sect: applications}. The strategy used in the proof is different from the ones adopted in the literature, which are often based on results from set-valued duality, thereby offering a complementary perspective on the existing proofs; see also Remark~\ref{rem: strategy}. \end{remark} \smallskip The next proposition shows the link between the two multi-utility representations we have established. In a sense made precise below, the representation $\cU^\ast$ can be seen as the regularization of $\cU$ by means of (upper) semicontinuous hulls. Before we show this, it is useful to single out the following dual representation of the augmented acceptance set, which should be compared with Theorem 1 in Farkas et al.~\cite{FarkasKochMunari2015}. \begin{lemma} \label{lem: representation augmented} For every $\pi\in K_M^+\setminus\{0\}$ such that $\ext(\pi)\cap\barr(A)\neq\emptyset$ we have \[ \Closure(A+\ker(\pi)) = \bigcap_{\psi\in\ext(\pi)}\{x\in L \,: \ \psi(x)\geq\sigma_A(\psi)\}. \] \end{lemma} \begin{proof} In view of~\eqref{eq: external representation} and Lemma~\ref{lem: properties augmented set}, the assertion is equivalent to \[ \bigcap_{\psi\in\ker(\pi)^\bot}\{x\in L \,: \ \psi(x)\geq\sigma_A(\psi)\} = \bigcap_{\psi\in\ext(\pi)}\{x\in L \,: \ \psi(x)\geq\sigma_A(\psi)\}. \] We only need to show the inclusion ``$\supseteq$''. To this end, we mimic the argument in the proof of Lemma~\ref{theo: dual representation 2}. Let $x\in L$ belong to the right-hand side above and take $\psi\in\ker(\pi)^\bot$. We have to show that \begin{equation} \label{eq: representation augmented} \psi(x)\geq\sigma_A(\psi). \end{equation} This is clear if $\psi\notin\barr(A)$ or $\psi\in\ext(\pi)$. 
Hence, assume that $\psi\in\barr(A)\setminus\ext(\pi)$. Note that, since $\pi$ is nonzero and $K-K=L$, we find $n\in K\cap M$ such that $\pi(n)>0$. Since $\barr(A)\subseteq K^+$, two situations are possible. On the one hand, if $\psi(n)>0$, then $\psi$ belongs to $\ext(\pi)$ up to a strictly positive multiple and therefore~\eqref{eq: representation augmented} holds. On the other hand, if $\psi(n)=0$, then we must have $\psi\in M^\bot$. In this case, take any functional $\varphi\in\ext(\pi)\cap\barr(A)$ and set $\varphi_k=k\psi+\varphi\in\ext(\pi)$ for every $k\in\N$. Then,
\[
\psi(x)+\frac{1}{k}\varphi(x) = \frac{1}{k}\varphi_k(x) \geq \frac{1}{k}\sigma_A(\varphi_k) \geq \sigma_A(\psi)+\frac{1}{k}\sigma_A(\varphi)
\]
for every $k\in\N$. Letting $k\to\infty$ yields~\eqref{eq: representation augmented} and concludes the proof.
\end{proof}

\smallskip

For a given map $f:L\to[-\infty,\infty]$ we denote by $\lsc(f)$ the largest lower semicontinuous map dominated by $f$ and, similarly, by $\usc(f)$ the smallest upper semicontinuous map dominating $f$.

\begin{proposition}
\label{prop: lsc hull}
For every $\pi\in K_M^+\setminus\{0\}$ such that $\ext(\pi)\cap\barr(A)\neq\emptyset$ the following statements hold:
\begin{enumerate}[(i)]
\item $\rho^\ast_\pi=\lsc(\rho_\pi)$.
\item $u^\ast_\pi=\usc(u_\pi)$.
\end{enumerate}
\end{proposition}
\begin{proof}
Fix a nonzero $\pi\in K_M^+$ such that $\ext(\pi)\cap\barr(A)\neq\emptyset$. Clearly, we only need to show (i). To this effect, recall that $\rho^\ast_\pi$ is lower semicontinuous and note that it is dominated by $\rho_\pi$. Indeed, for every $x\in L$ and every $m\in M$ such that $x+m\in A$ we have
\[
\sup_{\psi\in\ext(\pi)}\{\sigma_A(\psi)-\psi(x)\} \leq \sup_{\psi\in\ext(\pi)}\{\psi(x+m)-\psi(x)\} = \pi(m),
\]
showing that $\rho^\ast_\pi(x)\leq\rho_\pi(x)$. Now, take a lower semicontinuous map $f:L\to[-\infty,\infty]$ such that $f\leq\rho_\pi$. We claim that $f\leq\rho_\pi^\ast$ as well.
To show this, suppose to the contrary that $f(x)>\rho_\pi^\ast(x)$ for some $x\in L$. Note that
\[
\rho_\pi^\ast(x) = \inf\{\lambda\in\R \,: \ x+\lambda m\in\Closure(A+\ker(\pi))\}
\]
by Lemma~\ref{lem: representation augmented}, where $m\in M$ is any element satisfying $\pi(m)=1$ (which exists because $\pi$ is nonzero). As a result, we must have $f(x)>\lambda$ for some $\lambda\in\R$ such that $x+\lambda m\in\Closure(A+\ker(\pi))$. Hence, there exist two nets $(x_\alpha)\subseteq A$ and $(m_\alpha)\subseteq\ker(\pi)$ such that $x_\alpha+m_\alpha\to x+\lambda m$. Since the set $\{f>\lambda\}$ is open by lower semicontinuity and $x_\alpha+m_\alpha-\lambda m\to x\in\{f>\lambda\}$, the translativity of $\rho_\pi$ yields, for $\alpha$ large enough,
\[
\lambda < f(x_\alpha+m_\alpha-\lambda m) \leq \rho_\pi(x_\alpha+m_\alpha-\lambda m) = \rho_\pi(x_\alpha)+\lambda \leq \lambda.
\]
Since this is impossible, we infer that $f\leq\rho_\pi^\ast$ must hold, concluding the proof.
\end{proof}

\smallskip

\begin{remark}
\label{rem: strategy}
(i) The preceding proposition shows that the dual representation in Lemma~\ref{theo: dual representation 2} and, hence, the multi-utility representation in Theorem~\ref{cor: second multi utility representation} can be equivalently stated in terms of the semicontinuous hulls of the functionals $\rho_\pi$ and $u_\pi$, respectively. This should be compared with the representation in Lemma 5.1 in Hamel and Heyde~\cite{HamelHeyde2010}.

\smallskip

(ii) The preceding proposition also suggests the following alternative path to establishing Lemma~\ref{theo: dual representation 2}: (1) Start with the representation in Lemma~\ref{theo: dual representation 1}. (2) Show that the functionals $\rho_\pi$ there can be replaced by their lower semicontinuous hulls $\lsc(\rho_\pi)$. (3) Show that we can discard from the representation all the functionals $\pi\in K_M^+\setminus\{0\}$ such that $\lsc(\rho_\pi)$ is not proper or, equivalently, $\ext(\pi)\cap\barr(A)=\emptyset$.
(4) Use Proposition~\ref{prop: lsc hull} to replace the functionals $\lsc(\rho_\pi)$ with the more explicit functionals $\rho_\pi^\ast$. The advantage of the strategy pursued in the proof of Lemma~\ref{theo: dual representation 2} is that it avoids passing through semicontinuous hulls and the analysis of their properness. \end{remark} \smallskip The representing functionals belonging to the multi-utility representation in Theorem~\ref{cor: second multi utility representation} are, by definition, upper semicontinuous. As a final step, we want to find conditions ensuring a multi-utility representation consisting of continuous functionals only. To achieve this, we exploit the link between the functionals $\rho_\pi$ and their regularizations $\rho_\pi^\ast$ established in Proposition~\ref{prop: lsc hull}. \begin{lemma} \label{lem: continuous representation} Let $\pi\in K_M^+\setminus\{0\}$ be such that $\ext(\pi)\cap\barr(A)\neq\emptyset$. If $\Interior(A)\neq\emptyset$ and $\rho_\pi(x)<\infty$ for every $x\in L$, then $\rho_\pi$ is finite valued and continuous. In particular, $\rho_\pi=\rho^\ast_\pi$. \end{lemma} \begin{proof} First of all, we claim that $\rho_\pi(x)>-\infty$ for every $x\in L$. To see this, take any functional $\psi\in\ext(\pi)\cap\barr(A)$ and note that for every $x\in L$ \[ \rho_\pi(x) \geq \rho^\ast_\pi(x) \geq \sigma_A(\psi)-\psi(x) > -\infty. \] As a result, $\rho_\pi$ is finite valued. Note that, by definition, $\rho_\pi$ is bounded above on $A$ by $0$. Since $A$ has nonempty interior and $\rho_\pi$ is convex, we infer from Theorem 8 in Rockafellar~\cite{Rockafellar1974} that $\rho_\pi$ is continuous. The last statement is a direct consequence of Proposition~\ref{prop: lsc hull}. \end{proof} \smallskip The following multi-utility representation with continuous utility functionals is a direct consequence of Theorem~\ref{cor: second multi utility representation} and Lemma~\ref{lem: continuous representation}. 
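Before stating it, we note that the equality $\rho_\pi=\rho^\ast_\pi$ asserted in Lemma~\ref{lem: continuous representation} can be verified by direct computation in a minimal two-dimensional setting; the concrete choices of $A$, $K$, $M$, and $\pi$ below are ours and serve purely as an illustration.

```latex
\begin{example}
Let $L=\R^2$, $K=\R^2_+$, $A=\{x\in\R^2 \,: \ x_1+x_2\geq0\}$, and $M=\R\times\{0\}$, so that (A1)--(A3) hold and $\Interior(A)\neq\emptyset$. Consider the functional $\pi\in K_M^+\setminus\{0\}$ given by $\pi(m)=m_1$. Then
\[
\rho_\pi(x) = \inf\{\lambda\in\R \,: \ (x_1+\lambda)+x_2\geq0\} = -(x_1+x_2),
\]
which is finite valued (indeed, $L=A+M$). On the dual side, identifying $L'$ with $\R^2$, we have $\barr(A)=\{t(1,1) \,: \ t\geq0\}$ with $\sigma_A=0$ on $\barr(A)$, while $\ext(\pi)=\{(1,c) \,: \ c\in\R\}$, so that $\ext(\pi)\cap\barr(A)=\{(1,1)\}$ and
\[
\rho^\ast_\pi(x) = \sigma_A((1,1))-\langle x,(1,1)\rangle = -(x_1+x_2) = \rho_\pi(x).
\]
\end{example}
```

Note that in this example condition (i) of Proposition~\ref{prop: conditions for continuous representation} holds even though $M\cap\qint(K)=\emptyset$.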
\begin{theorem} \label{cor: third multi utility representation} Assume that $\Interior(A)\neq\emptyset$ and that $\rho_\pi(x)<\infty$ for all $\pi\in K_M^+\setminus\{0\}$ with $\ext(\pi)\cap\barr(A)\neq\emptyset$ and $x\in L$. Then, the preference $\succeq_R$ can be represented by the multi-utility family \[ \cU^{\ast\ast}=\{u_\pi \,: \ \pi\in K_M^+\setminus\{0\}, \ \ext(\pi)\cap\barr(A)\neq\emptyset\}. \] In addition, every element of $\cU^{\ast\ast}$ is finite valued and continuous. \end{theorem} \smallskip We conclude by showing a number of sufficient conditions for the finiteness assumption in Lemma~\ref{lem: continuous representation} to hold. This should be compared with the results in Section 3 in Farkas et al.~\cite{FarkasKochMunari2015}. The recession cone of $A$ is denoted by \[ \rec(A) := \{x\in L \,: \ x+y\in A, \ \forall y\in A\}. \] Note that $\rec(A)$ is the largest convex cone such that $A+\rec(A)\subseteq A$. In particular, if $A$ is a cone, then $\rec(A)=A$. Moreover, for any convex cone $C\subseteq L$ we denote by \[ \qint(C) := \{x\in C \,: \ \psi(x)>0, \ \forall \psi\in C^+\setminus\{0\}\} \] the quasi interior of $C$. Note that we always have $\Interior(C)\subseteq\qint(C)$. \begin{proposition} \label{prop: conditions for continuous representation} Let $\pi\in K_M^+\setminus\{0\}$ satisfy $\ext(\pi)\cap\barr(A)\neq\emptyset$. Then, $\rho_\pi(x)<\infty$ for every $x\in L$ if any of the following conditions holds: \begin{enumerate}[(i)] \item $L=A+M$. \item $M\cap\qint(K)\neq\emptyset$. \item $M\cap\qint(\rec(A))\neq\emptyset$. \end{enumerate} \end{proposition} \begin{proof} The desired assertion clearly holds under (i). Since $K\subseteq\rec(A)$ by assumption (A1), we see that $\qint(K)\subseteq\qint(\rec(A))$. Hence, it suffices to establish that (iii) implies the desired assertion. So, assume that (iii) holds and take $m\in M\cap\qint(\rec(A))$. If $\rho_\pi(x)=\infty$ for some $x\in L$, then we must have $(x+M)\cap A=\emptyset$. 
It follows from a standard separation result (see e.g.\ Theorem 1.1.3 in Z\u{a}linescu~\cite{Zalinescu2002}) that we find a nonzero functional $\psi\in L'$ satisfying $\psi(x+\lambda m) \leq \sigma_A(\psi)$ for every $\lambda\in\R$. This is only possible if $\psi(m)=0$, which cannot hold because $m\in\qint(\rec(A))$ while $\psi$ is a nonzero element of $\barr(A)\subseteq(\rec(A))^+$, so that $\psi(m)>0$. As a result, we must have $\rho_\pi(x)<\infty$ for every $x\in L$.
\end{proof}

\section{Applications}
\label{sect: applications}

In this final section we specialize the general dual representation of $R$ to a number of concrete situations. The explicit formulation of the corresponding multi-utility representation can be easily derived as in Theorem~\ref{cor: second multi utility representation} and Theorem~\ref{cor: third multi utility representation}. Throughout the section we consider a probability space $(\Omega,\cF,\probp)$ and fix a dimension $d\in\N$. For every $p\in[0,\infty]$ and every Borel measurable set $S\subseteq\R^d$ we denote by $L^p(S)$ the set of all equivalence classes with respect to almost-sure equality of $d$-dimensional random vectors $X=(X_1,\dots,X_d):\Omega\to\R^d$ with $p$-integrable components (measurable components if $p=0$ and essentially bounded components if $p=\infty$) such that $\probp[X\in S]=1$. As usual, we never explicitly distinguish between an equivalence class in $L^p(S)$ and any of its representative elements. We treat $\R^d$ as a linear subspace of $L^p(\R^d)$ by identifying each vector with the corresponding constant random vector. For all vectors $a,b\in\R^d$ we set
\[
\langle a,b\rangle := \sum_{i=1}^da_ib_i.
\]
The expectation with respect to $\probp$ is simply denoted by $\E$. For every $p\in[1,\infty]$ the space $L^p(\R^d)$ can be naturally paired with $L^q(\R^d)$ for $q=\frac{p}{p-1}$ via the bilinear form
\[
(X,Z)\mapsto\E[\langle X,Z\rangle].
\]
Here, we adopt the usual conventions $\frac{1}{0}:=\infty$ and $\frac{\infty}{\infty}:=1$. Finally, for every random vector $X\in L^1(\R^d)$ we use the compact notation
\[
\E[X] := (\E[X_1],\dots,\E[X_d]).
\] \subsection{Set-valued risk measures in a multi-currency setting} We consider a financial market where $d$ different currencies are traded. Every element of $L^1(\R^d)$ is interpreted as a vector of capital positions expressed in our different currencies at some future point in time. For a pre-specified acceptance set $\cA\subseteq L^1(\R^d)$ we look for the currency portfolios that have to be set up at the initial time to ensure acceptability. \subsubsection{The static case} As a first step, we consider a one-period market with dates $0$ and $1$. In this setting, we focus on the currency portfolios that we have to build at time $0$ in order to ensure acceptability of currency positions at time $1$. This naturally leads to defining the set-valued map $R:L^1(\R^d)\rightrightarrows\R^d$ by \[ R(X) := \{m\in\R^d \,: \ X+m\in\cA\}. \] \smallskip \begin{assumption} In this subsection we work under the following assumptions: \begin{enumerate} \item[(1)] $\cA$ is norm closed, convex, and satisfies $\cA+L^1(\R^d_+)\subseteq\cA$. \item[(2)] $R(X)\notin\{\emptyset,\R^d\}$ for some $X\in L^1(\R^d)$. \end{enumerate} \end{assumption} \smallskip We derive the following representation by applying our general results to \[ (L,L',K,A,M) = \Big(L^1(\R^d),L^\infty(\R^d),L^1(\R^d_+),\cA,\R^d\Big). \] This result should be compared with the dual representation established in Jouini et al.~\cite{JouiniMeddebTouzi2004}, Kulikov~\cite{Kulikov2008}, and Hamel and Heyde~\cite{HamelHeyde2010}. \begin{proposition} \label{prop: application 1} For every $X\in L^1(\R^d)$ the set $R(X)$ can be represented as \[ R(X) = \bigcap_{w\in\R^d_+\setminus\{0\}}\bigcap_{Z\in L^{\infty}(\R^d_+),\,\E[Z]=w}\{m\in\R^d \,: \ \langle m,w\rangle\geq\sigma_\cA(Z)-\E[\langle X,Z\rangle]\}. \] In addition, if $\cA$ is a cone, then we can simplify the above representation using that \[ \sigma_\cA(Z)= \begin{cases} 0 & Z\in\cA^+,\\ -\infty & \mbox{otherwise}. 
\end{cases} \] \end{proposition} \begin{proof} Note that $K_M^+$ can be identified with $\R^d_+$ and that $\barr(\cA)$ is contained in $L^\infty(\R^d_+)$ by assumption (1). Since, for all $w\in\R^d$ and $Z\in L^\infty(\R^d)$, the random vector $Z$ (viewed as a functional on $L^1(\R^d)$) is an extension of $w$ (viewed as a functional on $\R^d$) precisely when $\E[Z]=w$, the desired representation follows immediately from Lemma~\ref{theo: dual representation 2}. \end{proof} \smallskip \begin{remark} Note that, in the above framework, the set $\qint(K)$ consists of all the random vectors in $L^1(\R^d)$ with components that are strictly positive almost surely and, hence, $M\cap\qint(K)\neq\emptyset$. This can be used to ensure multi-utility representations with continuous representing functionals; see Proposition~\ref{prop: conditions for continuous representation}. \end{remark} \smallskip \begin{example}[{\bf Multidimensional Expected Shortfall}] For every $X\in L^1(\R)$ and every $\alpha\in(0,1)$ we denote by $\ES_\alpha(X)$ the Expected Shortfall of $X$ at level $\alpha$, i.e. \[ \ES_\alpha(X) := -\frac{1}{\alpha}\int_0^\alpha q_X(\beta)d\beta, \] where $q_X$ is any quantile function of $X$. The multi-dimensional acceptance set based on Expected Shortfall introduced in Hamel et al.~\cite{HamelRudloffYankova2013} is given by \[ \cA = \{X\in L^1(\R^d) \,: \ \ES_{\alpha_i}(X_i)\leq0, \ \forall i\in\{1,\dots,d\}\} \] for a fixed $\alpha=(\alpha_1,\dots,\alpha_d)\in(0,1)^d$. Note that assumptions (1) and (2) hold. In particular, we have $R(0)=\R^d_+$. In addition, $\cA$ is a cone. Note that \[ \cZ^\infty_w(\alpha) := \{Z\in L^\infty(\R^d) \,: \ \E[Z]=w\}\cap\cA^+ = \left\{Z\in L^\infty(\R^d_+) \,: \ \E[Z]=w, \ Z\leq\frac{w}{\alpha}\right\} \] for every $w\in\R^d_+$ (where $\frac{w}{\alpha}$ is understood component by component). This follows from the standard dual representation of Expected Shortfall; see Theorem 4.52 in F\"{o}llmer and Schied~\cite{FoellmerSchied2016}. 
As a result, the dual representation in Proposition~\ref{prop: application 1} reads \[ R(X) = \bigcap_{w\in\R^d_+\setminus\{0\}}\bigcap_{Z\in\cZ^\infty_w(\alpha)}\{m\in\R^d \,: \ \langle m,w\rangle\geq-\E[\langle X,Z\rangle]\} \] for every random vector $X\in L^1(\R^d)$. \end{example} \subsubsection{The dynamic case} As a next step, we consider a multi-period financial market with dates $t=0,\dots,T$ and information structure represented by a filtration $(\cF_t)$ satisfying $\cF_0=\{\emptyset,\Omega\}$ and $\cF_T=\cF$. In this setting, currency portfolios can be rebalanced through time. A (random) portfolio at time $t\in\{0,\dots,T\}$ is represented by an $\cF_t$-measurable random vector in $L^0(\R^d)$. We denote by $\cC_t$ the set of $\cF_t$-measurable portfolios that can be converted into portfolios with nonnegative components by trading at time $t$. This means that, for all $\cF_t$-measurable portfolios $m_t$ and $n_t$, we can exchange $m_t$ for $n_t$ at time $t$ provided that $m_t-n_t \in \cC_t$. The sets $\cC_t$ are meant to capture potential transaction costs. A flow of portfolios is represented by an adapted process $(m_t)$. More precisely, for every date $t\in\{0,\dots,T-1\}$, the portfolio $m_t$ is set up at time $t$ and held until time $t+1$. The portfolio flows belonging to the set \[ \cC := \{(m_t) \,: \ m_t-m_{t+1} \in \cC_{t+1}, \ \forall t\in\{0,\dots,T-1\}\} \] are said to be admissible. The admissibility condition is a direct extension of the standard self-financing property in frictionless markets. \medskip We look for all the initial portfolios that can be rebalanced in an admissible way until the terminal date in order to ensure acceptability. This leads to the set-valued map $R:L^1(\R^d)\rightrightarrows\R^d$ defined by \begin{align*} R(X) &:= \{m\in\R^d \,: \ \exists (m_t)\in\cC, \ n_T\in L^0(\R^d) \,:\, \\ &\phantom{a}\hspace{2cm} m-m_0\in\cC_0, \ m_T-n_T\in\cC_T, \ X+n_T\in\cA\}. 
\end{align*} In words, the above set consists of all the initial portfolios that give rise, after a convenient exchange at date $0$, to an admissible rebalancing process making the outstanding currency position acceptable after a final portfolio adjustment at time $T$. This setting can be embedded in our framework because we can equivalently write \[ R(X) = \bigg\{m\in\R^d \,: \ X+m\in\cA+\sum_{t=0}^{T}\cC_t\bigg\}. \] \smallskip \begin{assumption} In this subsection we work under the following assumptions: \begin{enumerate} \item[(1)] $\cA$ is norm closed, convex, and satisfies $\cA+L^1(\R^d_+)\subseteq\cA$. \item[(2)] $\cC_t$ is convex and contains $L^0_t(\R^d_+)$ for every $t\in\{0,\dots,T\}$. \item[(3)] $(\cA+\sum_{t=0}^{T}\cC_t)\cap L^1(\R^d)$ is norm closed. \item[(4)] $R(X)\notin\{\emptyset,\R^d\}$ for some $X\in L^1(\R^d)$. \end{enumerate} \end{assumption} \smallskip We derive the following representation by applying our general results to \[ (L,L',K,A,M) = \left(L^1(\R^d),L^\infty(\R^d),L^1(\R^d_+),\bigg(\cA+\sum_{t=0}^{T}\cC_t\bigg)\cap L^1(\R^d),\R^d\right). \] For convenience, we also set \[ \cC^1_{1:T} := \bigg(\sum_{t=1}^{T}\cC_t\bigg)\cap L^1(\R^d). \] For later use note that \[ \barr(\cC^1_{1:T}) \subseteq \barr\left(\sum_{t=1}^T\Big(\cC_t\cap L^1(\R^d)\Big)\right) = \bigcap_{t=1}^T\barr\Big(\cC_t\cap L^1(\R^d)\Big). \] The next result should be compared with the dual representation established in Hamel et al.~\cite{HamelHeydeRudloff2011} in the special setting of Example~\ref{ex: superreplication}. \begin{proposition} \label{prop: application 2} For every $X\in L^1(\R^d)$ the set $R(X)$ can be represented as \[ R(X) = \bigcap_{w\in\R^d_+\setminus\{0\}}\bigcap_{Z\in L^\infty(\R^d_+),\,\E[Z]=w}\{m\in\R^d \,: \ \langle m,w\rangle\geq\sigma_{\cA,\cC}(Z)-\E[\langle X,Z\rangle]\}. 
\] Here we have set, for every $Z\in L^\infty(\R^d)$, \[ \sigma_{\cA,\cC}(Z) := \sigma_\cA(Z)+\sigma_{\cC_0}(Z)+\sigma_{\cC^1_{1:T}}(Z). \] In addition, if $\cA$ is a cone, the above representation can be simplified by using that \[ \sigma_\cA(Z)= \begin{cases} 0 & Z\in\cA^+,\\ -\infty & \mbox{otherwise}. \end{cases} \] Moreover, if $\cC_0$ is a cone, then \[ \sigma_{\cC_0}(Z)= \begin{cases} 0 & Z\in L^\infty(\R^d_+), \ \E[Z]\in\cC_0^+,\\ -\infty & \mbox{otherwise}. \end{cases} \] Similarly, if $\cC_t$ is a cone for every $t\in\{1,\dots,T\}$, then \[ \sigma_{\cC^1_{1:T}}(Z)= \begin{cases} 0 & Z\in(\cC^1_{1:T})^+\subseteq\bigcap_{t=1}^T(\cC_t\cap L^1(\R^d))^+,\\ -\infty & \mbox{otherwise}. \end{cases} \] \end{proposition} \begin{proof} The assertion follows from Proposition~\ref{prop: application 1} because $\cC_0\subset\R^d$ implies \[ \bigg(\cA+\sum_{t=0}^{T}\cC_t\bigg)\cap L^1(\R^d) = \cA+\cC_0+\cC_{1:T}^1 \] and because support functions are additive over Minkowski sums, so that $\sigma_{\cA+\cC_0+\cC_{1:T}^1}=\sigma_\cA+\sigma_{\cC_0}+\sigma_{\cC^1_{1:T}}$. \end{proof} \smallskip \begin{remark} Note that, as in the static case, we have $M\cap\qint(K)\neq\emptyset$. This can be used to ensure multi-utility representations with continuous representing functionals; see Proposition~\ref{prop: conditions for continuous representation}. \end{remark} \smallskip \begin{example}[{\bf Superreplication under proportional transaction costs}] \label{ex: superreplication} We adopt the discrete version of the model by Kabanov~\cite{Kabanov1999}. For every $t\in\{0,\dots,T\}$ we say that a set-valued map $S:\Omega\rightrightarrows\R^d$ is $\cF_t$-measurable provided that \[ \{\omega\in\Omega \,: \ S(\omega)\cap\cU\neq\emptyset\}\in\cF_t \] for every open set $\cU\subset\R^d$. In this case, we denote by $L^0(S)$ the set of all random vectors $X\in L^0(\R^d)$ such that $\probp[X\in S]=1$. This set is always nonempty if $S$ has closed values; see Corollary 14.6 in Rockafellar and Wets~\cite{RockafellarWets2009}. 
Now, let $K_t:\Omega\rightrightarrows\R^d$ be an $\cF_t$-measurable set-valued map such that $K_t(\omega)$ is a polyhedral convex cone (hence $K_t(\omega)$ is closed) containing $\R^d_+$ for every $\omega\in\Omega$ and set \[ \cC_t = L^0(K_t). \] Moreover, we consider the worst-case acceptance set \[ \cA = L^1(\R^d_+). \] Assumptions (1) and (2) are easily seen to be satisfied. Moreover, $\cA$ as well as each of the sets $\cC_t$ is a cone. As proved in Theorem 2.1 in Schachermayer~\cite{Schachermayer2004}, assumption (3) always holds under the so-called ``robust no-arbitrage'' condition. Finally, as $0\in R(0)$, assumption (4) holds if and only if $\R^d$ is not entirely contained in $\sum_{t=0}^T\cC_t$. Note also that $\cA^+=L^\infty(\R^d_+)$. As a result, Proposition~\ref{prop: application 2} yields \[ R(X) = \bigcap_{w\in K_0^+\setminus\{0\}}\bigcap_{Z\in L^\infty(\R^d_+),\ Z\in(\cC_{1:T}^1)^+,\,\E[Z]=w}\{m\in\R^d \,: \ \langle m,w\rangle\geq-\E[\langle X,Z\rangle]\} \] for every $X\in L^1(\R^d)$. The dual elements $Z$ in the above representation can be linked to consistent pricing systems, see e.g.\ Schachermayer~\cite{Schachermayer2004}. To see this, note that, for every $t\in\{0,\dots,T\}$, the set-valued map $K^+_t:\Omega\rightrightarrows\R^d$ defined by \[ K^+_t(\omega)=\big(K_t(\omega)\big)^+ \] is $\cF_t$-measurable, see e.g.\ Exercise 14.12 in Rockafellar and Wets~\cite{RockafellarWets2009}, and such that \[ \Big(\cC_t\cap L^1(\R^d)\Big)^+=L^0(K^+_t)\cap L^\infty(\R^d) \] by measurable selection, see the argument in the proof of Theorem 1.7 in Schachermayer~\cite{Schachermayer2004}. As a result, every dual element $Z$ in the above dual representation satisfies \[ \E[Z]\in K_0^+, \ \ \ \ \ \ Z \in (\cC_{1:T}^1)^+ \subseteq \bigcap_{t=1}^T\Big(\cC_t\cap L^1(\R^d)\Big)^+ \subseteq \bigcap_{t=1}^TL^0(K^+_t). 
\] This shows that the $d$-dimensional adapted process $(\E[Z\vert\cF_t])$, where the conditional expectations are taken componentwise, satisfies $\E[Z\vert\cF_T]=Z$ and $\E[Z\vert\cF_t] \in L^0(K^+_t)$ for every $t\in\{0,\dots,T\}$ and thus qualifies as a consistent pricing system. In other words, the above dual elements $Z$ can be viewed as the terminal values of consistent pricing systems. \end{example} \smallskip \begin{remark} (i) It is worth noting that our approach provides a different path, compared to the strategy pursued in Schachermayer~\cite{Schachermayer2004}, to establish the existence of consistent pricing systems under the robust no-arbitrage assumption (assuming the closedness of the reference target set). Moreover, by rewriting the above dual representation in terms of consistent pricing systems, we recover the (localization to $L^1(\R^d)$ of the) superreplication theorem by Schachermayer~\cite{Schachermayer2004}. \smallskip (ii) The above dual representation was also obtained in Hamel et al.~\cite{HamelHeydeRudloff2011}. Unlike that paper, we have derived it not from the superreplication theorem in Schachermayer~\cite{Schachermayer2004} but from a direct application of our general results. \end{remark} \subsection{Systemic set-valued risk measures based on acceptance sets} We consider a single-period economy with dates $0$ and $1$ and a financial system consisting of $d$ entities. Every element of $L^\infty(\R^d)$ is interpreted as a vector of capital positions of the various financial entities at time $1$. The individual positions can be aggregated through a function $\Lambda:\R^d\to\R$. For a pre-specified acceptance set $\cA\subseteq L^\infty(\R)$ we look for the cash injections at time $0$ that ensure the acceptability of the aggregated system. This leads to the set-valued map $R:L^\infty(\R^d)\rightrightarrows\R^d$ defined by \[ R(X) := \{m\in\R^d \,: \ \Lambda(X+m)\in\cA\}. 
\] This setting can be easily embedded in our framework because we can write \[ R(X) = \{m\in\R^d \,: \ X+m\in\Lambda^{-1}(\cA)\}. \] \smallskip \begin{assumption} In this subsection we work under the following assumptions: \begin{enumerate} \item[(1)] $\cA$ is convex and satisfies $\cA+L^\infty(\R_+)\subseteq\cA$. \item[(2)] $\Lambda$ is nondecreasing and concave. \item[(3)] $\Lambda^{-1}(\cA)$ is $\sigma(L^\infty(\R^d),L^1(\R^d))$-closed. \item[(4)] $R(X)\notin\{\emptyset,\R^d\}$ for some $X\in L^\infty(\R^d)$. \end{enumerate} \end{assumption} \smallskip We derive the following representation by applying our general results to \[ (L,L',K,A,M) = \Big(L^\infty(\R^d),L^1(\R^d),L^\infty(\R^d_+),\Lambda^{-1}(\cA),\R^d\Big). \] The result should be compared with the dual representations established in Ararat and Rudloff~\cite{AraratRudloff}. \begin{proposition} \label{prop: systemic} For every $X\in L^\infty(\R^d)$ the set $R(X)$ can be represented as \[ R(X) = \bigcap_{w\in\R^d_+\setminus\{0\}}\bigcap_{Z\in L^1(\R^d_+),\,\E[Z]=w}\{m\in\R^d \,: \ \langle m,w\rangle\geq\sigma_{\Lambda^{-1}(\cA)}(Z)-\E[\langle X,Z\rangle]\}. \] If $\Lambda^{-1}(\cA)$ is a cone, then we can simplify the above representation using that \[ \sigma_{\Lambda^{-1}(\cA)}(Z)= \begin{cases} 0 & Z\in(\Lambda^{-1}(\cA))^+,\\ -\infty & \mbox{otherwise}. \end{cases} \] \end{proposition} \begin{proof} Note that $\Lambda^{-1}(\cA)$ is convex and satisfies $\Lambda^{-1}(\cA)+L^\infty(\R^d_+)\subseteq\Lambda^{-1}(\cA)$ by assumptions (1) and (2). Note also that $K_M^+$ can be identified with $\R^d_+$. In addition, we have $\barr(\Lambda^{-1}(\cA))\subseteq L^1(\R^d_+)$. Since, for all $w\in\R^d$ and $Z\in L^1(\R^d)$, the random vector $Z$ (viewed as a functional on $L^\infty(\R^d)$) is an extension of $w$ (viewed as a functional on $\R^d$) precisely when $\E[Z]=w$, the desired representation follows from Lemma~\ref{theo: dual representation 2}. 
\end{proof} \smallskip \begin{example}[{\bf Weighted aggregated losses}] Let $\alpha=(\alpha_1,\dots,\alpha_d)\in(1,\infty)^d$ and consider the aggregation function $\Lambda:\R^d\to\R$ defined by \[ \Lambda(x) = \sum_{i=1}^d\max(x_i,0)+\sum_{i=1}^d\alpha_i\min(x_i,0). \] Moreover, define the acceptance set $\cA\subset L^\infty(\R)$ by \[ \cA = \{X\in L^\infty(\R) \,: \ \E[X]\geq0\}. \] Clearly, assumptions (1) and (2) hold. Since $\cA$ is closed with respect to dominated almost-sure convergence and $\Lambda$ is continuous, we easily see that $\Lambda^{-1}(\cA)$ is also closed with respect to dominated almost-sure convergence. Hence, we can repeat the argument in Theorem 3.2 by Delbaen~\cite{Delbaen2002} to conclude that (3) holds; see also Proposition 3.2 in Arduca et al.~\cite{ArducaKochMunari2019}. Finally, note that $R(0)=\{m\in\R^d \,: \ \Lambda(m)\geq0\}$, showing that (4) holds. Now, observe that $\barr(\cA)=\R_+$. The concave conjugate of $\Lambda$ is the map $\Lambda^\bullet:\R^d\to[-\infty,\infty)$ defined by \[ \Lambda^\bullet(z) := \inf_{x\in\R^d}\{\langle x,z\rangle-\Lambda(x)\}. \] It is easy to verify that \[ \Lambda^\bullet(z) = \begin{cases} 0 & \mbox{if} \ z\in[1,\alpha_1]\times\cdots\times[1,\alpha_d],\\ -\infty & \mbox{otherwise}. \end{cases} \] Then, it follows from Proposition 3.13 and Lemma 3.20 in Arduca et al.~\cite{ArducaKochMunari2019} that \[ \sigma_{\Lambda^{-1}(\cA)}(Z)= \begin{cases} 0 & \mbox{if there exists $\lambda\in[0,\infty)$ such that $\lambda e\leq Z\leq\lambda\alpha$},\\ -\infty & \mbox{otherwise}, \end{cases} \] where $e=(1,\dots,1)\in\R^d$. Now, for notational convenience define \[ \cZ^\infty(\alpha) := \{Z\in L^\infty(\R^d_+) \,: \ \exists \lambda\in[0,\infty) \,:\, \lambda e\leq Z\leq\lambda\alpha\}. 
\] As a result, Proposition~\ref{prop: systemic} yields \[ R(X) = \bigcap_{w\in\R^d_+\setminus\{0\}}\bigcap_{Z\in\cZ^\infty(\alpha),\,\E[Z]=w}\{m\in\R^d \,: \ \langle m,w\rangle\geq-\E[\langle X,Z\rangle]\} \] for every random vector $X\in L^\infty(\R^d)$. \end{example} \begin{comment} \section{Conclusions} \label{sect: conclusions} In this note we established a variety of representations for set-valued risk measures and showed how to use them in order to derive suitable numerical representations for the induced incomplete preferences. Special attention was devoted to obtaining numerical representations expressed in terms of (semi)continuous utility functionals. The most appealing feature of such representing functionals is that they coincide, up to a sign, with scalar risk measures based on acceptance sets, which have been the subject of an intense research activity in the past years. This demonstrates that, under mild assumptions, a set-valued risk measure can be completely characterized by way of familiar scalar risk measures. In the last section, we illustrated our general results in the context of multi-currency markets and systemic risk. A natural direction of future research is the study of (set-valued) optimization problems, e.g.\ portfolio selection or capital allocation, where the incomplete preference induced by a set-valued risk measure appears as a constraint. This is in the spirit of the applications discussed in the recent work by Crespi et al.~\cite{CrespiHamelRoccaSchrage2018}. Another interesting direction is to clarify the connection between the incomplete preferences studied in this note and the notion of time consistency for dynamic set-valued risk measures studied, e.g., in Feinstein and Rudloff~\cite{FeinsteinRudloff2013}, which can be viewed as a form of persistence of the preference relation through time. This link was kindly pointed out by a referee during the review process. \end{comment}
Electric Vehicles Pay Their Fair Share March 10, 2017 | Dane McFarlane | Education Recent tax policy analysis by the Great Plains Institute for Drive Electric Minnesota found that electric cars pay just as much in taxes as comparable gasoline vehicles, and often more. Like most other states, Minnesota uses a tax on the sale of gasoline and other motor fuel to pay for transportation infrastructure like highways and bridges, among other uses. This excise tax is paid by the consumer at the pump at a rate of 28.5 cents per gallon ($0.285 / gal), which means that the more one drives, the more taxes one will pay through the consumption of additional fuel. Since more driving causes more wear and damage to public driving infrastructure, this makes sense. This also means, however, that plug-in electric vehicles (EVs) do not pay any motor fuel excise tax. Recent discussions surrounding 2017 Minnesota legislative policy have called into question whether electric vehicles are paying their fair share for driving on the same roads and bridges as gasoline and diesel vehicles. So, are they? Recent analysis by Drive Electric Minnesota looked at the combination of taxes paid by all vehicles and found that EV owners usually pay at least as much in state vehicle taxes as their fossil fuel counterparts, and often more. There are three primary taxes paid by vehicle owners in Minnesota: the Motor Vehicle Sales Tax (MVST), annual motor vehicle registration taxes, and the motor fuel excise tax (gas tax). These three taxes are constitutionally dedicated to be "used solely for highway purposes". Although the motor fuel tax is the largest funding source of the three, it represents only 45% of the total funding. Two of these taxes, the MVST and registration taxes, are based on the retail value of the vehicle, which is often much higher for EVs than for conventional automobiles. 
Taxes paid by vehicle owners in Minnesota In an analysis of typical 2017 models for both gasoline and electric cars, Drive Electric MN found that the total amount paid by EV owners through the MVST and annual registration fees more than makes up for any loss of government revenue from the lack of gasoline fill-ups. At a rate of 6.5% of sales price for the MVST and 1.25% of vehicle value for the registration tax, models with higher sticker prices will pay more taxes. The annual registration tax applies a 10% discount to the vehicle value each year, until finally in year 11 all vehicles (both gasoline and electric) pay a flat $35 fee. Gasoline cars that drive 10,000 miles per year at 30 miles per gallon will pay about $95 in gas taxes each year. Sales and registration taxes paid in first year of ownership As shown in the table above, a typical electric car is significantly more expensive than a comparable conventional automobile. A base model Nissan Leaf S costs $30,680 compared to the Nissan Versa at $11,990, while a base model Chevy Bolt costs $37,495 compared to the Chevy Malibu at $21,680. So, while the conventional cars pay about $80-100 in gas taxes each year, EVs generally pay much more based on their sales price and vehicle value. Even though EV buyers can benefit from a federal tax credit, and some EVs come with manufacturer incentives, taxes are still calculated based on the retail price and do not reflect those cost reductions. Cumulative taxes paid by typical 2017 EVs and gasoline automobiles Some groups and legislators have suggested an additional "EV Fee" that EV owners should pay to make up for any loss of revenue from not paying the gasoline tax. However, Drive Electric MN's analysis shows that EVs already pay their fair share through the Motor Vehicle Sales Tax and annual registration fees. 
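The tax schedule described above can be put into a short calculation. The sketch below is illustrative only: it assumes that the first year's registration tax is charged on the full retail price and that the 10% value discount compounds annually, details the article does not fully spell out.

```python
# Minnesota vehicle tax parameters cited in the article (2017).
GAS_TAX_PER_GAL = 0.285      # motor fuel excise tax, dollars per gallon
MVST_RATE = 0.065            # Motor Vehicle Sales Tax, 6.5% of sale price
REG_RATE = 0.0125            # annual registration tax, 1.25% of vehicle value
FLAT_REG_FEE = 35.0          # flat annual fee from year 11 onward
ANNUAL_DISCOUNT = 0.10       # 10% discount applied to vehicle value each year

def annual_gas_tax(miles_per_year, mpg):
    """Gas tax paid in one year; an EV (mpg=None) pays none."""
    if mpg is None:
        return 0.0
    return GAS_TAX_PER_GAL * miles_per_year / mpg

def cumulative_taxes(msrp, mpg=None, years=15, miles_per_year=10_000):
    """Cumulative MN vehicle taxes paid through each year of ownership."""
    running = MVST_RATE * msrp   # one-time sales tax at purchase
    value = msrp
    totals = []
    for year in range(1, years + 1):
        reg = FLAT_REG_FEE if year >= 11 else REG_RATE * value
        running += reg + annual_gas_tax(miles_per_year, mpg)
        value *= 1.0 - ANNUAL_DISCOUNT   # assumed depreciation schedule
        totals.append(round(running, 2))
    return totals

versa = cumulative_taxes(11_990, mpg=30)   # gasoline Nissan Versa
leaf = cumulative_taxes(30_680)            # electric Nissan Leaf S
```

With these inputs the gasoline Versa accrues about $95 per year in gas tax, yet the Leaf's larger sales and registration taxes keep its cumulative total higher throughout, even after year 11 when the conventional car starts paying more per year.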
Drive Electric MN took a look at what a theoretical EV Fee would look like if it were calculated as the difference between taxes paid by an EV and a conventional automobile. For the first 10 years, the state would actually need to pay EV owners a rebate based on the additional sales and registration taxes they are required to pay. Total Minnesota taxes paid by a 2017 Nissan Versa and a Nissan Leaf S As shown in the chart above, it is only after 10 years, when both EVs and gasoline cars pay a flat $35 registration fee each year, that owners of conventional automobiles pay more due to the gasoline tax. And even then, the cumulative taxes paid by an EV owner are usually greater. EVs are a new technology, and their retail prices are higher than those of comparable conventional vehicles because automotive lithium batteries are still moving down a technology learning curve. This is why the average EV costs $36,644 while the average compact car costs only $20,237, and the average midsize car costs $25,095 (Kelley Blue Book, as of March 2016). Over time, the cost of EVs will decline as EV battery costs decline. Bloomberg New Energy Finance and McKinsey both estimate that EVs will achieve cost parity with conventional vehicles (without tax credits) by 2025-2030. When that happens, EVs will no longer pay higher than average sales and registration taxes. Right now, however, EVs are already paying their fair share. The Great Plains Institute (GPI) leads Drive Electric Minnesota and is a non-partisan non-profit organization that uses consensus-based strategies to discover and implement politically durable solutions to society's most pressing energy and climate challenges. Learn more at www.betterenergy.org.
package models import ( "github.com/coraldane/ops-meta/db" "github.com/coraldane/ops-meta/utils" "github.com/toolkits/logger" ) type User struct { Id int64 UserName string `form:"userName"` LoginPwd string `form:"loginPwd"` RealName string `form:"realName"` PhoneNo string `form:"phoneNo"` Email string `form:"email"` RoleName string `form:"roleName" orm:"default(NORMAL)"` AccountStatus int8 `orm:"default(1)"` } func (u *User) TableUnique() [][]string { return [][]string{ []string{"UserName"}, } } func (this *User) Insert() (int64, error) { this.LoginPwd = utils.Md5Hex(this.LoginPwd) this.RoleName = "NORMAL" this.AccountStatus = 1 return db.NewOrm().Insert(this) } func (this *User) CheckExists() bool { var rowCount int strSql := `select count(*) from t_user where user_name=? ` db.NewOrm().Raw(strSql, this.UserName).QueryRow(&rowCount) return rowCount > 0 } func (this *User) Update() (int64, error) { strSql := `update t_user set real_name=?, phone_no=?, email=?, role_name=? where id=?` result, err := db.NewOrm().Raw(strSql, this.RealName, this.PhoneNo, this.Email, this.RoleName, this.Id).Exec() if nil != err { logger.Errorln("update error", err) return 0, err } return result.RowsAffected() } func (this *User) ChangeLoginPasswd() (int64, error) { result, err := db.NewOrm().Raw(`update t_user set login_pwd=? where user_name=?`, utils.Md5Hex(this.LoginPwd), this.UserName).Exec() if nil != err { return 0, err } return result.RowsAffected() } func CheckLogin(userName, loginPwd string) (*User, error) { var u User strSql := `select id, user_name, real_name, phone_no, email, role_name, account_status from t_user where user_name=? and login_pwd=? 
and account_status=1` err := db.NewOrm().Raw(strSql, userName, utils.Md5Hex(loginPwd)).QueryRow(&u) return &u, err } func GetUserById(userId int64) *User { var u User strSql := `select id, user_name, real_name, phone_no, email, role_name, account_status from t_user where id=?` err := db.NewOrm().Raw(strSql, userId).QueryRow(&u) if nil != err { logger.Errorln("query error", err) } return &u } func QueryUserList(queryDto QueryUserDto, pageInfo *PageInfo) ([]User, *PageInfo) { var rows []User query := db.NewOrm().QueryTable(User{}) if "" != queryDto.UserName { query = query.Filter("user_name__contains", queryDto.UserName) } if "" != queryDto.RealName { query = query.Filter("RealName", queryDto.RealName) } if "" != queryDto.RoleName { query = query.Filter("RoleName", queryDto.RoleName) } rowCount, err := query.Count() if nil != err { logger.Errorln("queryCount error", err) pageInfo.SetRowCount(0) return nil, pageInfo } pageInfo.SetRowCount(rowCount) _, err = query.OrderBy("Id").Offset(pageInfo.GetStartIndex()).Limit(pageInfo.PageSize).All(&rows, "UserName", "RealName", "PhoneNo", "Email", "RoleName", "AccountStatus") if nil != err { logger.Errorln("QueryUserList error", err) } return rows, pageInfo }
\section{Introduction} Most medical treatments are designed for ``average patients''. Due to the patients' heterogeneity, ``one size fits all'' medical treatment strategies can be very effective for some patients but not for others. For example, a study of colon cancer \cite{tan2012kras} found that patients with a surface protein called KRAS are more likely to respond to certain antibody treatments than those without the protein. Thus exploration of precision medicine has recently gained significant attention in scientific research. Precision medicine is a medical model that provides tailored health care for each specific patient, which has already demonstrated its success in saving lives \cite{bissonnette2012infectious, kummar2015application}. One of the main goals in precision medicine, from the data analytic perspective, is to estimate the optimal individualized decision rules (IDRs) that can improve the outcome of each individual. \subsection{Estimating optimal IDRs: the expected-outcome approach} \label{sec: Existing Literature in Estimating Optimal IDRs} An IDR is a decision rule that recommends treatments/actions to patients based on the information of their covariates. Consider the data collected from a single-stage randomized clinical trial involving different treatments. Before the trial, a patient's information $X$, such as blood pressure and past medical history, is recorded. The enrolled patient will be randomly assigned to take a treatment denoted by $A$. After the patient receives the treatment/action, the outcome ${\cal Z}$ of the patient can be observed. Without loss of generality, we may assume that a larger ${\cal Z}$ indicates a better patient condition. 
Let ${\rm I}\!{\rm P}$ be the probability distribution of the triplet $Y$ of random variables {\small$(X, A, {\cal Z})$} and let ${\rm I}\!{\rm E}$ be the associated expectation operator, where $X$ is a random vector defined on the covariate space ${\cal X}\subseteq \mathbb{R}^p$, $A$ is a random variable defined on the finite treatment set ${\cal A}$ and ${\cal Z} $ is a scalar random variable representing the outcome. The likelihood of $(X, A, {\cal Z})$ under ${\rm I}\!{\rm P}$ is defined as $f_0(x)\,\pi(a \,|\, x)\,f_1(z \,|\, x, a)$, where $f_0(x)$ is the probability density of $X$, $\pi(a \,|\, x)$ is the probability of patients being assigned treatment $a$ given $X = x$ and $f_1(z \,|\, x, a)$ is the conditional probability density of ${\cal Z}$ given covariates $X = x$ and treatment $A = a$. For the clinical trial study, the value of $\pi(a \,|\, x)$ is known; for the observational study, this value can be estimated via various methods such as multinomial logistic regression. An IDR $d$ is defined as a mapping from the covariate space ${\cal X}$ into the action space ${\cal A}$. We let ${\cal D}$ be the class of all measurable functions mapping from ${\cal X}$ into ${\cal A}$; that is, ${\cal D}$ is the class of all measurable IDRs. For any IDR $d \in {\cal D}$, define ${\rm I}\!{\rm P}^{\,d}$ to be the probability distribution under which treatment $A$ is decided by $d$. Then the corresponding likelihood function under ${\rm I}\!{\rm P}^{\,d}$ is $f_0(x)\,{\rm I\!I}(a = d(x))\,f_1(z\,|\, x, a)$, where the indicator function ${\rm I\!I}(a = d(x))$ equals $1$ if $a = d(x)$ and $0$ otherwise. Note that this is a discontinuous step function. It is known that if $\pi(a \,|\, X) \geq a_0 > 0$ almost surely (a.s.) 
for any $a \in {\cal A}$ and some constant $a_0$, then ${\rm I}\!{\rm P}^{\,d}$ is absolutely continuous with respect to ${\rm I}\!{\rm P}$~\cite{qian2011performance}. Thus by the Radon-Nikodym theorem, \begin{equation}\label{value fun} {\rm I\!E}^{\,d\,}[\,{\cal Z} \,] \, =\, {\rm I\!E} \left[\,{\cal Z}\, \frac{\text{d} {\rm I}\!{\rm P}^{\,d\,}}{\text{d} {\rm I}\!{\rm P}}\,\right] \, = \, {\rm I\!E}\left[\,\frac{{\cal Z} \, {\rm I\!I}(A = d(X))}{\pi(A | X)}\,\right]. \end{equation} In particular, ${\rm I\!E}^{\,d\,}[\,c(X) \,] = {\rm I\!E}[\,c(X) \,] $ for any integrable function $c$ of the covariate $X$ \cite{qian2011performance}. Given the triplet $(X, A, {\cal Z})$, an optimal IDR under the expected-value function framework is defined as \[ d_0 \, \in \, \underset{d \in {\cal D}}{\text{argmax}} \ {\rm I\!E}^{\,d\,}[\,{\cal Z} \,]. \] This expected-value maximization approach has been the standard framework for estimating an optimal IDR to date. Existing methods under this approach fall roughly into two main types: model-based and classification-based methods. One of the representative methods for the former approach is Q-learning, which models the conditional mean of the outcome ${\cal Z}$ given $X$ and $A$. The recommended treatment is then the one that yields the largest conditional mean of the outcome \cite{watkins1992q,murphy2005generalization,qian2011performance,schulte2014q}. Alternatively, the classification-based method, which was first proposed in \cite{zhao2012estimating}, transforms the problem of maximizing ${\rm I\!E}^{\,d\,}[\,{\cal Z} \,]$ into minimizing a weighted 0--1 loss. Based on this transformation, various classification methods can be used to estimate the optimal IDR \cite{laber2015tree,liu2016robust,zhou2017residual}. Maximizing only the average outcome under an IDR $d$ may be restrictive in precision medicine. 
For example, when evaluating several treatments' effects on patients, doctors may want to know which treatment best improves the outcome of a higher-risk patient. More importantly, due to the complex decision-making procedure in precision medicine, an ``optimal'' IDR that only maximizes the expected outcome of patients may lead to potentially adverse consequences for some patients. Therefore, considering individualized risk exposure is essential in precision medicine. This motivates us to examine the problem of determining optimal IDRs under a broader concept to control the individualized risk of each patient. \subsection{Optimized certainty equivalent} \label{sec: Optimized Certainty Equivalent} Estimating optimal IDRs can be regarded as an individualized decision-making problem. Utility functions have played an important role in such problems since they characterize the preference order over random variables, based on which decisions can be made. To guard against the hazard of adverse decisions, risk measures are needed to balance the sole maximization of such utilities. This bi-objective consideration is well appreciated in portfolio management, leading to many risk measures since the early days of the mean-variance approach in \cite{markowitz1952portfolio}. We refer the readers to \cite{rockafellar2013fundamental} and references therein for a contemporary perspective of diverse risk measures. Among such measures used in investment and economics, one of the most popular is the conditional-value-at-risk (CVaR) that has been extensively discussed in \cite{rockafellar2000optimization,rockafellar2002conditional}; see the recent survey in \cite{SarykalinUryasev2008survey}. 
In general, for an essentially bounded random variable ${\cal Z}$ with the property that there exists a large enough scalar $B > 0$ such that the set $\left\{ \, \omega \in \Omega \, \mid \, | \, {\cal Z}(\omega) \, | \, > \, B \right\}$ has measure zero, where $\Omega$ is the sample space on which the random variable ${\cal Z}$ is defined, the $\gamma$-CVaR of ${\cal Z}$ is by definition: \[ \text{CVaR}_{\,\gamma\,}({\cal Z}) \, \triangleq \, \displaystyle{ \sup_{\eta \in \mathbb{R}} } \, \left[ \, \eta - \frac{1}{\gamma}\, {\rm I\!E}\, (\eta -{\cal Z})_+ \, \right], \] with $\gamma \in (0, 1)$ and $t_+ \triangleq \max(t,0)$ for a scalar (or vector) $t$. The smallest maximizer of $\text{CVaR}_\gamma({\cal Z})$ is the $\gamma$-quantile of ${\cal Z}$, which is also known as the value-at-risk (VaR). It turns out that the CVaR is a special case of an {\bf Optimized Certainty Equivalent} (OCE) proposed in \cite{ben1986expected,ben1987penalty,ben2007old} that provides a link between utility and risk measures. In fact, the introduction of the OCE predates the popularity of the CVaR in portfolio management. Let ${\cal U}$ denote the family of utility functions $u : \mathbb{R} \to [ \, -\infty, \, \infty \, )$ that are upper semi-continuous, concave, and non-decreasing with a nonempty effective domain \[ \mbox{dom}(u) \, \triangleq \, \left\{ \, t \in \mathbb{R} \mid u(t) > -\infty \, \right\} \, \neq \, \emptyset \] such that $u(0) = 0$ and $1 \in \partial u(0)$, where $\partial u$ denotes the subdifferential map of $u$. Thus in particular, \[ \left[ \, u(t) \, \geq \, 0, \ \forall \, t \, \geq \, 0 \, \right] \hspace{1pc} \mbox{and} \hspace{1pc} \left[ \, u(t) \, \leq \, t, \ \forall \, t \, \in \, \mathbb{R} \, \right]. \] The OCE of an essentially bounded random variable ${\cal Z}$ is by definition: \[ {\cal O}_u({\cal Z}) \, \triangleq \, \displaystyle{ \sup_{\eta \in \mathbb{R}} } \, \left[ \, \eta + {\rm I\!E}\, u( {\cal Z} - \eta ) \, \right]. 
\] According to the above cited references, the scalar $\eta$ is interpreted as the present consumption out of the uncertain future income ${\cal Z}$. Then the sum $\eta + {\rm I\!E}\, u( {\cal Z} - \eta )$ is the utility-based present value of ${\cal Z}$. Thus the goal of the OCE is to maximize the latter value by choosing an optimal allocation of ${\cal Z}$ between present and future consumption. Of particular interest for the OCE is the case where $u(t) = \xi_1 \, \max(0,t) - \xi_2 \, \max(0,-t)$ for some constants $\xi_1$ and $\xi_2$ satisfying $0 \leq \xi_1 \leq 1 \leq \xi_2$. In this case, a maximizer of ${\cal O}_u({\cal Z})$ corresponds to a quantile of the random variable ${\cal Z}$. For $\xi_1 = 0$, ${\cal O}_u({\cal Z})$ reduces to the CVaR. With a proper truncation, a concave quadratic utility function can also satisfy the non-decreasing property, resulting in a mean-variance combination; see \cite[Example~2.2]{ben2007old}. One special property of the OCE is that $-{\cal O}_u({\cal Z})$ gives a convex risk measure \cite[Section~2.2]{ben2007old}. One of the limitations of the OCE, when applied to our problem of estimating optimal IDRs, is that it does not take into account covariates for the choice of an optimal allocation between present and future consumption when data on the covariates are available. In this paper, motivated by applications in the field of precision medicine, we {\bf Individualize} the known concept of the OCE to a {\bf Decision-Rule based Optimized Covariate-Dependent Equivalent} (IDR-CDE) that also incorporates domain covariates. The new equivalent not only broadens the traditional expectation-only based criterion in the estimation of the optimal IDRs in precision medicine, but also enriches the combined concept of utility and risk measures and brings them to individual-based decision making. The proposed IDR-CDE is very flexible so that different utility functions will produce different optimal IDRs for various purposes. 
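To spell out the reduction of the OCE to the CVaR mentioned above, take $u(t) = \xi_1 \, \max(0,t) - \xi_2 \, \max(0,-t)$ with $\xi_1 = 0$ and $\xi_2 = 1/\gamma \geq 1$. Then $u({\cal Z} - \eta) = -\frac{1}{\gamma}\,(\eta - {\cal Z})_+$, and hence \[ {\cal O}_u({\cal Z}) \, = \, \displaystyle{ \sup_{\eta \in \mathbb{R}} } \, \left[ \, \eta - \frac{1}{\gamma}\, {\rm I\!E}\, (\eta -{\cal Z})_+ \, \right] \, = \, \text{CVaR}_{\,\gamma\,}({\cal Z}). \] Note that the requirement $1 \in \partial u(0)$ holds precisely because $\partial u(0) = [\,0, 1/\gamma\,]$ and $1/\gamma \geq 1$ for $\gamma \in (0,1)$.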
It turns out that estimating optimal IDRs under the IDR-CDE is a challenging optimization problem since it involves the discontinuous function ${\rm I\!I}(A = d(X))$. A major contribution of our work is that we overcome this technical difficulty by reformulating the estimation problem as a difference-of-convex (dc) constrained dc program under a mild assumption at the population level of the model. This reformulation allows us to employ a dc algorithm for solving the resulting dc program. Numerical results under the settings of binary actions and linear decision rules are presented to demonstrate the performance of our proposed model and algorithm. \subsection{Contributions and organization} The contributions of our paper are in two directions: modeling and optimization. In the area of modeling, we extend the expected-value maximization approach in precision medicine to a more general framework by incorporating risk; see Section~\ref{sec:main}. This is accomplished through the extension of the OCE to the IDR-CDE in which we incorporate domain covariates and individualized decision rules. Properties of the IDR-CDE are derived in Subsection~\ref{subsec:properties}. The optimal IDR problem under the IDR-CDE criterion is formally defined in Subsection~\ref{subsec:the IDE opt}. Two cases of this problem are considered: the decomposable case (Subsection~\ref{subsec:decomposable}) and the general case via empirical maximization. Examples of the IDR-CDE given in Subsection \ref{subsec:examples} conclude the modeling part of the paper. Beginning in Section~\ref{sec:empirical}, the solution of the empirical IDR-CDE maximization is the other major topic of our work. The challenge of this problem is the presence of the discontinuous indicator function in the objective function. The cornerstone of our treatment of this problem is its epigraphical formulation which is valid under a mild assumption at the model's population level. 
We next introduce a piecewise affine description of the epigraphical constraints from which we obtain a difference-of-convex constrained optimization problem to be solved; see Sections \ref{sec:alg} and \ref{sec:dca}. Although restricted to the empirical IDR-CDE maximization problem, we believe that our novel dc constrained programming treatment of the discontinuous optimization problem at hand can potentially be generalized to the composite optimization of univariate step functions with affine functions. In Section~\ref{sec:Numerical}, we demonstrate the effectiveness of our proposed IDR-CDE optimization over the expected-value maximization via numerical results. \section{The IDR-based CDE}\label{sec:main} In this section, we extend the OCE along two directions. The first extension is to take the expectation ${\rm I\!E}^{\,d\,}$ with respect to the decision-rule based probability distribution ${\rm I}\!{\rm P}^{\, d}$ in order to evaluate the outcome under the IDR $d$. The second extension is to replace the deterministic scalar $\eta$ over which the supremum in the OCE is taken by a member of a family ${\cal F}$ of measurable functions defined on the covariate space ${\cal X}$. This family ${\cal F}$ allows the incorporation of available data representing covariate information for prediction and risk reduction; see the inequality \eqref{ineq:OCE} below. For notational purposes, we let ${\cal L}^{\,r}({\cal X}, \Xi, {\rm I}\!{\rm P}_X) $ be the class of all measurable functions $f$ such that $\int \, |\, f(X) \, |^r \, d \, {\rm I}\!{\rm P}_X \,< \,\infty$ with $r \in [1, \infty]$. Here $({\cal X}, \Xi, {\rm I}\!{\rm P}_X)$ is the measure space with $\Xi$ being the $\sigma$-algebra generated by ${\cal X}$, and ${\rm I}\!{\rm P}_X$ being the corresponding marginal probability measure of $X$.
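The decision-rule based expectation ${\rm I\!E}^{\,d}$ can be read off data through the change of measure ${\rm I\!E}^{\,d}[\,g({\cal Z})\,] = {\rm I\!E}\left[\, g({\cal Z})\, {\rm I\!I}(A = d(X))/\pi(A\,|\,X) \,\right]$, with $\pi(a\,|\,x)$ the probability of assigning action $a$ given covariates $x$. The following small simulation illustrates this identity; the outcome model, the randomized propensity $\pi \equiv 1/2$, and the rule $d(X) = {\rm I\!I}(X > 0)$ are hypothetical choices made purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000

x = rng.normal(size=n)                 # covariate X
pi = 0.5                               # randomized actions: pi(a | x) = 1/2 for a in {0, 1}
a = rng.integers(0, 2, size=n)         # observed action A
z = x * a + rng.normal(size=n)         # outcome Z = X * A + noise (toy model)

d = (x > 0).astype(int)                # a candidate decision rule d(X) = I(X > 0)

# Inverse-propensity-weighted estimate of E^d[Z] from observational data
ipw = np.mean(z * (a == d) / pi)

# Oracle estimate: regenerate outcomes with every subject following d
oracle = np.mean(x * d + rng.normal(size=n))

print(ipw, oracle)                     # both approximate E[X * I(X > 0)] = 1 / sqrt(2 * pi)
```

The weighted average computed from the observational sample and the oracle average under $d$ agree up to Monte Carlo error.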
\subsection{Definition and properties} \label{subsec:properties} For an essentially bounded random variable ${\cal Z}$, the {\sl individualized decision-rule based optimized covariate-dependent equivalent} (IDR-CDE) of ${\cal Z}$ under decision rule $d$ with respect to a utility function $u \in {\cal U}$ and a linear space ${\cal F} \subseteq {\cal L}^{\,1}({\cal X}, \Xi, {\rm I}\!{\rm P}_{X})$ is \[ \begin{array}{lll} {\cal O}_{(u, {\cal F})}^{\, d}({\cal Z}) & \triangleq & \displaystyle{ \sup_{\alpha \in {{\cal F}}} } \, \left[ \, {\rm I\!E} \,\alpha(X) + {\rm I\!E}^{\,d\,} u( {\cal Z} - \alpha(X) ) \, \right] \\ [0.2in] & = & \displaystyle{ \sup_{\alpha \in {{\cal F}}} } \, \left[ \, {\rm I\!E} \,\alpha(X) + {\rm I\!E}\left( u( {\cal Z} - \alpha(X) ) \, \displaystyle{ \frac{{\rm I\!I}( A = d(X) )}{\pi(A | X)} } \, \right) \, \right] \\ [0.25in] & = & \displaystyle{ \sup_{\alpha \in {{\cal F}}} } \, {\rm I\!E}\left[ \, \left[ \, \alpha(X) + u( {\cal Z} - \alpha(X) ) \, \right] \, \displaystyle{ \frac{{\rm I\!I}( A = d(X) )}{\pi(A | X)} } \, \right], \end{array} \] where the last equality holds because of {${\rm I\!E}[\alpha(X)] = {\rm I\!E}^d[\alpha(X)]$ and the change of measure}. The space ${\cal F}$ is taken to contain all constant functions and to be such that the expectations in ${\cal O}_{(u, {\cal F})}^{\, d}({\cal Z})$ are well defined. One example of such a space is the family of all bounded measurable functions. We will specify ${\cal F}$ for different utility functions in the later discussion. The following proposition gives two preliminary properties of the IDR-CDE. In particular, the inequality (\ref{ineq:OCE}) bounds the IDR-CDE ${\cal O}_{(u, {\cal F})}^{\, d}({\cal Z})$ of the random variable ${\cal Z}$ in terms of the OCE of ${\cal Z}$ in two ways: one is an upper bound in terms of the expected OCE of ${\cal Z}$ conditional on $X$ and $A = d(X)$, and the other one is a lower bound in terms of the decision-rule based OCE of ${\cal Z}$.
Notably, both bounds are independent of the family ${\cal F}$; see (\ref{ineq:OCE}). \begin{proposition}\label{prop: prem} The following two statements hold. \noindent (a) For any $u \in {\cal U}$, one has ${\cal O}_{(u, {\cal F})}^{\, d}(0) = 0$. \noindent (b) For any linear space ${\cal F}$ containing all constant functions and for which ${\cal O}_{(u, {\cal F})}^{\, d}({\cal Z})$ is finite, \begin{equation}\label{ineq:OCE} {\rm I\!E}\,[\,{\cal O}_u({\cal Z} | X, A = d(X))\,] \, \geq \, {\cal O}_{(u, {\cal F})}^{\, d}({\cal Z}) \, \geq\, \displaystyle{ \sup_{\eta\in\mathbb{R}} } \, {\rm I\!E}^{\, d}\left[ \, \eta + u( {\cal Z} - \eta ) \, \right]. \end{equation} \end{proposition} \begin{proof} (a) Since $u \in {\cal U}$, one has $u(t) \leq t$ and then \[ {\cal O}_{(u, {\cal F})}^{\,d}(0) \, \leq\, \sup_{ \alpha \in {\cal F}} \left\{\,{\rm I\!E}\,[\,\alpha(X)\,] + {\rm I\!E}^{\,d\,}[\,0 - \alpha(X)\,]\right\} \, = \, 0, \] where the last equality holds since ${\rm I\!E}^{\,d\,}(\alpha(X)) = {\rm I\!E}\left[\alpha(X)\right]$. Meanwhile, $u(0) = 0$ leads to \[ {\cal O}_{(u, {\cal F})}^{\,d}(0) \, \geq\, {\rm I\!E}\,[\,0\,] + {\rm I\!E}^{\,d\,}[\,0 - 0\,] \, = \, 0, \] since $0 \in {\cal F}$. Combining the two inequalities gives the statement that ${\cal O}_{(u, {\cal F})}^{\,d}(0) = 0$.
\vspace{0.1in} (b) We can write \begin{equation*} \begin{aligned} {\cal O}_{(u, {\cal F})}^d({\cal Z}) & = \,\sup_{ \alpha \in {\cal F}} \left\{\,{\rm I\!E}\left[\;\sum_{a \in {\cal A}}\, {\rm I\!I}(d(X) = a) \,{\rm I\!E}\, \left[\,\alpha(X) + u({\cal Z} - \alpha(X)) \mid X, A = a\,\right] \,\right] \,\right\}\\[0.1in] & =\, \sup_{ \alpha \in {\cal F}} \left\{\, {\rm I\!E}\,[\,{\rm I\!E}\,[\,\alpha(X) + u({\cal Z} - \alpha(X)) \mid X, A = d(X)\,]\,] \,\right\}\\[0.05in] & =\, \sup_{ \alpha \in {\cal F}} \left\{\, {\rm I\!E}\,[\,\alpha(X) +{\rm I\!E}\,[\,u({\cal Z} - \alpha(X)) \mid X, A = d(X)\,]\,] \,\right\}\\[0.05in] & \leq \, {\rm I\!E}\,\left[\,\sup_{ s \in \mathbb{R}} \left\{\,s + {\rm I\!E}\,[\,u({\cal Z} - s) \mid X, A = d(X)\,]\,\right\}\,\right]\\[0.05in] & = \, {\rm I\!E}\,[\, {\cal O}_u({\cal Z} \,|\, X, A = d(X)) \,], \end{aligned} \end{equation*} where the inequality holds because for any $\alpha(X)$, we have $\alpha(X) +{\rm I\!E}\,[\,u({\cal Z} - \alpha(X)) \mid X, A = d(X)\,] \leq \displaystyle{ \sup_{ s \in \mathbb{R}} } \, \left\{\,s + {\rm I\!E}\,[\,u({\cal Z} - s) \mid X, A = d(X)\,]\,\right\}$. The right-hand inequality in (\ref{ineq:OCE}) holds because ${\cal F}$ contains all constant functions. \end{proof} Our proposed IDR-CDE measures the outcome ${\cal Z}$ via the decision-rule based optimal allocation between the covariate-dependent present value $\alpha(X)$ and the future gain ${\cal Z} - \alpha(X)$ under the utility function $u$. Unlike the original OCE, the allocation $\alpha(X)$ depends on the available covariate information $X$ such as environmental factors that can help to decide the optimal allocation. 
{Take linear regression as an example; if the response ${\cal Z}$ can be predicted by a linear combination of the covariates $X$, then the covariates $X$ can explain some of the variability in ${\cal Z}$; this can result in a reduction of the variance of ${\cal Z}$ given the information in $X$.} Thus considering the broader covariate-based allocation $\alpha(X)$ could improve the allocation and further reduce the risk. This is also demonstrated via Proposition~\ref{prop: prem}, by recalling that the negative of the standard OCE is a risk measure; indeed inequality (\ref{ineq:OCE}) confirms that incorporating covariate information may lead to a reduced risk measure. Proposition~\ref{prop: exchange} provides sufficient conditions for equality to hold between the IDR-CDE and the conditional OCE. Note that ${\cal O}_u({\cal Z} \,|\, X, A = d(X))$ is a random variable; it is the original OCE applied to the conditional distribution of ${\cal Z}$ given $X$ and $A = d(X)$. Thus we may think of it as a conditional OCE. The IDR-CDE preserves many properties of the standard OCE established in \cite{ben2007old}; several of them follow.
\begin{proposition}\label{basic prop} Given the two triplets $(X, A, {\cal Z})$ and $(d, u, {\cal F})$, the following properties hold: \begin{itemize} \item[(a)] \textbf{Shift Additivity}: for any essentially bounded random variable ${\cal Z}$ and any measurable function $c \in {\cal F}$ such that $c(X)$ is essentially bounded, ${\cal O}_{(u, {\cal F})}^{\,d}({\cal Z} + c\,(X)) = {\cal O}_{(u, {\cal F})}^{\,d}({\cal Z}) + {\rm I\!E}\,[\,c\,(X)\,]$; in particular, ${\cal O}_{(u, {\cal F})}^{\,d}(c\,(X)) = {\rm I\!E}\,[\,c\,(X)\,]$; \item[(b)] \textbf{Consistency}: for any measurable function $\widehat{c}$ defined over ${\cal X} \times {\cal A}$ such that $\widehat{c}\,(X, A)$ is essentially bounded, ${\cal O}_{(u, {\cal F})}^{\,d}(\widehat{c}\,(X, A)) = {\rm I\!E}\,[\,\widehat{c}\,(X, d(X))\,]$; \item[(c)] \textbf{Monotonicity}: for any two essentially bounded random variables ${\cal Z}_1$ and ${\cal Z}_2$ such that ${\cal Z}_1(\omega) \leq {\cal Z}_2(\omega)$ for almost all $\omega\in \Omega$, ${\cal O}_{(u, {\cal F})}^{\,d}({\cal Z}_1) \leq {\cal O}_{(u, {\cal F})}^{\,d}({\cal Z}_2)$; \item[(d)] \textbf{Concavity}: for any two essentially bounded random variables ${\cal Z}_1$ and ${\cal Z}_2$ and any $\lambda \in (0, 1)$, \[ {\cal O}_{(u, {\cal F})}^{\,d}\left(\lambda \, {\cal Z}_1 + (1 - \lambda)\, {\cal Z}_2\right) \, \geq \, \lambda\, {\cal O}_{(u, {\cal F})}^{\,d}({\cal Z}_1) + (1- \lambda)\,{\cal O}_{(u, {\cal F})}^{\,d}({\cal Z}_2).
\] \end{itemize} \end{proposition} \begin{proof} (a) We have \[ \begin{array}{l} {\cal O}_{(u, {\cal F})}^{\,d}({\cal Z} + c\,(X)) \\[0.05in] \hspace{1pc} = \, \displaystyle\sup_{ \alpha \in {\cal F}} \left\{\, {\rm I\!E}\,[\,\alpha(X)\,] + {\rm I\!E}^{\,d\,}[\, u({\cal Z} + c\,(X)- \alpha(X))\,]\, \right\} \nonumber\\[0.05in] \hspace{1pc} = \, {\rm I\!E}\,[\,c\,(X)\,] + \displaystyle\sup_{ \alpha \in {\cal F}} \left\{\,{\rm I\!E}[\,\alpha(X)- c\,(X)\,] + {\rm I\!E}^{\,d\,}[\,u({\cal Z} + c\,(X)- \alpha(X))\,] \,\right\} \nonumber \\[0.05in] \hspace{1pc} = \, {\rm I\!E}\,[\,c\,(X)\,] + \displaystyle\sup_{ (\alpha - c) \in {\cal F}} \left\{\, {\rm I\!E}\,[\,(\alpha - c)(X)\,] + {\rm I\!E}^{\,d\,}[\,u({\cal Z} - (\alpha - c)(X))\,] \,\right\} \nonumber \\[0.1in] \hspace{1pc} = \, {\rm I\!E}\,[\,c\,(X)\,] + {\cal O}_{(u, {\cal F})}^{\,d}({\cal Z}), \end{array} \] where the third equality holds since ${\cal F}$ is a linear space. \vspace{0.1in} \noindent (b) Since $u(t) \leq t$, we have \[ {\cal O}_{(u, {\cal F})}^{\,d}({\cal Z}) \, \leq\, \sup_{ \alpha \in {\cal F}} \left\{\,{\rm I\!E}\,[\,\alpha(X)\,] + {\rm I\!E}^{\,d\,}[\,{\cal Z} - \alpha(X)\,]\right\} \, = \, {\rm I\!E}^{\,d\,}[\, {\cal Z} \, ], \] where the equality holds because ${\rm I\!E}^{\,d\,}[\,\alpha(X)\,] = {\rm I\!E}\,[\,\alpha(X)\,]$ by the definition of ${\rm I}\!{\rm P}^{\,d}$. Therefore, if ${\cal Z} = \hat{c}(X, A)$ is essentially bounded, then \begin{equation*} \begin{array}{lll} {\cal O}_{(u, {\cal F})}^{\,d}({\cal Z}) & \leq & {\rm I\!E}^{\,d\,}[\,\widehat{c}\,(X, A)\,] \\ [0.1in] & = & {\rm I\!E}\,\left[\,\displaystyle{ \frac{\widehat{c}\,(X, A) \; {\rm I\!I}(d(X) = A)}{\pi(A | X)} } \,\right] \\[0.2in] & = & {\rm I\!E}\,\left[\,\displaystyle{ \frac{\widehat{c}\,(X, d(X)) \; {\rm I\!I}(d(X) = A)}{\pi(A | X)} } \,\right] \, = \, {\rm I\!E}\,\left[\, \widehat{c}\,(X, d(X))\, \right]. 
\end{array} \end{equation*} Since $u(0) = 0$, by the definition of the supremum in ${\cal O}_{(u, {\cal F})}^{\,d}$, we derive \begin{equation*} \begin{array}{lll} {\cal O}_{(u, {\cal F})}^{\,d}(\widehat{c}\,(X, A)) & \geq & {\rm I\!E}\,[\,\widehat{c}\,(X, d(X))\,] + {\rm I\!E}^{\,d\,}[\,u(\widehat{c}\,(X, A) - \widehat{c}\,(X, d(X)))\,] \\[0.1in] & = & {\rm I\!E}\,[\,\widehat{c}\,(X, d(X))\,] + {\rm I\!E}^{\,d\,}\,[\,u(\widehat{c}\,(X, d(X)) - \widehat{c}\,(X, d(X)))\,] \\ [0.1in] & = & {\rm I\!E}\,[\,\widehat{c}\,(X, d(X))\,]. \end{array} \end{equation*} Thus, ${\cal O}_{(u, {\cal F})}^{\,d}(\widehat{c}\,(X, A)) = {\rm I\!E}\,[\,\widehat{c}\,(X, d(X))\,]$. \noindent (c) If ${\cal Z}_1 \leq {\cal Z}_2$, then ${\cal Z}_1 - \alpha(X) \leq {\cal Z}_2 - \alpha(X)$ for $\alpha \in {\cal F}$. Since $u \in {\cal U}$ is a non-decreasing utility function, it follows that \begin{equation*} \begin{aligned} {\cal O}_{(u, {\cal F})}^{\,d}({\cal Z}_1) &= \sup_{\alpha \in {\cal F}} \left\{{\rm I\!E}\,[\,\alpha(X)\,] + {\rm I\!E}^{\,d\,}[\,u({\cal Z}_1 - \alpha(X))\,]\, \right\} \\[0.1in] &\leq \sup_{\alpha \in {\cal F}} \left\{\,{\rm I\!E}\,[\,\alpha(X)\,] + {\rm I\!E}^{\,d\,}[\,u({\cal Z}_2 - \alpha(X)) \,] \, \right\} \, =\, {\cal O}_{(u, {\cal F})}^{\,d}({\cal Z}_2). \end{aligned} \end{equation*} \noindent (d) For any $\lambda \in (0, 1)$, define the random variable ${\cal Z}_\lambda \,\triangleq \, \lambda\, {\cal Z}_1 + (1 - \lambda)\,{\cal Z}_2$ and, for $\alpha_1, \alpha_2 \in {\cal F}$, the measurable function $\alpha_\lambda(X) \,\triangleq \, \lambda \,\alpha_1(X) + (1-\lambda)\,\alpha_2(X)$. Clearly ${\cal Z}_\lambda$ is essentially bounded and $\alpha_\lambda(X) \in {\cal F}$.
Then by the concavity of $u$, we have \begin{equation*} \begin{array}{l} {\rm I\!E}\,[\,\alpha_\lambda (X)\,] + {\rm I\!E}^{\,d\,}[\,u({\cal Z}_\lambda - \alpha_\lambda(X))\,] \, \geq \, \lambda \left(\,{\rm I\!E}\,[\,\alpha_1(X)\,] + {\rm I\!E}^{\,d\,}[\,u({\cal Z}_1 - \alpha_1(X))\,]\,\right) + \\ [0.1in] \hspace{2.2in} (1-\lambda)\left({\rm I\!E}\,[\,\alpha_2(X)\,] + {\rm I\!E}^{\,d\,}[\,u({\cal Z}_2 - \alpha_2(X))\,]\,\right). \end{array} \end{equation*} Taking the supremum over $\alpha_1$ and $\alpha_2$ on both sides yields the stated result. \end{proof} Properties (a) and (b) extend corresponding results of the original OCE \cite[Theorem~2.1]{ben2007old} from a constant $\eta$ to a measurable function that depends on $X$ and $A$; properties (c) and (d) are essentially the same as those in \cite[Theorem~2.1]{ben2007old}. These properties justify the use of the IDR-CDE in decision making. Shift Additivity means that if the outcome is shifted by a function of the covariates, the IDR-CDE is shifted by the expectation of that function. Thus the IDR $d$ is invariant under such a shift. Consistency means that evaluating the IDR-CDE of a measurable function over ${\cal X} \times {\cal A}$ is equivalent to evaluating the expectation of this random function when the action follows the decision rule $d$. Monotonicity and Concavity have the same respective meanings as for the OCE: the former guarantees a larger CDE for a (stochastically) larger outcome; the latter ensures that the IDR-CDE of a convex combination of two outcomes under a decision rule $d$ is at least the corresponding convex combination of their individual IDR-CDEs; this property encourages the simultaneous combination of multiple outcomes for better results.
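The proof of Shift Additivity amounts to re-indexing the supremum by $\alpha \mapsto \alpha + c$; the underlying identity, namely that shifting both the outcome and the allocation by the same $c(X)$ changes the objective ${\rm I\!E}\,[\,\alpha(X)\,] + {\rm I\!E}^{\,d\,}[\,u({\cal Z} - \alpha(X))\,]$ by exactly ${\rm I\!E}\,[\,c(X)\,]$, can be checked on the weighted empirical form of the objective. In the sketch below, the data-generating model, utility parameters, allocation, and shift are all arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 10_000

x = rng.normal(size=n)
pi = 0.5                                # randomized actions with pi(a | x) = 1/2
a = rng.integers(0, 2, size=n)
z = x * a + rng.normal(size=n)          # toy outcome model
d = (x > 0).astype(int)
w = (a == d) / pi                       # weights defining the empirical E^d

def u(t, xi1=0.2, xi2=2.0):             # a piecewise-linear utility in the class U
    return xi1 * np.maximum(t, 0.0) - xi2 * np.maximum(-t, 0.0)

def objective(alpha, outcome):          # empirical  E[alpha(X)] + E^d[u(Z - alpha(X))]
    return np.mean(alpha) + np.mean(w * u(outcome - alpha))

alpha = 0.3 * x - 0.1                   # an arbitrary allocation alpha(X)
c = 2.0 * np.cos(x)                     # an arbitrary shift c(X)

lhs = objective(alpha + c, z + c)       # shift both Z and alpha by c(X)
rhs = objective(alpha, z) + np.mean(c)
print(lhs, rhs)                         # equal up to floating-point rounding
```

Taking the supremum of both sides over $\alpha$, for a linear family ${\cal F}$ containing $c$, then yields property (a).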
\subsection{The IDR optimization problem} \label{subsec:the IDE opt} We employ the IDR-CDE to evaluate a decision rule $d$ for the outcome ${\cal Z}$ via its optimized covariate equivalent, with the goal of estimating an optimal IDR that maximizes the IDR-CDE given the pair $(u,{\cal F})$ in the following sense. \begin{definition}\rm Given the triplet $(X, A, {\cal Z})$, the pair $(u,{\cal F})$, and the family ${\cal D}$ of decision rules, an optimal IDR is a rule $d^*$ such that \[ d^\ast(X) \, \in \, \operatornamewithlimits{argmax}_{d \in {\cal D}} \ {\cal O}^{\,d}_{(u,{\cal F})}({\cal Z}), \] if such a maximizer exists. \hfill $\Box$ \end{definition} \noindent Thus we can compute $d^\ast(X)$ and the optimal allocation $\alpha^\ast(X)$ jointly by solving \begin{equation}\label{eq: goal} \sup_{d \in {\cal D}, \alpha \in {\cal F}} {\rm I\!E}\,[\,\alpha(X)\,] + {\rm I\!E}^{\,d\,}[\,u({\cal Z} - \alpha(X))\,]. \end{equation} The rest of the paper is devoted to the solution of this optimization problem. The discussion is divided into two cases depending on whether we can exchange the supremum over $\alpha$ and the expectation ${\rm I\!E}^{\, d}$ in ${\cal O}^{\,d}_{(u,{\cal F})}({\cal Z})$. The exchangeable case requires the theory of decomposable spaces from variational analysis; this leads to an ``explicit'' determination of the optimal IDR via the evaluation of the conditional OCE given the covariate $X$ and the finite actions $a \in {\cal A}$; see Proposition~\ref{thm: optimal IDR}. The general case requires the numerical solution of an empirical optimization problem obtained from sampling of the covariates among available data. \subsection{Decomposable space and normal integrand} \label{subsec:decomposable} In order to exchange the supremum over $\alpha(X)$ and the expectation ${\rm I\!E}^{\, d}$, we first introduce the concepts of a decomposable space and a normal integrand.
\begin{definition} \rm \cite[Definitions~14.59 and 14.27]{rockafellar2009variational}. A space ${\cal M}$ of ${\cal B}_0$-measurable functions is {\sl decomposable} relative to an underlying measure space $(\Omega_0, {\cal B}_0, \mu)$ if for every function $x_0 \in {\cal M}$, every set $G \in {\cal B}_0$ with $\mu(G) < \infty$ and any bounded, measurable function $x_1$, the function $x_2(t) = x_0(t){\rm I\!I}(t \not\in G) + x_1(t) {\rm I\!I}(t \in G)$ belongs to ${\cal M}$. An extended-value function $f: \Omega_0 \times \mathbb{R} \rightarrow (-\infty, \infty]$ is a {\sl normal integrand} if its epigraphical mapping $\omega \rightarrow \mbox{epi } f(\omega,\cdot)$ is closed-valued and measurable. \hfill $\Box$ \end{definition} The space ${\cal L}^{\,r}({\cal X}, \Xi, {\rm I}\!{\rm P}_X)$ is decomposable for $r \in [ 1, \infty ]$ but the family of constant functions is not decomposable. These facts will be used in the examples to be discussed in the next subsection. We will employ the following simplified version of \cite[Theorem~14.60]{rockafellar2009variational} that provides the required conditions for the exchange of the supremum and expectation in our context. \begin{theorem} \label{th:normal integrand} Let ($\Omega_0$, ${\cal B}_0$, $\mu$) be a probability measure space, and ${\cal M}$ be a decomposable space of ${\cal B}_0$-measurable functions. Let $f: \Omega_0 \times \mathbb{R} \rightarrow (-\infty, \infty]$ be a normal integrand; let the integral functional $I_f(x) = \int_{\Omega_0}f(\omega, x(\omega))\, d\mu(\omega)$ be defined on ${\cal M}$.
The following two statements hold: (a) $\displaystyle{ \inf_{x \in {\cal M}} } \, \int_{\Omega_0}f(\omega, x(\omega))\, d\mu(\omega) = \int_{\Omega_0} \displaystyle{ \inf_{s \in \mathbb{R}} } \, f(\omega, s)\, d\mu(\omega)$ as long as $I_f(x)$ is finite for some $x \in {\cal M}$; and (b) $x_0 \in \underset{x \in {\cal M}}{\text{argmin}}\, I_f(x) \Longleftrightarrow x_0(\omega) \in \underset{s \in \mathbb{R}}{\text{argmin}}\, f(\omega, s)$ almost surely. \hfill $\Box$ \end{theorem} \noindent The following proposition shows that if ${\cal F}$ is decomposable, then equality holds between the IDR-CDE and the conditional OCE. \begin{proposition} \label{prop: exchange} If ${\cal F}$ is a decomposable space relative to $({\cal X}, \Xi, {\rm I}\!{\rm P}_X)$, then \[ {\cal O}_{(u, {\cal F})}^{\,d}({\cal Z}) = {\rm I\!E}\,[\,{\cal O}_u({\cal Z} \,|\, X, A = d(X)) \,]. \] \end{proposition} \begin{proof} Note that ${\rm I\!E}\,[\,\alpha(X) + u({\cal Z} - \alpha(X)) \mid X, A = d(X)\,]$ is measurable with respect to $X$ and upper semi-continuous with respect to $\alpha(X)$ for any $X$; thus its negative is a normal integrand \cite[Example~14.31]{rockafellar2009variational}. Hence we have \begin{equation*}\label{normal integrand} \begin{aligned} {\cal O}_{(u, {\cal F})}^d({\cal Z}) & =\, \sup_{ \alpha \in {\cal F}} \left\{\, {\rm I\!E}\,[\,{\rm I\!E}\,[\,\alpha(X) + u({\cal Z} - \alpha(X)) \mid X, A = d(X)\,]\,] \,\right\}\\ & =\, {\rm I\!E}\,\left[\,\sup_{ s \in \mathbb{R}} \left\{\,s + {\rm I\!E}\,[\,u({\cal Z} - s) \mid X, A = d(X)\,]\,\right\}\,\right] \\[0.05in] & = \, {\rm I\!E}\,[\, {\cal O}_u({\cal Z} \,|\, X, A = d(X)) \,], \end{aligned} \end{equation*} where the second equality is by Theorem~\ref{th:normal integrand}, applied to the negative of the integrand, because ${\cal F}$ is decomposable and ${\cal Z}$ is bounded.
\end{proof} \begin{remark}\rm Since the conditional OCE is independent of the space ${\cal F}$, it follows that so is ${\cal O}_{(u, {\cal F})}^d({\cal Z})$ provided that ${\cal F}$ is decomposable relative to $({\cal X}, \Xi, {\rm I}\!{\rm P}_X)$. Thus, in the following, if we specify ${\cal F}$ to be decomposable, then we omit ${\cal F}$ and write the IDR-CDE of the random variable ${\cal Z}$ as ${\cal O}_{u}^d({\cal Z})$. \hfill $\Box$ \end{remark} As a result of Proposition \ref{prop: exchange}, we can characterize the optimal IDR explicitly if ${\cal F}$ is a decomposable space. We recall that ${\cal A}$ is a finite set. \begin{proposition}\label{thm: optimal IDR} For a given decomposable space ${\cal F}$ and utility function $u \in {\cal U}$, an optimal IDR is given by \begin{equation} \label{eq:optimal IDR decomposable} d^{\,\ast}(X) \in \operatornamewithlimits{argmax}_{a \in {\cal A}} \ {\cal O}_u({\cal Z} \,|\, X, A = a). \end{equation} \end{proposition} \begin{proof} By the definition of ${\cal O}^{\,d}_u({\cal Z})$, we have for any $d \in {\cal D}$, \[ \begin{array}{lll} {\rm I\!E}\,\left[\,{\cal O}_u({\cal Z} \,|\, X, A = d(X)) \,\right] & = & {\rm I\!E}\,\left[\,\displaystyle{ \sum_{a \in {\cal A}} } \, {\rm I\!I}(d(X) = a) \, {\cal O}_u({\cal Z} \, | \, X, A = a) \, \right] \\ [0.2in] & \leq & {\rm I\!E} \, \left[ \, \displaystyle{ \sum_{a \in {\cal A}} } \, {\rm I\!I}(d(X) = a) \, \displaystyle{ \max_{a^{\prime} \in {\cal A}} } \, {\cal O}_u({\cal Z} \, | \, X, A = a^{\prime}) \, \right] \\ [0.2in] & = & {\rm I\!E} \, \left[ \, \left( \, \displaystyle{ \max_{a^{\prime} \in {\cal A}} } \, {\cal O}_u({\cal Z} \, | \, X, A = a^{\prime}) \, \right) \, \displaystyle{ \sum_{a \in {\cal A}} } \, {\rm I\!I}(d(X) = a) \, \right] \\ [0.2in] & = & {\rm I\!E} \, \left[ \, \displaystyle{ \max_{a^{\prime} \in {\cal A}} } \, {\cal O}_u({\cal Z} \, | \, X, A = a^{\prime}) \, \right]. 
\end{array} \] Therefore if (\ref{eq:optimal IDR decomposable}) holds, then $d^{\, \ast}$ is maximizing. Such a $d^{\, \ast}$ is a measurable function because $d^{\, \ast}(X) = a$ exactly when ${\cal O}_u({\cal Z} \,|\, X, A = a) \geq \displaystyle{ \max_{a^{\, \prime} \neq a} } \, {\cal O}_u({\cal Z} \,|\, X, A = a^{\, \prime})$, and the set of $X$ on which this inequality holds is measurable. \end{proof} \begin{remark}\rm The explicit expression of an optimal IDR is valid only when the space ${\cal F}$ is decomposable. {If the conditional distribution of ${\cal Z}$ given $X$ and $A = a$ is known, then it is possible to compute the individualized OCE ${\cal O}_u({\cal Z} \,|\, X, A = a)$ directly. For example, if we make certain parametric assumptions on this conditional distribution, we may be able to estimate these parameters based on the collected data and obtain optimal IDRs based on Proposition~\ref{thm: optimal IDR}. This is similar to the model-based methods in the literature of the expected-value function maximization approach. However, the empirical performance could be affected by the possible model misspecification. Therefore the individualized OCE ${\cal O}_u({\cal Z} \,|\, X, A = a)$ is primarily a conceptual notion and the expression (\ref{eq:optimal IDR decomposable}) is mainly for interpretation.} \hfill $\Box$ \end{remark} According to Proposition \ref{thm: optimal IDR}, an optimal IDR under our proposed CDE can be obtained by choosing, for each $X$, the action with the largest individualized OCE. In the next subsection, we will characterize the IDR-CDE via several illustrative examples for both decomposable and non-decomposable families of covariate functions.
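To make Proposition~\ref{thm: optimal IDR} concrete, the sketch below computes each action's conditional OCE by Monte Carlo with a one-dimensional grid search over $s$, and selects the maximizing action for one covariate value. The posited conditional model ${\cal Z} \mid X, A = a \sim N(\mu_a(X), 1)$ and the utility (piecewise linear with $\xi_1 = 0$ and $\xi_2 = 2$, for which the conditional OCE is a conditional CVaR at level $1/2$) are hypothetical choices made purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)

def u(t):                                    # piecewise-linear utility with xi1 = 0, xi2 = 2
    return -2.0 * np.maximum(-t, 0.0)

def conditional_oce(z_samples):
    """Approximate O_u(Z | X, A=a) = sup_s [ s + E u(Z - s) ] by a grid search."""
    s_grid = np.linspace(z_samples.min(), z_samples.max(), 400)
    vals = s_grid + np.array([np.mean(u(z_samples - s)) for s in s_grid])
    return vals.max()

def mu(x, action):                           # assumed conditional mean of Z given X, A
    return x if action == 1 else -x

x0 = -1.5                                    # one patient's covariate value
oce = {action: conditional_oce(mu(x0, action) + rng.normal(size=50_000))
       for action in (0, 1)}
d_star = max(oce, key=oce.get)               # the action with the largest conditional OCE

print(oce, d_star)                           # here mu_0(x0) = 1.5 > mu_1(x0), so d_star = 0
```

With equal conditional variances the ranking here coincides with the conditional-mean ranking, but under unequal tails the CVaR-type OCE can prefer the action with the better lower tail rather than the better mean.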
\subsection{Illustrative examples} \label{subsec:examples} We present several common utility functions to further explain the IDR-CDE for individualized decision making. We will focus on two families: ${\cal L}^{\, r}({\cal X}, \Xi, {\rm I}\!{\rm P}_{X})$ for some $r \in [1, \infty]$ and the family of constant functions, which we denote by ${\cal F}_c$. The former family is a decomposable linear space and the latter family is not decomposable. \begin{example}[Identity utility function]\label{ex 1}\rm Let $u(t) = t$; then by the definition, ${\cal O}_u^{\,d}({\cal Z}) = {\rm I\!E}^{\,d\,}[\,{\cal Z}\,]$ for both families ${\cal L}^{\,1}({\cal X}, \Xi, {\rm I}\!{\rm P}_{X})$ and ${\cal F}_c$. This recovers the expected-value maximization framework in the existing literature of precision medicine. By Proposition~\ref{thm: optimal IDR}, for the family ${\cal L}^{\, r}({\cal X}, \Xi, {\rm I}\!{\rm P}_{X})$, an optimal IDR under the identity utility function is given by: \[ d^{\,\ast}(X) \in \displaystyle\operatornamewithlimits{argmax}_{a \in {\cal A}} {\rm I\!E}\,[\,{\cal Z} \,|\, X, A = a \,] \, , \] which selects the action with the largest expected outcome ${\cal Z}$ among all the actions given covariates $X$. \hfill $\Box$ \end{example} \begin{example}[Piecewise Linear Utility Function]\label{ex 2}\rm Let \[u(t) = \xi_1 \max(0, t) - \xi_2 \max(0, -t),\hspace{1pc}\mbox{where} \;\, 0 \leq \xi_1 < 1 < \xi_2.\] It can be verified that $u \in {\cal U}$. \vspace{0.1in} \noindent {\bf (a) Decomposable space: ${\cal F} = {\cal L}^{\,1}({\cal X}, \Xi, {\rm I}\!{\rm P}_{X})$}. The corresponding IDR-CDE is \begin{equation} \label{CVaR} {\cal O}_u^{\,d}({\cal Z}) = \sup_{ \alpha \in {\cal F}} \left\{ \begin{array}{ll} \,{\rm I\!E}\,[\,\alpha(X)\,] + \\[0.1in] {\rm I\!E}^{\,d\,}[\,\xi_1 \max\,(\,0\, ,\, {\cal Z} - \alpha(X)\,) - \xi_2 \max\,(\,0\, , \, \alpha(X) - {\cal Z}\,)\,] \, \end{array} \right\}.
\end{equation} Based on Proposition~\ref{prop: exchange}, with $\gamma \triangleq \displaystyle{ \frac{1 - \xi_1}{\xi_2-\xi_1} }$, we can write ${\cal O}_u^{\,d}({\cal Z})$ as \[ \begin{array}{l} {\rm I\!E}\,\left[\,\sup_{ s \in \mathbb{R}} \big\{\, s + {\rm I\!E}\,[\,\xi_1\,\max\,(\,0, {\cal Z} - s\,) - \xi_2\,\max\,(\,0\,,\, s - {\cal Z}\,) \mid X, A = d(X)\,]\,\big\}\,\right]\\[0.1in] = \, \xi_1{\rm I\!E}^{\,d\,}[\, {\cal Z}\,] + (1-\xi_1)\,{\rm I\!E}\,\left[\, \sup_{ s \in \mathbb{R}} \left\{\, s - \frac{1}{\gamma}\, {\rm I\!E}\,[ \,\max\,(\,0\,,\, s - {\cal Z}\,) \mid X, A = d(X)\,]\right\}\,\right] \\[0.1in] = \, \xi_1\, {\rm I\!E}^{\,d\,}[\, {\cal Z}\,] + (1-\xi_1)\,{\rm I\!E}\,[\, \text{CVaR}_{\,\gamma\,}({\cal Z} \mid X, A = d(X))\,], \end{array} \] where given $X$, the supremum is attained at the $\gamma$-quantile of the conditional distribution of ${\cal Z}$ given $X$ and $A = d(X)$ almost surely. Therefore, under the piecewise affine utility function, ${\cal O}_u^{\,d}({\cal Z})$ can be interpreted as a convex combination of the expected value of ${\cal Z}$ and its expected CVaR given the IDR $d$. Thus this ${\cal O}_u^{\,d}({\cal Z})$ considers both ${\rm I\!E}^{\,d\,}[\, {\cal Z}\,]$ and the CVaR of the outcome simultaneously. In the limiting case $\xi_1 = \xi_2 = 1$, this recovers Example~\ref{ex 1}. By Proposition \ref{thm: optimal IDR}, a corresponding optimal IDR is \begin{equation} \label{ex 2 a} d^{\,\ast}(X) \in \displaystyle\operatornamewithlimits{argmax}_{a \in {\cal A}}\, \big\{\, \xi_1 \,{\rm I\!E}\,[\,{\cal Z} \,|\, X, A= a\,] + (1-\xi_1)\, \text{CVaR}_{\,\gamma}({\cal Z} \,|\, X, A = a)\,\big\}. \end{equation} Therefore, under this piecewise affine utility function, an optimal IDR is to choose the action with the largest convex combination of the expected outcome and the CVaR of the outcome ${\cal Z}$ among all the actions given covariates $X$.
\hfill $\Box$ \vspace{0.1in} \noindent {\bf (b) Family of constant functions: ${\cal F} = {\cal F}_c$}. The IDR-CDE reduces to \cite[Example~2.3]{ben2007old} with the IDR $d$ involved: \[ \begin{aligned} {\cal O}_{(u, {\cal F}_c)}^{\,d}({\cal Z}) &= \, \sup_{c \in \mathbb{R}} \left\{\, c + {\rm I\!E}^{\,d\,}[\,\xi_1 \, \max\,(\,0\,,\, {\cal Z} - c\,) - \xi_2\, \max\,(\,0\,,\, c - {\cal Z}\,)\,] \, \right\}\\[0.05in] & = \, \xi_1 \, {\rm I\!E}^{\,d\,}[\,{\cal Z}\,] + (1-\xi_1)\,\sup_{c \in \mathbb{R}}\left\{\,c - \frac{\xi_2-\xi_1}{1-\xi_1}\,{\rm I\!E}^{\,d\,}[\,\max\,(\,0\, , \, c- {\cal Z}\,)\,] \,\right\}. \end{aligned} \] The supremum on the right-hand side is attained at any $c^\ast$ satisfying ${\rm I}\!{\rm P}^{\,d\,}({\cal Z} \leq c^\ast) \geq \gamma$ and ${\rm I}\!{\rm P}^{\,d\,}({\cal Z} \geq c^\ast) \geq 1-\gamma$, which is the $\gamma$-quantile of ${\cal Z}$ under the probability distribution ${\rm I}\!{\rm P}^{\,d}$, denoted by $Q^d_\gamma({\cal Z})$. The corresponding maximum value is $\xi_1\, {\rm I\!E}^{\,d\,}[\,{\cal Z}\,] + (1-\xi_1)\,\text{CVaR}^d_{\,\gamma}({\cal Z})$. By definition, an optimal IDR under ${\cal F}_c$ is given by \[ d^\ast \in \displaystyle\operatornamewithlimits{argmax}_d\,\left\{\xi_1{\rm I\!E}^{\,d\,}[\,{\cal Z}\,] + (1-\xi_1)\,\text{CVaR}^{\,d}_\gamma({\cal Z})\,\right\}. \] While this expression is insightful, the above optimal IDR $d^\ast$ does not have an explicit form like \eqref{ex 2 a}, since Proposition \ref{thm: optimal IDR} no longer applies: ${\cal F}_c$ is not a decomposable space.
\hfill $\Box$ \end{example} \begin{example}[Quadratic Utility]\label{ex 3}\rm Let \[ u(t) \, = \, \left\{ \begin{array}{ll} t - \displaystyle{ \frac{1}{2\tau} } \, t^2 & \mbox{if $t \, \leq \, \tau$} \\ [0.1in] \tau/2 & \mbox{otherwise}, \end{array} \right. \hspace{1pc} \mbox{where} \ \tau \, = \, \displaystyle{ \sup_{\omega \in \Omega} } \, {\cal Z}(\omega) - \displaystyle{ \inf_{\omega \in \Omega} } \, {\cal Z}(\omega), \] be a quadratic function truncated to be an admissible utility function in the family ${\cal U}$ and to adapt to the range of the random outcome ${\cal Z}$. Note that $u$ is continuously differentiable with derivative $u^{\, \prime}(t) = \left( \, 1 - \displaystyle{ \frac{t}{\tau} } \, \right) \, {\rm I\!I}(t \leq \tau)$. \vspace{0.1in} \noindent {\bf (a) Decomposable space: ${\cal F} = {\cal L}^{\,2}({\cal X}, \Xi, {\rm I}\!{\rm P}_{X})$}. By Proposition \ref{prop: exchange}, we have \[ \begin{array}{ll} {\cal O}_u^{\,d}({\cal Z}) & = \, {\rm I\!E} \, \left[ \, \displaystyle{ \sup_{s \in \mathbb{R}} } \, \left\{ \, s + {\rm I\!E} \, \left[ \, u({\cal Z} - s) \, \mid \, X, A = d(X) \, \right] \, \right\} \, \right] \\ [0.2in] & = \, {\rm I\!E}^{\,d\,}[\,{\cal Z}\,] - {\rm I\!E}\,\left[\, \displaystyle{ \frac{1}{2\tau} } \,{\rm I\!E}\,\left[\,\left(\,{\cal Z} - {\rm I\!E}\,[\,{\cal Z} \,|\, X, A = d(X)\,]\,\right)^2 \,|\, X, A = d(X)\,\right]\,\right] \\ [0.2in] & = \, {\rm I\!E}^{\,d\,}[\,{\cal Z}\,]- \displaystyle{ \frac{1}{2\tau} } \, {\rm I\!E}\,\left[\,\text{var}({\cal Z} \, |\, X, A = d(X))\,\right], \end{array} \] where the supremum is attained at $\alpha^\ast(X) = {\rm I\!E}\,[\,{\cal Z} \,|\, X, A = d(X)\,]$ almost surely and $\text{var}(\bullet)$ is the variance of a random variable. The second equality is based on \cite[Remark 2.1]{ben2007old} by noting that ${\rm I}\!{\rm E}\left[ \, u^{\, \prime}({\cal Z} - \alpha^*(X)) \mid X, A = d(X) \, \right] = 1$.
The interchange between expectation and derivative is justified by the dominated convergence theorem under the restriction that $s \in \left[ \, \displaystyle{ \inf_{\omega \in \Omega} } \, {\cal Z}(\omega), \, \displaystyle{ \sup_{\omega \in \Omega} } \, {\cal Z}(\omega) \, \right]$. Thus ${\cal O}_u^{\,d}({\cal Z})$ can be interpreted as an (individualized) mean-variance risk measure under the decision rule $d$, generalizing the mean-variance criterion, frequently used in portfolio selection, from the setting without $A$ and $X$. An optimal IDR is given by \[ d^{\,\ast}(X) \,\in \displaystyle\operatornamewithlimits{argmax}_{a \in {\cal A}}\, \left\{ \, {\rm I\!E}\,[\,{\cal Z} \,|\, X, A=a\,] - \displaystyle{ \frac{1}{2\tau} } \, \text{var}\,[\, {\cal Z} \, |\, X, A = a\, ]\, \right\}, \] which selects, given covariates $X$, the action that maximizes the expected outcome balanced against the variance. \vspace{0.1in} \noindent {\bf (b) Family of constant functions: ${\cal F} = {\cal F}_c$}. Similar to part (a) above, direct computation yields $ {\cal O}_u^{\,d}({\cal Z}) = {\rm I\!E}^{\,d\,}[\,{\cal Z}\,] - \displaystyle{ \frac{1}{2\tau} } \, \text{var}^{\, d}({\cal Z})$ with $c^\ast = {\rm I\!E}^{\, d}[{\cal Z}]$, where $\text{var}^{\, d}({\cal Z})$ denotes the variance of ${\cal Z}$ under ${\rm I}\!{\rm P}^{\, d}$. An optimal IDR under ${\cal F}_c$ is \[ \displaystyle\operatornamewithlimits{argmax}_d \left\{{\rm I\!E}^{\,d\,}[\,{\cal Z}\,]-\frac{1}{2\tau}\text{var}^{\,d} \left({\cal Z}\right)\right\}, \] which requires further evaluation by a numerical procedure. 
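The closed-form value admits a quick numerical check in the covariate-free case. The following sketch (illustrative only) uses bounded synthetic outcomes, uniform on $[0,1]$, maximizes $s + {\rm I\!E}[u({\cal Z}-s)]$ over a grid, and compares the maximizer and the optimal value with ${\rm I\!E}[{\cal Z}]$ and ${\rm I\!E}[{\cal Z}] - \text{var}({\cal Z})/(2\tau)$:

```python
import numpy as np

rng = np.random.default_rng(1)
Z = rng.uniform(0.0, 1.0, size=100_000)   # bounded synthetic outcomes
tau = Z.max() - Z.min()                   # range of the outcomes

def u(t):                                 # truncated quadratic utility
    return np.where(t <= tau, t - t**2 / (2.0 * tau), tau / 2.0)

grid = np.linspace(0.0, 1.0, 501)
vals = np.array([s + u(Z - s).mean() for s in grid])
s_star = grid[vals.argmax()]

# theory: maximizer E[Z], optimal value E[Z] - var(Z)/(2 tau)
print(s_star, Z.mean(), vals.max(), Z.mean() - Z.var() / (2.0 * tau))
```

The agreement is exact up to the grid resolution because, for this sample, the truncation in $u$ never binds near the maximizer.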
\hfill $\Box$ \end{example} From Example~\ref{ex 2} and Example~\ref{ex 3}, we see that one difference between a covariate-dependent $\alpha(X)$ and a constant $\alpha(X) \in {\cal F}_c$ is the following: with the former, the IDR-CDE is the expected individualized OCE under the decision rule $d$; with a constant $\alpha$, in contrast, it is only the OCE of the random variable ${\cal Z}$ under ${\rm I}\!{\rm P}^{\, d}$. {To further understand this difference, consider a toy example with ${\cal Z} = X_1A + \varepsilon$, where $X_1$ and $\varepsilon$ independently follow the standard normal distribution. Suppose we use the utility function in Example~\ref{ex 2}(b) with $\xi_1 = 0$ and $\xi_2 = 2$ to evaluate the IDR $d(X_1) = 1$. By calculation, $c^\ast = 0$, so the criterion focuses on the median of ${\cal Z}$ under the probability distribution ${\rm I}\!{\rm P}^{\, d}$. The corresponding ${\cal O}_{(u, {\cal F}_c)}^{\,d}({\cal Z}) = {\rm I\!E}[{\rm I\!E}[{\cal Z}{\rm I\!I}({\cal Z} \leq 0)| X, A = 1]]$. For a patient with covariate $X_1 = -2$, however, ${\rm I}\!{\rm P}({\cal Z} \leq 0 \mid X_1 = -2, A = 1) \approx 98\%$, so for this patient ${\cal O}_{(u, {\cal F}_c)}^{\,d}({\cal Z})$ accounts only for outcomes below roughly the $98\%$-quantile, which is not satisfactory.} As a result, when $\alpha(X)$ is restricted to constant functions, the optimal IDR cannot be obtained by comparing the actions separately, and such an IDR cannot control the individualized OCE. {So far we have considered only single-stage individualized decision-making problems. It would also be meaningful to extend the proposed IDR-CDE to multi-stage decision-making scenarios in order to deliver time-varying optimal IDRs with risk exposure control. 
Since such an extension requires more advanced modeling and treatment, we leave it for future research.} \section{The Empirical IDR Optimization Problem} \label{sec:empirical} \label{sec:alg} In this section, we discuss how to numerically solve the optimization problem \eqref{eq: goal} at the empirical level {without assuming any data-generating mechanism}. In what follows, we focus on estimating the optimal IDR with ${\cal A} = \{-1, 1\}$, i.e., a binary action space. Further, for computational purposes, we restrict the decision rule to the form $d( X ) = \mbox{sign}(f( X;\theta ))$ for a parametric linear estimation function $f( X;\theta ) = \beta^{\, T} X + \beta_0 = \theta^{\, T} \widehat{X}$, where $\theta \triangleq \left( \begin{array}{l} \beta \\ \beta_0 \end{array} \right) \in \mathbb{R}^{p+1}$ contains the unknown coefficients to be estimated and $\widehat{X} \triangleq \left( \begin{array}{l} X \\ 1 \end{array} \right)$. {Extensions to multi-action spaces and nonlinear decision rules are possible but necessitate more advanced modeling and treatment; they are left for future research.} Using the functional-margin representation from standard classification, we then have ${\rm I\!I}\left( A = d( X ) \right) = {\rm I\!I}\left( A \, f(X;\theta) > 0 \right)$ for any nonzero $f(X;\theta)$. Therefore, the IDR-CDE optimization problem can be equivalently written as: \begin{equation} \label{eq:complete EV-formulation} \displaystyle{ \operatornamewithlimits{\mbox{sup}}_{\theta \triangleq ( \beta,\beta_0 ) \in \mathbb{R}^{p+1}, \, \alpha \in {\cal F}} } \, \left\{ \begin{array}{l} {\rm I\!E}\left[ \, {\cal Z} \, \displaystyle{ \frac{{\rm I\!I}( A \, f(X;\theta) > 0 )}{\pi(A | X)} } \, \right] + \\ [0.2in] {\rm I\!E}\left[ \, \left[ \, \alpha(X) - {\cal Z} + u( {\cal Z} - \alpha(X) ) \, \right] \, \displaystyle{ \frac{{\rm I\!I}( A \, f(X;\theta) > 0 )}{\pi(A | X)} } \, \right] \end{array} \right\}. 
\end{equation} Before proceeding, we describe two characteristics of this problem that are important in the algorithmic development and provide our proposal to address them. \vspace{0.1in} \noindent {\bf (a) The discontinuity of the indicator function.} The function ${\rm I\!I}( A \, f(X;\theta) > 0 )$ is a lower semicontinuous, albeit discontinuous function. This seems to prohibit us from employing continuous optimization algorithms to solve problem \eqref{eq:complete EV-formulation}. A natural way to resolve this issue is to approximate the indicator function by a continuous function, such as the piecewise truncated hinge loss as in \cite{wu2007robust}: \[ T_{\delta}(x) \, \triangleq \, \displaystyle{ \frac{1}{2 \, \delta} } \, \underbrace{\left[ \, \max\left( \, x + \delta, 0 \, \right) - \max\left( \, x - \delta, 0 \, \right) \, \right]}_{\mbox{nonnegative}}\hspace{1pc} \mbox{for some $\delta > 0$}, \] so that \[ \begin{array}{lll} {\rm I\!I}\left( A \, f(X;\theta ) \, > \, 0 \right) & \approx & T_{\delta}(A \, f(X;\theta)) \\ [5pt] & = & \underbrace{\displaystyle{ \frac{1}{2 \, \delta} } \, \max\left( \, A \, f(X;\theta) + \delta, 0 \, \right)}_{\mbox{denoted $T_{\delta}^+(\theta;X,A)$}} - \underbrace{\displaystyle{ \frac{1}{2 \, \delta} } \, \max\left( \, A \, f(X;\theta) - \delta, 0 \, \right)}_{\mbox{denoted $T_{\delta}^-(\theta;X,A)$}}, \end{array} \] where both functions $T_{\delta}^{\pm}(\bullet;X,A)$ are nonnegative, convex, and piecewise affine; thus the approximating function is non-convex and non-differentiable, making the resulting optimization problem: \begin{equation} \label{eq:truncated hinge loss approx} \begin{array}{l} \displaystyle\operatornamewithlimits{sup}_{ \substack{\theta \triangleq ( \beta,\beta_0 ) \in \mathbb{R}^{p+1}, \\[0.04in] \alpha \in {\cal F}}} \left\{ \begin{array}{ll} {\rm I\!E}\left[ \, {\cal Z} \, \displaystyle{ \frac{T_{\delta}^+(\theta;X,A) - T_{\delta}^-(\theta;X,A)}{\pi(A | X)} } \, \right] + \\ [0.2in] \; {\rm 
I\!E}\left[ \, \left[ \, \alpha(X) - {\cal Z} + u( {\cal Z} - \alpha(X) ) \, \right] \, \displaystyle{ \frac{T_{\delta}^+(\theta;X,A) - T_{\delta}^-(\theta;X,A)}{\pi(A | X)} } \, \right]\end{array}\right\} \end{array} \end{equation} difficult to solve. Since we are interested in designing an algorithm that provably converges to a properly defined stationary solution, care is needed to handle the combined non-convexity and non-differentiability of the approximated problem (\ref{eq:truncated hinge loss approx}) and the discontinuity of \eqref{eq:complete EV-formulation}. These features are particularly relevant when we consider the convergence of the former problem to the latter as $\delta \downarrow 0$. To illustrate the difficulty that some algorithms face in solving (\ref{eq:truncated hinge loss approx}), we mention that a majorization-minimization type algorithm \cite{Lange2016MM} may be too complex to implement because a majorizing function may be quite complicated, and that block coordinate descent type methods may fail to converge to a stationary point of this problem because the needed regularity assumptions \cite{tseng2001convergence} cannot be expected to hold. Therefore, an alternative way to tackle the discontinuity of the indicator function is needed; this is the focus of Subsection~\ref{subsec:reformulation}. \vspace{0.1in} \noindent {\bf (b) The positive scale-invariance of the indicator function.} The function ${\rm I\!I}( A \, f(X;\theta) > 0 )$ is positively scale-invariant: scaling $f(X;\theta)$ by any positive constant does not change the objective value of the problem \eqref{eq:complete EV-formulation}. This could cause computational instability and, more seriously, incorrect evaluation of the indicator function due to round-off errors; these numerical issues become more pronounced when $f(X;\theta)$ is close to 0 in practical implementations of an algorithm. 
One way to guard against such undesirable characteristics of the indicator function is to solve two optimization problems with the bias term $\beta_0$ set equal to $\pm 1$, respectively, and to accept as the solution the one with the smaller objective value. In the development below, this safeguard is adopted, as can be seen in the formulation (\ref{eq:OCE binaryoptimization problem}). \subsection{Difference-of-convex reformulation of (\ref{eq:complete EV-formulation})} \label{subsec:reformulation} In this subsection, we propose a method to transform the discontinuous optimization problem \eqref{eq:complete EV-formulation}, which involves the indicator function, into a continuous optimization problem by means of a mild assumption. Our approach is to reformulate the discontinuous problem \eqref{eq:complete EV-formulation} via its epigraphical representation. Since ${\rm I\!I}( \, \bullet\, > 0 )$ is a lower semicontinuous function, its epigraph \[ {\rm epi}\,{\rm I\!I}( \, \bullet\, > 0 ) \,\triangleq \, \left\{\,(t, s)\in\mathbb{R}\times \mathbb{R}\mid t\geq {\rm I\!I}( s > 0 )\,\right\} \] is a closed set \cite[Theorem 7.1]{rockafellar1970convex}. However, the random variable $\mathcal{Z}$ may attain positive values, which makes it essential to also consider the hypograph of ${\rm I\!I}(\bullet > 0 )$, i.e., the set \[ {\rm hypo}\,{\rm I\!I}( \, \bullet\, > 0 ) \,\triangleq \, \left\{\,(t, s)\in\mathbb{R}\times \mathbb{R}\mid t\leq {\rm I\!I}( s > 0 )\,\right\}. \] Since the indicator function is not upper semicontinuous, the latter set is not closed. We thus consider an approximation of ${\rm I\!I}(\bullet > 0 )$ by the upper semicontinuous function ${\rm I\!I}(\bullet \geq 0 )$, which has the closed hypograph \[ {\rm hypo}\,{\rm I\!I}( \, \bullet\, \geq 0 ) \,\triangleq \, \left\{\,(t, s)\in\mathbb{R}\times \mathbb{R}\mid t\leq {\rm I\!I}( s \geq 0 )\,\right\}. 
\] Interestingly, the sets ${\rm epi}\,{\rm I\!I}( \, \bullet\, > 0 )$ and ${\rm hypo}\,{\rm I\!I}( \, \bullet\,\geq 0 )$ are each a finite union of polyhedra that admits an extremely simple dc representation given in the next lemma. See also Figures~\ref{fig:epi} and \ref{fig:hypo} for illustration. No proof is required for the lemma. \begin{lemma}\label{lemma:epi representation} For any $t,s\in \mathbb{R}$, the following two statements hold: (i) $(t,s)\in {\rm epi}\,{\rm I\!I}( \bullet > 0 )$ if and only if $\max(-t, s) - \max(t+s-1, 0)\leq 0$\,; (ii) $(t,s)\in {\rm hypo}\,{\rm I\!I}( \bullet \geq 0 )$ if and only if $\max(t+s-1, 0) - \max(-t, s) \leq 0$. \hfill $\Box$ \end{lemma} \begin{figure}[h] \centering \fbox{ \begin{minipage}{.4\textwidth} \begin{center} \begin{tikzpicture}[scale = 0.8] \draw[gray!40, thin, step=0.5] (-2.5,-2) grid (2.5,2.5); \draw[thick,->] (-2.7,0) -- (2.7,0) node[right] {$s$}; \draw[thick,->] (0,-2.2) -- (0,2.8) node[above] {$t$}; \foreach \x in {-2,...,-1} \draw (\x,0.05) -- (\x,-0.05) node[below] {\tiny\x}; \foreach \x in {1,...,2} \draw (\x,0.05) -- (\x,-0.05) node[below] {\tiny\x}; \foreach \y in {-2,...,2} \draw (-0.05,\y) -- (0.05,\y) node[right] {\tiny\y}; \draw[red, line width=0.5mm] (-2.5,0) -- (0,0); \draw[red, line width=0.5mm] (0.07,1) -- (2.5,1); \draw [red,thick, fill = white] (0,1) circle (0.7mm); \draw [red,fill=red] (0,0) circle (0.7mm); \fill[opacity=0.6,pattern=north east lines] (0,1) -- (2.5,1) -- (2.5,2.5) -- (0,2.5) -- cycle; \fill[opacity=0.6,pattern=north east lines] (-2.5,0) -- (0,0) -- (0,2.5) -- (-2.5,2.5) -- cycle; \end{tikzpicture} \caption{\small the region (shaded) for ${\rm epi}\,{\rm I\!I}( \bullet > 0 )$} \label{fig:epi} \end{center} \end{minipage} \quad \begin{minipage}{.4\textwidth} \begin{center} \begin{tikzpicture}[scale = 0.8] \draw[gray!40, thin, step=0.5] (-2.5,-2) grid (2.5,2.5); \draw[thick] (-2.7,0) -- (-0.07,0); \draw[thick,->] (0.07,0) -- (2.7,0) node[right] {$s$}; \draw[thick] (0,-2.2) -- 
(0,-0.07); \draw[thick,->] (0,0.07) -- (0,2.8) node[above] {$t$}; \foreach \x in {-2,...,-1} \draw (\x,0.05) -- (\x,-0.05) node[below] {\tiny\x}; \foreach \x in {1,...,2} \draw (\x,0.05) -- (\x,-0.05) node[below] {\tiny\x}; \foreach \y in {-2,...,2} \draw (-0.05,\y) -- (0.05,\y) node[right] {\tiny\y}; \draw[red, line width=0.5mm] (-2.5,0) -- (-0.07,0); \draw[red, line width=0.5mm] (0,1) -- (2.5,1); \draw [red,fill=red] (0,1) circle (0.7mm); \draw [red,thick,fill = white] (0,0) circle (0.7mm); \fill[pattern=north east lines,opacity=0.6] (0,1) -- (2.5,1) -- (2.5,-2) -- (0,-2) -- cycle; \fill[pattern=north east lines,opacity=0.6] (-2.5,0) -- (0,0) -- (0,-2) -- (-2.5,-2) -- cycle; \end{tikzpicture} \caption{\small the region (shaded) for ${\rm hypo}\,{\rm I\!I}( \bullet \geq 0 )$} \label{fig:hypo} \end{center} \end{minipage} } \end{figure} Denoting ${\cal Z}^- \triangleq \max(-{\cal Z}, 0)$ and ${\cal Z}^+ \triangleq \max({\cal Z}, 0)$, we assume that \[ {\rm I\!E}\left[ \, {\cal Z}^+ \, \displaystyle{ \frac{{\rm I\!I}( A \, f(X;\theta) = 0 )}{\pi(A | X)} } \, \right] = 0. \] Under this assumption, problem \eqref{eq:complete EV-formulation} is equivalent to \begin{equation} \label{eq:complete EV-formulation 2} \displaystyle{ \operatornamewithlimits{\mbox{minimize}}_{ \beta \in \mathbb{R}^{p}, \alpha \in {\cal F}} } \, \left\{ \begin{array}{l} {\rm I\!E}\left[ \, {\cal Z}^- \, \displaystyle{ \frac{{\rm I\!I}( A \, f(X;\theta) > 0 )}{\pi(A | X)} } \, \right] - {\rm I\!E}\left[ \, {\cal Z}^+ \, \displaystyle{ \frac{{\rm I\!I}( A \, f(X;\theta) \geq 0 )}{\pi(A | X)} } \, \right]\\ [0.2in] \displaystyle{ } \, +{\rm I\!E}\left[ \, \left( \, \underbrace{{\cal Z} - \alpha(X) - u( {\cal Z} - \alpha(X))}_{\mbox{nonnegative}} \, \right) \, \displaystyle{ \frac{{\rm I\!I}(A \, f(X;\theta) > 0 )}{\pi(A | X)} } \, \right] \end{array} \right\}. 
\end{equation} For further consideration, we take $\alpha(X)$ to be a parameterized family of affine functions $\{\, b^{\, T} X + b_0 = w^{\, T} \widehat{X} \,\}$, where $w \triangleq \left( \begin{array}{c} b \\ b_0 \end{array} \right)$ is the parameter to be estimated. {The use of affine functions to approximate $\alpha^\ast(X)$ is motivated by both modeling and computational considerations. Affine functions are easy to interpret, but may suffer from model misspecification; the linearity assumption can be relaxed by using the kernel trick from machine learning, at the price of a more involved computation.} We approximate the expectation in \eqref{eq:complete EV-formulation 2} by the sample average based on the available data $\{ ( X^{\,i},A_i,{\cal Z}_{\, i} ) \}_{i=1}^N$. In order to compute a sparse solution that avoids model overfitting, we add sparsity surrogate functions \cite{ahn2017difference} $P_b$ and $P_{\beta}$ on the parameters $b$ and $\beta$ in the covariate function $\alpha(X)$ and the function $f(X;\theta)$, respectively, each weighted by a positive scalar $\lambda_b^N$ or $\lambda_{\beta}^N$. 
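Before introducing the penalized sample-average problem, it may help to see the criterion assembled in code. The following sketch evaluates an empirical, inverse-propensity-weighted version of \eqref{eq:complete EV-formulation} with the bias term fixed at $+1$, an affine $\alpha(X) = w^{\,T}\widehat{X}$, and the $\ell_1$ norm as the simplest sparsity surrogate; the data-generating model, the propensity $\pi(A|X) \equiv 1/2$, and the penalty weights are illustrative assumptions only:

```python
import numpy as np

rng = np.random.default_rng(3)
N, p = 500, 3
X = rng.normal(size=(N, p))
A = rng.choice([-1.0, 1.0], size=N)                # randomized actions
Z = 2.0 * X[:, 0] * A + 0.5 * rng.normal(size=N)   # synthetic outcomes
pi = 0.5                                           # propensity pi(A|X), assumed known
xi1, xi2 = 0.2, 2.0                                # illustrative utility slopes

def u(t):
    return xi1 * np.maximum(t, 0.0) - xi2 * np.maximum(-t, 0.0)

def objective(beta, w, lam=0.01):
    """Sample average of the IPW criterion with beta_0 fixed at +1."""
    match = A * (X @ beta + 1.0) > 0               # I(A f(X;theta) > 0)
    alpha = X @ w[:-1] + w[-1]                     # affine alpha(X) = w^T (X,1)
    # Z + [alpha - Z + u(Z - alpha)] collapses to alpha + u(Z - alpha)
    value = np.mean((alpha + u(Z - alpha)) * match / pi)
    return value - lam * np.abs(w[:-1]).sum() - lam * np.abs(beta).sum()

good = objective(np.array([5.0, 0.0, 0.0]), np.zeros(p + 1))
bad = objective(np.array([-5.0, 0.0, 0.0]), np.zeros(p + 1))
print(good, bad)
```

On this synthetic model the sign-correct rule scores higher, as expected; the optimization of this criterion is precisely what the remainder of the section addresses.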
The empirical problem is then given by {\small \begin{equation} \label{eq:OCE binaryoptimization problem} \begin{array}{l} \displaystyle\operatornamewithlimits{minimize}_{\substack{\beta \in \mathbb{R}^{p}\\ w \triangleq ( b,b_0 ) \in S}} \left\{ \begin{array}{l} \lambda_b^N \, P_b(b) + \lambda_{\beta}^N \, P_{\beta}(\beta) + \displaystyle{ \frac{1}{N} } \, \displaystyle{ \sum_{i=1}^N } \, {\cal Z}^-_i \, \displaystyle{ \frac{{\rm I\!I}( A_{\,i} \, (\beta^{\, T} X^{\,i} \pm 1) >0 )}{\pi(A_{\,i} \,|\, X^{\,i})} } -\\[0.2in] \, \displaystyle{ \frac{1}{|{\cal N}_+|} } \,\displaystyle{ \sum_{i \in {\cal N}_+} } \, {\cal Z}^+_i \, \displaystyle{ \frac{{\rm I\!I}( A_{\,i} \, (\beta^{\, T} X^{\,i} \pm 1) \geq 0 )}{\pi(A_{\,i} \,|\, X^{\,i})} } +\\ [0.2in] \displaystyle{ \frac{1}{N} } \, \displaystyle{ \sum_{i=1}^N } \, \left[ \, {\cal Z}_i - w^{\, T} \widehat{X}^{\, i} - u( {\cal Z}_i - w^{\, T} \widehat{X}^{\, i} ) \, \right] \, \displaystyle{ \frac{{\rm I\!I}( A_{\,i} \, (\beta^{\, T} X^{\,i} \pm 1) > 0 )}{\pi(A_{\,i} \,|\, X^{\,i})} } \end{array} \right\}, \end{array} \end{equation}} where ${\cal N}_+ \,\triangleq\, \{\,1\leq j\leq N \ | \ {\cal Z}_j > 0 \,\}$ and $S$ is a closed convex set. [In principle, we may add constraints to the parameter $\beta$ also, but we refrain from doing so as it does not add value to the methodology.] 
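The dc characterizations of Lemma~\ref{lemma:epi representation}, which drive the rewriting below, can be verified mechanically. The following sketch checks both equivalences on an exact grid of quarter-integers (chosen so that every floating-point operation involved is exact):

```python
# grid of quarter-integers: exactly representable, so t + s - 1 is exact
grid = [k / 4.0 for k in range(-8, 9)]
for t in grid:
    for s in grid:
        in_epi = t >= (1.0 if s > 0 else 0.0)     # (t,s) in epi I(. > 0)
        in_hypo = t <= (1.0 if s >= 0 else 0.0)   # (t,s) in hypo I(. >= 0)
        assert in_epi == (max(-t, s) - max(t + s - 1.0, 0.0) <= 0.0)
        assert in_hypo == (max(t + s - 1.0, 0.0) - max(-t, s) <= 0.0)
print("both dc characterizations hold on the grid")
```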
Based on Lemma \ref{lemma:epi representation}, the above problem can be further written as {\small \begin{equation} \label{eq:OCE optimization problem 3} \begin{array}{l} \mbox{minimize over $z \, \triangleq \, ( w,\beta,\sigma^\pm )$; \, $\beta \, \in \, \mathbb{R}^{p}$, and $w \, \triangleq \, ( b,b_0 ) \, \in \, S$} \\ [0.1in] \varphi(z) \, \triangleq \, \left\{ \begin{array}{l} \lambda_b^N \, P_b(b) + \lambda_{\beta}^N \, P_{\beta}(\beta) + \displaystyle{ \frac{1}{N} } \, \displaystyle{ \sum_{i=1}^N } \, \displaystyle{ \frac{{\cal Z}^-_i \sigma^-_i}{\pi(A_{\,i} \,|\, X^{\,i})} } - \displaystyle{ \frac{1}{|{\cal N}_+|} } \displaystyle{ \sum_{j \in {\cal N}_+} } \, \displaystyle{ \frac{{\cal Z}^+_j \sigma^+_j}{\pi(A_{\,j} \,|\, X^{\,j})} } \\ [0.2in] + \, \displaystyle{ \frac{1}{N} } \, \displaystyle{ \sum_{i=1}^N } \, \underbrace{\left[ \, {\cal Z}_i - w^{\, T} \widehat{X}^{\, i} - u( {\cal Z}_i - w^{\, T} \widehat{X}^{\, i} ) \, \right] \, \displaystyle{ \frac{\sigma_i^-}{\pi(A_{\,i} \,|\, X^{\,i})} }}_{\mbox{nonconvex}} \end{array} \right\}\\ [0.5in] \mbox{subject to} \\ [5pt] \max(-\sigma_i^-\, , \,A_{\,i} \, (\beta^{\, T} X^{\,i} \pm 1)) - \max(\sigma_i^- + \, A_{\,i} (\beta^{\, T} X^{\,i} \pm 1) - 1,\, 0) \, \leq \, 0, \; 1 \leq i \leq N \\ [0.1in] \max(\sigma_{\,j}^{\,+} + A_{\,j} \, (\beta^{\, T} X^{\,j} \pm 1)-1,\, 0) - \max(-\sigma_{\,j}^{\,+}\,,\, A_{\,j} \, (\beta^{\, T} X^{\,j} \pm 1)) \leq 0, \hspace{1pc} j \in {\cal N}_+, \end{array} \end{equation} } where the constraints are of the difference-of-convex, piecewise affine type. Denote $t_{\,i} \triangleq {\cal Z}_i - w^{\, T} \widehat{X}^{\, i}$ for any $i = 1,\cdots, N$. 
The last term in the objective function $\varphi$ can be further written as \[ \begin{array}{ll} & \left[ \, t_{\,i} - u(t_{\,i}) \, \right] \, \displaystyle{ \frac{\sigma_i^-}{\pi(A_{\,i} \,|\, X^{\,i})} } \\[0.1in] = & \displaystyle{ \frac{1}{2 \, \pi(A_{\,i} \,|\, X^{\,i})} } \, \left\{ \, \left[ \, t_{\,i} - u(t_{\,i}) + \sigma_i^- \, \right]^2 - (\sigma_i^{-})^2 - [\,t_{\,i} - u(t_{\,i})\,]^2 \, \right\}. \end{array} \] Since $t_{\,i} - u(t_{\,i})\geq 0$ and $\sigma_{i}^-\geq 0$, the terms $\left[ \, t_{\,i} - u(t_{\,i}) + \sigma_i^- \, \right]^2$ and $[\,t_{\,i} - u(t_{\,i})\,]^2$ are convex. Hence each product $\left[ \, t_{\,i} - u(t_{\,i}) \, \right] \, \displaystyle{ \frac{\sigma_i^-}{\pi(A_{\,i} \,|\, X^{\,i})} }$ is the difference of convex functions. \vspace{0.1in} Suppose that the utility function and sparsity surrogate functions are as follows: \begin{equation}\label{eq:utility and sparsity} \begin{array}{rll} u(t) & = & \xi_1 \, \max( 0,t ) - \xi_2 \, \max( 0,-t ), \hspace{1pc} \mbox{where}\; 0 \, \leq \, \xi_1 \, < \, 1 \, < \, \xi_2\,; \\[0.1in] P_b(b)& = & \displaystyle{ \sum_{i=1}^p } \, \left[ \, \phi_i^b | \, b_{\,i} \, | - \rho_i^b(b_i) \, \right] , \hspace{1pc} \phi_i^b \, > \, 0, \ i \, = \, 1, \cdots, p\,; \\ [0.2in] P_\beta(\beta) & = & \displaystyle{ \sum_{i=1}^p } \, \left[ \, \phi_i^\beta | \, \beta_{\,i} \, | - \rho_i^\beta(\beta_i) \, \right] , \hspace{1pc} \phi_i^\beta \, > \, 0, \ i \, = \, 1, \cdots, p, \end{array} \end{equation} where $\phi_i^b$ and $\phi_i^{\beta}$ are given constants and $\rho_i^b$ and $\rho_i^{\beta}$ are convex differentiable functions \cite{ahn2017difference}. 
We then have \[ \begin{aligned} \left[ \, t_{\,i} - u(t_{\,i}) \, \right] \, \displaystyle{ \sigma_i^- } \, =\, \displaystyle \frac{1}{2} \, \bigg\{\,\underbrace{(1-\xi_1)\left[ \, \max(0,t_i) + \sigma_i^- \, \right]^2 + (\xi_2-1)\left[ \, \max(0,-t_i) + \sigma_i^- \, \right]^2}_{\mbox{convex}} \\[0.05in] \ - \underbrace{\left[\,(\xi_2-\xi_1)(\sigma_i^-)^2 + (1-\xi_1)\left[\,\max(0,t_i)\,\right]^2 + (\xi_2-1)\left[\,\max(0,-t_i)\,\right]^2\,\right]}_{\mbox{convex and continuously differentiable}}\,\bigg\}, \end{aligned} \] upon noting that $t_i - u(t_i) = (1-\xi_1)\max(0,t_i) + (\xi_2-1)\max(0,-t_i)$. Therefore, under the above setting, the objective function $\varphi$ is the difference of two convex functions, $\varphi_1 - \varphi_2$, with $\varphi_2$ being continuously differentiable. In the next section, we present a dc algorithm for solving such a problem. \section{Solving a Piecewise Affine Constrained DC Program}\label{sec:dca} We consider problem \eqref{eq:OCE optimization problem 3} cast in the following general form: \begin{equation}\label{eq: dc1} \begin{array}{ll} \displaystyle\operatornamewithlimits{minimize}_{x\in X} \hspace{1pc} f(x) \, - \, g(x)\\[0.15in] \mbox{subject to} \\[0.1in] \hspace{1pc} \displaystyle\max_{1\, \leq \, j \, \leq \, J_{1i}}((a^{\,ij})^Tx + \alpha_{\,ij}) \, - \, \max_{1 \, \leq \,j \, \leq \,J_{2i}}((b^{\,ij})^Tx + \beta_{\,ij})\, \leq \, 0, \hspace{1pc} i = 1, \ldots, m, \end{array} \end{equation} where $f:\mathbb{R}^n \to \mathbb{R}$ is a convex function, $g:\mathbb{R}^n\to \mathbb{R}$ is a continuously differentiable convex function with Lipschitz continuous gradient, each $a^{\,ij}$ and $b^{\,ij}$ is an $n$-dimensional vector, each $\alpha_{\,ij}$ and $\beta_{\,ij}$ is a scalar, each $J_{1i}$ and $J_{2i}$ is a positive integer, and $X$ is a polyhedral set. 
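To make this problem class concrete before developing the theory, consider a hypothetical one-dimensional instance: $f(x) = |x|$, $g(x) = x^2$, $X = [-2,2]$, and the single dc constraint $\frac{1}{2} - \max(x, -x) \leq 0$, i.e., $|x| \geq \frac{1}{2}$, whose feasible set is the nonconvex union $[-2,-\frac{1}{2}] \cup [\frac{1}{2}, 2]$. The sketch below runs the scheme formalized in the dc algorithm of this section, namely fix the affine piece of the constraint active at the current iterate, linearize $g$, add a proximal term, and solve the resulting convex subproblem, here in closed form:

```python
# toy instance: minimize |x| - x^2 over [-2, 2] subject to |x| >= 0.5
f = abs                       # convex part
g = lambda x: x * x           # smooth convex part, gradient 2x
c = 1.0                       # proximal parameter

def step(x):
    # fix the affine piece of max(x, -x) that is active at x
    if x >= 0.0:              # piece x >= 0.5, feasible slice [0.5, 2]
        lo, hi = 0.5, 2.0
        xn = x + (2.0 * x - 1.0) / c    # closed-form subproblem minimizer
    else:                     # piece -x >= 0.5, feasible slice [-2, -0.5]
        lo, hi = -2.0, -0.5
        xn = x + (2.0 * x + 1.0) / c
    return min(max(xn, lo), hi)         # clip to the feasible slice

x = 0.6                       # feasible starting point
for _ in range(50):
    x = step(x)
print(x, f(x) - g(x))
```

Here the iterates settle at the boundary point $x = 2$, which is stationary: the feasible directions there satisfy $d \leq 0$ and the directional derivative of $h(x) = x - x^2$ at $x=2$ in such a direction is $-3d \geq 0$.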
Notice that for any $i = 1, \ldots, m$, it holds that \[\begin{array}{ll} & \displaystyle\max_{1 \, \leq \, j \, \leq \, J_{1i}}((a^{\,ij})^Tx + \alpha_{\,ij}) \, - \, \max_{1\, \leq \,j \, \leq \, J_{2i}}((b^{\,ij})^Tx + \beta_{\,ij})\, \leq \, 0 \\[0.2in] \Longleftrightarrow & (a^{\,ij_1})^Tx + \alpha_{\,ij_1} - \displaystyle\max_{1\leq j \leq J_{2i}}((b^{\,ij})^Tx + \beta_{\,ij})\, \leq \, 0, \hspace{1pc} \forall\; 1\leq j_1 \leq J_{1i} \\[0.2in] \Longleftrightarrow & \displaystyle\max_{1\leq j_2 \leq J_{2i}}\left(\,(b^{\,ij_2} - a^{\,ij_1})^Tx + (\beta_{\,ij_2}- \alpha_{\,ij_1})\,\right)\, \geq \, 0, \hspace{1pc} \forall\; 1\leq j_1 \leq J_{1i}. \end{array} \] The above equivalences indicate that by properly redefining $(b^{\,ij}, \beta_{\,ij})$ and the value of $m$, one can write any piecewise linear constrained dc program \eqref{eq: dc1} as the following reverse convex constrained \cite{hillestad1980reverse} dc program: \begin{equation}\label{eq: dc2}\begin{array}{ll} \displaystyle\operatornamewithlimits{minimize}_{x\in X} \hspace{1pc} & h(x)\,\triangleq \, f(x) \, - \, g(x)\\[0.15in] \mbox{subject to} \hspace{1pc} & \, \displaystyle\max_{1\, \leq \, j \, \leq \,J_{\,i}}(\,(\,b^{\,ij}\,)^Tx + \beta_{\,ij}\,)\, \geq \, 0, \hspace{1pc} i = 1, \ldots, m. \end{array} \end{equation} Denote the feasible set of the problem \eqref{eq: dc2} as \[ F\,\triangleq \,\left\{x\in X\mid \displaystyle{ \max_{1\, \leq \, j \, \leq \,J_{\,i}} } \, (\,(\,b^{\,ij}\,)^Tx + \beta_{\,ij}\,) \, \geq \, 0, \hspace{1pc} i = 1, \ldots, m\,\right\}. \] For any $x\in \mathbb{R}^n$, we also denote \[ {\cal I}(x) \,\triangleq\, \left\{\,1\leq i\leq m\mid \, \displaystyle\max_{1\leq j \leq J_{\,i}}(\,(\,b^{\,ij}\,)^Tx + \beta_{\,ij}\,)\, =\, 0\,\right\} \] and \[ \mathcal{A}_{\,i}(x) \,\triangleq\, \displaystyle{ \operatornamewithlimits{argmax}_{1\, \leq \, j \, \leq \, J_{\,i}} } \, \left\{\,(\,b^{\,ij}\,)^Tx + \beta_{\,ij}\,\right\}, \hspace{1pc} i = 1, \ldots, m. 
\] We say that $\bar{x}\in X$ is a B(ouligand)-stationary point \cite{pang2007partially} of the problem \eqref{eq: dc2} if \[ h^{\prime}(\bar{x};d)\,\triangleq\, \operatornamewithlimits{lim}_{\tau\downarrow 0 } \, \displaystyle{ \frac{h(\bar{x} + \tau d) - h(\bar{x})}{\tau} } = f^{\prime}(\bar{x};d) - g^{\prime}(\bar{x};d)\,\geq\, 0, \hspace{1pc} \forall\; d\in \mathcal{T}_{\,B\,}(\bar{x};F), \] where $\mathcal{T}_{\,B\,}(\bar{x};F)$ is the Bouligand tangent cone of $F$ at $\bar{x}\in F$, i.e., (see, e.g.,~\cite[Proposition~3]{pang2016computing}), \[\begin{array}{rl} \mathcal{T}_{\,B\,}(\bar{x};F) \, \triangleq & \left\{\,d\in \mathbb{R}^n\, \mid \, d \, = \, \displaystyle{ \lim_{\nu\to\infty} } \, \displaystyle{ \frac{(x^{\,\nu} - \bar{x} )}{\tau_{\nu}}, } \ \mbox{where $F\ni x^{\,\nu}\to \bar{x}$ and $\tau_{\nu}\downarrow 0$} \right\}\\[0.2in] = & \left\{\,d\in \mathcal{T}_{\,B\,}(\bar{x};X) \, \mid \, \displaystyle\max_{j \in \mathcal{A}_{\,i}(\bar{x})}\, (b^{\,ij})^{\,T}d\geq \,0, \;\forall\; i \in {\cal I}(\bar{x})\, \right\} \\ [0.2in] = & \displaystyle{ \bigcap_{i \in {\cal I}(\bar{x})} } \, \displaystyle{ \bigcup_{j \in {\cal A}_i(\bar{x})} } \, \left\{\,d\in \mathcal{T}_{\,B\,}(\bar{x};X) \, \mid \, (b^{\,ij})^{\,T}d \, \geq \,0 \, \right\}. \end{array} \] [Since $X$ is assumed to be polyhedral, $\mathcal{T}_{\,B\,}(\bar{x};X)$ is a polyhedral cone.] 
A weaker concept than B-stationarity is that of weak B-stationarity, which pertains to a feasible solution $\bar{x}\in F$ such that $h^{\prime}(\bar{x};d)\geq 0$ for any $d\in \mathbb{R}^n$ satisfying \[ \begin{array}{rl} d\in \mathcal{T}^{\,\rm weak}_{\,B\,}(\bar{x};F) \,\triangleq & \left\{ \, d\in \mathcal{T}_{\,B\,}(\bar{x};X) \, \mid \, \displaystyle\min_{j \in \mathcal{A}_{\,i}(\bar{x})}\, (b^{\,ij})^{\,T}d\geq \,0, \;\forall\; i \in {\cal I}(\bar{x})\, \right\} \\ [0.2in] = & \displaystyle{ \bigcap_{i \in {\cal I}(\bar{x})} } \, \displaystyle{ \bigcap_{j \in {\cal A}_i(\bar{x})} } \, \left\{\,d\in \mathcal{T}_{\,B\,}(\bar{x};X) \, \mid \, (b^{\,ij})^{\,T}d \, \geq \,0 \, \right\}. \end{array} \] Unlike $\mathcal{T}_{\,B\,}(\bar{x};F)$, which is not necessarily convex, $\mathcal{T}_{\,B\,}^{\, \rm weak}(\bar{x};F)$ is a polyhedral cone. It is known from \cite[Chapter 2, Proposition 1.1(c) \& Exercise 9.10]{clarke1998nonsmooth} that \[ \mathcal{T}_{C}(\bar{x}; F)\,\subseteq\, \mathcal{T}_{B}^{\,\rm weak}(\bar{x};F)\,\subseteq \,\mathcal{T}_{B}(\bar{x};F), \] where $\mathcal{T}_{\,C\,}(\bar{x}; F)$ denotes the Clarke tangent cone of $F\subseteq\mathbb{R}^n$ at $\bar{x}$, i.e., $d\in \mathcal{T}_{\,C\,}(\bar{x}; F)$ if for every sequence $\{x^{\,i}\}\subseteq F$ converging to $\bar{x}$ and positive scalar sequence $\{t_{\,i}\}$ decreasing to $0$, there exists a sequence $\{d^{\,i}\}\subseteq \mathbb{R}^n$ converging to $d$ such that $x^{\,i} + t_{\,i}\,d^{\,i} \in F$ for all $i$ \cite[Chapter 2, Proposition 5.2]{clarke1998nonsmooth}. In order to better understand the above two stationarity concepts in the context of the piecewise polyhedral structure of the feasible set $F$ and to motivate the algorithm to be presented afterward for solving the problem \eqref{eq: dc2}, we first introduce a further stationarity concept, which we call A-stationarity (A for Algorithm). 
Specifically, we note that $F$ is the union of finitely many polyhedra: \[ F \, = \, \displaystyle{ \bigcup_{(j_1, \cdots, j_m)} } \, \left\{ \, x \, \in \, X \, \mid \, (\, b^{\,ij_i} \,)^Tx + \beta_{ij_i} \, \geq \, 0, \hspace{1pc} i = 1, \ldots, m\,\right\}, \] where the union ranges over all tuples $\{ j_i \}_{i=1}^m$ with each $j_i \in \{ 1, \cdots, J_i \}$ for all $i$. Given a vector $\bar{x} \in F$, let ${\cal J}(\bar{x})$ be the family of such tuples such that $j_i \in {\cal A}_i(\bar{x})$ for all $i = 1, \cdots, m$. We say that $\bar{x} \in F$ is {\sl A-stationary} if there exists a tuple $\bar{j}(\bar{x}) = \{ \, \bar{j}_i \, \}_{i=1}^m \in {\cal J}(\bar{x})$ such that \[ h^{\, \prime}(\bar{x};d) \, \geq \, 0, \ \forall \, d \, \in {\cal T}_A^{\bar{j}(\bar{x})}(\bar{x};F) \, \triangleq \, \left\{ \, d \, \in \, {\cal T}_{\,B\,}(\bar{x};X) \, \mid \, ( \, b^{i \bar{j}_i} \, )^T d \, \geq \, 0, \ \forall \, i \, \in \, {\cal I}(\bar{x}) \, \right\}. \] \begin{lemma}\label{lemma:weak B} Let $\bar{x}\in F$ be given. 
Consider the following statements all pertaining to the problem \eqref{eq: dc2}:\\[0.05in] (a) $\bar{x}$ is B-stationary;\\[0.05in] (b) $\bar{x}$ is A-stationary;\\[0.05in] (c) there exists a tuple $\bar{j}(\bar{x}) = \{ \, \bar{j}_i \, \}_{i=1}^m \in {\cal J}(\bar{x})$ such that \begin{equation} \label{eq:A-stationary in terms of min} \bar{x} \, \in \, \displaystyle{ \operatornamewithlimits{\mbox{argmin}}_{x\in X} } \, \left\{ \, f(x) - [\,g(\bar{x}) + \nabla g(\bar{x})^T(x-\bar{x})\,]\,\mid \, (b^{\,i\,\bar{j}_{\,i}})^Tx + \beta_{\,i\,\bar{j}_{\,i}}\geq 0, \; i \in {\cal I}(\bar{x}) \, \right\}; \end{equation} (d) there exists a tuple $\bar{j}(\bar{x}) = \{ \, \bar{j}_i \, \}_{i=1}^m \in {\cal J}(\bar{x})$ such that \[ \bar{x} \in \, \displaystyle{ \operatornamewithlimits{\mbox{argmin}}_{x\in X} } \left\{ \, f(x) - [\,g(\bar{x}) + \nabla g(\bar{x})^T(x-\bar{x})\,]\,\mid \, (b^{\,i\,\bar{j}_{\,i}})^Tx + \beta_{\,i\,\bar{j}_{\,i}}\geq 0, \; i = 1, \cdots, m \, \right\}; \] (e) $\bar{x}$ is weak B-stationary.\\ It holds that (a) $\Rightarrow$ (b) $\Leftrightarrow$ (c) $\Leftrightarrow$ (d) $\Rightarrow$ (e). \end{lemma} \begin{proof} (a) $\Rightarrow$ (b). This is because ${\cal T}_A^{\bar{j}(\bar{x})}(\bar{x};F) \subseteq {\cal T}_B(\bar{x};F).$ \vspace{0.1in} (b) $\Rightarrow$ (e). This is because $\mathcal{T}^{\,\rm weak}_{\,B\,}(\bar{x};F) \subseteq {\cal T}_A^{\bar{j}(\bar{x})}(\bar{x};F)$. \vspace{0.1in} (b) $\Leftrightarrow$ (c). This is clear because the condition $h^{\, \prime}(\bar{x};d) \geq 0$ for all $d \in {\cal T}_A^{\bar{j}(\bar{x})}(\bar{x};F)$ is exactly the first-order optimality condition of the convex program in (\ref{eq:A-stationary in terms of min}). \vspace{0.1in} (c) $\Rightarrow$ (d). This is clear because there are more constraints in the feasible region of the optimization problem in (d) than those in (c). \vspace{0.1in} (d) $\Rightarrow$ (c). 
Let $x \in X$ satisfy $(b^{\,i\,\bar{j}_{\,i}})^Tx + \beta_{\,i\,\bar{j}_{\,i}}\geq 0$ for all $i \in {\cal I}(\bar{x})$. Since $(b^{\,i\,\bar{j}_{\,i}})^T\bar{x} + \beta_{\,i\,\bar{j}_{\,i}} > 0$ for all $i \not\in {\cal I}(\bar{x})$, it follows that for all $\tau \in (0,1)$ sufficiently close to $1$, the vector $x^{\tau} \triangleq x + \tau ( \bar{x} - x )$ satisfies $(b^{\,i\,\bar{j}_{\,i}})^Tx^{\tau} + \beta_{\,i\,\bar{j}_{\,i}} \geq 0$ for all $i = 1, \cdots, m$. Hence, \[ \begin{array}{lll} f(\bar{x}) - g(\bar{x}) & \leq & f(x^{\tau}) - \left[\,g(\bar{x}) + \nabla g(\bar{x})^T(x^{\tau} - \bar{x})\, \right] \hspace{1pc} \mbox{by (d)} \\ [0.1in] & \leq & \tau \, \left[ \, f(\bar{x}) - g(\bar{x}) \, \right] + ( \, 1 - \tau \, ) \, \left[ \, f(x) - \left[ \,g(\bar{x}) + \nabla g(\bar{x})^T( \, x - \bar{x} \, ) \, \right] \,\right], \end{array} \] which yields \[ f(\bar{x}) - g(\bar{x}) \, \leq \, f(x) - \left[ \,g(\bar{x}) + \nabla g(\bar{x})^T( \, x - \bar{x} \, ) \, \right], \] establishing (c). \end{proof} \vspace{0.1in} In the following, we propose a dc algorithm to compute an A-stationary point of \eqref{eq: dc2}. The algorithm takes advantage of the reverse convex constraints of the problem in that once initiated at a feasible vector $x^0 \in F$, the algorithm generates a feasible sequence $\{ x^{\nu} \} \subset F$; see Step~1 below. \noindent\makebox[\linewidth]{\rule{\textwidth}{1pt}} \noindent A dc algorithm for solving the reverse convex constrained dc program \eqref{eq: dc2}. \noindent\makebox[\linewidth]{\rule{\textwidth}{1pt}} \noindent {\bf Initialization.} Given are a scalar $c > 0$ and an initial point $x^{\, 0} \in F$; set $\nu = 0$. \vspace{0.1in} {\bf Step 1.} For each $i = 1, \cdots, m$, choose an index $j_{\,i}^{\,\nu}\in \mathcal{A}_{\,i\,}(x^{\,\nu})$. 
Let $x^{\,\nu+1}$ be the unique optimal solution of the convex program: \begin{equation}\label{eq:dc sub} \begin{array}{ll} \displaystyle{ \operatornamewithlimits{\mbox{minimize}}_{x \in X} } & \widehat{h}_{\,c\,}(x;x^{\,\nu}) \, \triangleq \, f(x) - [ \, g(x^{\,\nu}) + (\nabla g(x^{\,\nu}))^{\,T}(x-x^{\,\nu})\,] \\[0.1in] &\hspace{1pc} \hspace{1pc} \quad \quad \quad \quad +\underbrace{\displaystyle\frac{c}{2}\,\|x - x^{\,\nu}\|^2}_{\mbox{\small proximal regularization}} \\[0.1in] \mbox{subject to} & (b^{\,i\,j_{\,i}^{\,\nu}})^{\,T} x+\beta_{\,i\,j_{\,i}^{\,\nu}} \, \geq \, 0,\hspace{1pc} i = 1,\ldots, m. \end{array} \end{equation} {\bf Step 2.} If $x^{\, \nu+1}$ satisfies a prescribed stopping rule, terminate; otherwise, return to Step 1 with $\nu$ replaced by $\nu+1$. \hfill $\Box$ \noindent\makebox[\linewidth]{\rule{\textwidth}{1pt}} An enhanced version of the above algorithm that requires solving multiple subproblems for all indices $j_{\,i}^{\,\nu}$ in a so-called ``$\varepsilon$-argmax set'' has been suggested in \cite{pang2016computing}. For this enhanced algorithm, it can be shown that every accumulation point of the generated sequence, if one exists, is a B-stationary point. Although such an algorithm has theoretical benefits, it may not be efficient when applied to the empirical CDE problem \eqref{eq:OCE optimization problem 3}, because the number of reverse convex inequalities in the constraint set is proportional to the number of samples, making the ``$\varepsilon$-argmax set'' potentially very large, so that many subproblems may need to be solved at every iteration. There is also a probabilistic variant of the enhanced algorithm that solves only one convex subproblem of the same type as (\ref{eq:dc sub}) per iteration. The only difference from the presented deterministic algorithm is that the tuple $\{ \, \bar{j}_i^{\nu} \, \}_{i=1}^m$ is chosen from the $\varepsilon$-argmax sets randomly with positive probabilities. 
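To make Steps 1 and 2 concrete, the following self-contained sketch runs the iteration on a hypothetical one-dimensional instance with $f(x) = x^2$, $g(x) = 2\sqrt{1+x^2}$, $X = \mathbb{R}$, and the single reverse convex constraint $\max(x-1,\, -x-1) \geq 0$ (i.e., $|x| \geq 1$). The instance, and the closed-form solution of the subproblem (a clip onto the selected half-line, replacing a QP solve), are our illustrative assumptions and not part of the paper's empirical problem.

```python
import math

# Illustrative 1-D instance (NOT the paper's empirical problem):
#   minimize  h(x) = f(x) - g(x),  f(x) = x^2,  g(x) = 2*sqrt(1 + x^2),
#   subject to max(x - 1, -x - 1) >= 0, i.e., |x| >= 1.
# The two affine pieces play the role of the max-representation in the text.

def grad_g(x):
    return 2.0 * x / math.sqrt(1.0 + x * x)

def h(x):
    return x * x - 2.0 * math.sqrt(1.0 + x * x)

def dc_step(x_nu, c):
    # Step 1: pick a piece active at x_nu (the argmax index): for x_nu >= 0
    # the piece (x - 1) attains the max, otherwise the piece (-x - 1) does.
    # Step 2: minimize the strongly convex surrogate
    #   f(x) - [g(x_nu) + g'(x_nu)(x - x_nu)] + (c/2)(x - x_nu)^2
    # over the retained half-line; in 1-D this is a clip of the
    # unconstrained minimizer onto that half-line.
    x_free = (grad_g(x_nu) + c * x_nu) / (2.0 + c)
    return max(x_free, 1.0) if x_nu >= 0 else min(x_free, -1.0)

def dc_algorithm(x0, c=1.0, tol=1e-10, max_iter=200):
    x, values = x0, [h(x0)]
    for _ in range(max_iter):
        x_next = dc_step(x, c)
        values.append(h(x_next))
        if abs(x_next - x) < tol:
            return x_next, values
        x = x_next
    return x, values

x_star, values = dc_algorithm(3.0)
print(x_star)   # 1.0
```

Starting from the feasible point $x^0 = 3$, every iterate remains feasible ($x^{\nu} \geq 1$) and the objective values decrease monotonically to the stationary value $1 - 2\sqrt{2}$ attained at $x = 1$, mirroring the descent property established in the convergence proof below.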
Almost sure convergence of the probabilistic algorithm to a B-stationary point can be established. Since the above (deterministic) algorithm has not been formally introduced in the literature, we provide below a (subsequential) convergence result to an A-stationary solution of the problem \eqref{eq: dc2}. It is worth mentioning that each $x^{\nu+1}$ is feasible to the subprogram (\ref{eq:dc sub}) at iteration $\nu+1$ because \[ (b^{\,i\,j_{\,i}^{\,\nu+1}})^{\,T} x^{\nu+1} +\beta_{\,i\,j_{\,i}^{\,\nu+1}} \, = \, \displaystyle{ \max_{1 \leq j \leq J_i} } \, (\,(\,b^{\,ij}\,)^Tx^{\nu+1} + \beta_{\,ij}\,) \, \geq \, (b^{\,i\,j_{\,i}^{\,\nu}})^{\,T} x^{\nu+1} +\beta_{\,i\,j_{\,i}^{\,\nu}} \geq 0. \] This inequality also shows that $x^{\nu+1} \in F$ for all $\nu$. The following theorem asserts the subsequential convergence of the sequence generated by the above dc algorithm to an A-stationary point of problem \eqref{eq: dc2}. \begin{theorem}\label{thm:subsequential convergence} Suppose that $h$ is bounded below on the polyhedral set $X$. Then any accumulation point $x^{\,\infty}$ of the sequence $\left\{ x^{\, \nu} \right\}$ generated by the dc algorithm, if it exists, is an A-stationary point of \eqref{eq: dc2}. \end{theorem} \begin{proof} The sequence of function values $\{h(x^{\,\nu})\}$ decreases since \[\begin{array}{rl} & h(x^{\,\nu+1}) + \displaystyle\frac{c}{2}\|x^{\,\nu+1} - x^{\,\nu}\|^2 \\[0.1in] \leq & \widehat{h}_{\,c\,}(x^{\,\nu+1}; x^{\,\nu}) \hspace{1pc} \mbox{(by the convexity of $g$)}\\[0.1in] \leq & h(x^{\,\nu}) \ \mbox{(by the optimality of $x^{\,\nu+1}$ and the feasibility of $x^{\,\nu}$ to \eqref{eq:dc sub})}. \end{array} \] Since $h$ is bounded below on $X$, we may derive that $\displaystyle\lim_{\nu\to \infty}\|x^{\,\nu+1} - x^{\,\nu}\| = 0$. 
By the definition of the point $x^{\,\nu+1}$, we obtain that for all $x\in X$ satisfying $(b^{\,i\,j_{\,i}^{\,\nu}})^Tx + \beta_{\,i\,j_{\,i}^{\,\nu}}\geq 0$, $i = 1, \ldots, m$, \begin{equation}\label{ineq:dca} \begin{array}{l} f(x^{\,\nu+1}) - \left[\,g(x^{\,\nu}) + \nabla g(x^{\,\nu})^T(x^{\,\nu+1}-x^{\,\nu})\,\right] + \displaystyle\frac{c}{2}\,\|x^{\,\nu+1} - x^{\,\nu}\|^2 \\[0.15in] \hspace{1pc} \leq \, f(x) - \left[\,g(x^{\,\nu}) + \nabla g(x^{\,\nu})^T(x-x^{\,\nu})\,\right] + \displaystyle\frac{c}{2}\,\|x - x^{\,\nu}\|^2. \end{array} \end{equation} Let $\{x^{\,\nu+1}\}_{\nu\in \kappa}$ be a subsequence of $\{x^{\,\nu}\}$ that converges to $x^{\,\infty}$. Then $x^{\,\infty}\in F$. Moreover, since $\displaystyle\lim_{\nu\to \infty}\|x^{\,\nu+1} - x^{\,\nu}\| = 0$, the subsequence $\{x^{\,\nu}\}_{\nu\in \kappa}$ also converges to $x^{\,\infty}$. Since each $\mathcal{A}_{\,i\,}(x^{\,\nu})$ is finite, we may assume without loss of generality that the selected $j_{\,i}^{\,\nu}\in \mathcal{A}_{\,i\,}(x^{\,\nu})$ are independent of $\nu$ for any $i= 1,\ldots, m$ on this subsequence, i.e., there exists $\bar{j}_{\,i}$ such that $\bar{j}_{\,i} = j_{\,i}^{\,\nu}$ for all $i = 1, \ldots, m$ and all $\nu \in \kappa$. For all $x\in X$ satisfying $(b^{\,i\,\bar{j}_i})^Tx + \beta_{\,i\,\bar{j}_i}\geq 0$ for $i = 1, \ldots, m$, the inequality \eqref{ineq:dca} holds for every $\nu \in \kappa$. Taking the limit as $\nu \, (\in \kappa)\to +\infty$, we obtain that $\bar{j}_{\,i}\in \mathcal{A}_{\,i\,}(x^{\,\infty})$ for $i = 1, \ldots, m$ (by the continuity of the finitely many affine pieces and of their pointwise maximum), and for all $x\in X$ satisfying $(b^{\,i\,\bar{j}_i})^Tx + \beta_{\,i\,\bar{j}_i}\geq 0$, $i = 1, \ldots, m$, \[ f(x^{\,\infty}) - g(x^{\,\infty}) \,\leq \, f(x) - [\,g(x^{\,\infty}) + \nabla g(x^{\,\infty})^T(x-x^{\,\infty})\,] + \displaystyle\frac{c}{2}\,\|x - x^{\,\infty}\|^2. \] Since the feasible set of the latter family of inequalities is convex and contains $x^{\,\infty}$, we may apply this inequality to the point $x^{\,\infty} + t\,(x - x^{\,\infty})$ with $t \in (\,0,1\,]$; invoking the convexity of $f$, dividing by $t$, and letting $t \downarrow 0$ removes the quadratic term, so that for all such $x$, \[ f(x^{\,\infty}) - g(x^{\,\infty}) \,\leq \, f(x) - [\,g(x^{\,\infty}) + \nabla g(x^{\,\infty})^T(x-x^{\,\infty})\,], \] which, by Lemma \ref{lemma:weak B}, yields that $x^{\,\infty}$ is an A-stationary point of the problem \eqref{eq: dc2}. 
\end{proof} \subsection{Solving the subproblem of the dc algorithm} \label{sec:QP sub} Given $\bar{z} \triangleq (\bar{w}, \bar{\beta}, \bar{\sigma}^\pm)$ and a positive constant $c > 0$, the strongly convex objective of the subproblem of the dc algorithm in Step 1 for solving the problem \eqref{eq:OCE optimization problem 3} with $u$, $P_a$ and $P_b$ given in \eqref{eq:utility and sparsity} can be essentially written as \[\small \begin{array}{l} \lambda_a^N \, \displaystyle{ \sum_{i=1}^p } \, \left[ \, \phi_i^a | \, a_{\,i} \, | - \displaystyle{ \frac{d \rho_i^a(\bar{a}_i)}{da_i} } \, ( \, a_i - \bar{a}_i \, ) \, \right] + \lambda_{\beta}^N \, \displaystyle{ \sum_{i=1}^p } \, \left[ \, \phi_i^{\beta} | \, \beta_{\,i} \, | - \displaystyle{ \frac{d \rho_i^{\beta}(\bar{\beta}_i)}{d\beta_i} } \, ( \, \beta_i - \bar{\beta}_i \, ) \, \right] +\\[0.2in] \displaystyle{ \frac{1}{N} } \, \displaystyle{ \sum_{i=1}^N } \, \displaystyle{ \frac{{\cal Z}^-_i \sigma^-_i}{\pi(A_{\,i} \,|\, X^{\,i})} } - \displaystyle{ \frac{1}{|{\cal N}_+|} } \displaystyle{ \sum_{i \in {\cal N}_+} } \, \displaystyle{ \frac{{\cal Z}^+_i \sigma^+_i}{\pi(A_{\,i} \,|\, X^{\,i})} } + \displaystyle{ \frac{c}{2}} \, \displaystyle{ ||z - \bar{z}||^2} + \\ [0.2in] \displaystyle{ \frac{1}{2 \, \pi(A_{\,i} \,|\, X^{\,i})} } \, \, \bigg\{(1-\xi_1)\left[ \, \max(0,t_i) + \sigma_i^- \, \right]^2 + (1+\xi_2)\left[ \, \max(0,-t_i) + \sigma_i^- \, \right]^2 - \\[0.2in] 2(2-\xi_1+\xi_2)\bar{\sigma}^-_i(\sigma_i^- - \bar{\sigma}^-_i) - 2\left[\,(1-\xi_1)\max(0,\bar{t}_i) - (1+\xi_2)\max(0,-\bar{t}_i)\,\right]\,(t_i - \bar{t_i})\bigg\}, \end{array} \] where $z \triangleq ( w,\beta,\sigma^\pm )$ with $\beta \in \mathbb{R}^{p}$, $w \triangleq ( a,b ) \in S$, $\sigma^- \in \mathbb{R}^{N}$ and $\sigma^+ \in \mathbb{R}^{|{\cal N}_+|}$. 
The above objective function involves the convex, non-differentiable terms $| a_i |$, $| \beta_i |$, $\left[ \, \max(0,t_i) + \sigma_i^- \, \right]^2$, and $\left[ \, \max(0,-t_i) + \sigma_i^- \, \right]^2$; the latter two squared terms also make the objective non-separable in the $w$ and $\sigma^-$ variables. All these features make the linear inequality constrained subproblem seemingly complicated. One way to solve this subproblem is via the dual semismooth Newton approach, as discussed in a recent paper \cite{cui2018composite}. In fact, by introducing the auxiliary variables \[\left\{\begin{array}{ll} t_i^+ \, = \, \max(t_i, 0), & t_i^- \, = \, \max(-t_i, 0),\\[0.1in] a_i^+ \, = \, \max(a_i, 0), & a_i^- \, = \, \max(-a_i, 0),\\[0.1in] \beta_i^+ \, = \, \max(\beta_i, 0), & \beta_i^- \, = \, \max(-\beta_i, 0), \end{array}\right. \] we may write \[ {\cal Z}_i - w^{\, T} \widehat{X}^{\, i} \, = \, t_i = t_i^+ - t_i^-, \hspace{1pc} |a_i| = a_i^+ + a_i^-, \hspace{1pc} |\beta_i| = \beta_i^+ + \beta_i^-. \] Therefore, an alternative approach for solving \eqref{eq:OCE optimization problem 3} is to transform it into a standard quadratic programming problem with the additional variables $(t_i^+, t_i^-, a_i^+, a_i^-,\beta_i^+, \beta_i^-)$ so that it can be solved by many efficient quadratic programming solvers. {In terms of statistical consistency, as long as the tuning parameters $\lambda_a^{\, N}$ and $\lambda_\beta^{\, N}$ go to $0$ as $N$ goes to infinity, the minimizer of the empirical objective function \eqref{eq:OCE binaryoptimization problem} might converge to the minimizer of the corresponding population problem under some regularity conditions (\cite{van2000asymptotic}). If the tuning parameters are allowed to go to 0 faster than $\frac{1}{\sqrt{n}}$, then the convergence rate of the empirical minimizers may be $\frac{1}{\sqrt{n}}$ under some regularity conditions. 
Similar ideas could be borrowed from \cite{knight2000asymptotics}, although their considered settings are different from ours. The convergence results in our settings are more complicated than in those standard cases since the empirical loss function here is non-convex and non-smooth. } \section{Numerical Experiments} \label{sec:Numerical} In this section, we demonstrate the effectiveness of the proposed IDR-CDE in finding optimal IDRs via three synthetic examples. The subproblem of the dc algorithm, being equivalent to a quadratic programming problem, is solved by the commercial solver Gurobi with an academic license. All the numerical results are run in Matlab on Mac OS X with 2.5 GHz Intel Core i7 and 16 GB RAM. We use the piecewise affine utility function given by \eqref{CVaR} with $\xi_1 = 0, \xi_2 = 0.5$ in all the experiments, which is equivalent to estimating the optimal IDR that maximizes $\text{CVaR}_{0.5}({\cal Z})$. In practice, users can choose their own utility functions and values $\xi_1$, $\xi_2$ based on the specific problem settings. If one believes that inappropriate decisions may carry high risks and wants to control the risk for higher-risk individuals, it would be better to use a robust utility function such as the piecewise affine one. We consider a binary-action space in a randomized study with $\pi(A_{\,i} = \pm 1 \,|\, X_i) = 0.5$. 
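Before describing the experimental protocol, we note that the auxiliary-variable splitting of Section \ref{sec:QP sub} (writing $a = a^+ - a^-$ with $a^\pm \geq 0$ so that $|a| = a^+ + a^-$ becomes linear) can be sanity-checked on a scalar toy problem. The instance below and the projected-gradient solver, which stands in for the QP solver used in our experiments, are purely illustrative assumptions.

```python
# Toy instance: minimize (a - 1)^2 + lam*|a| over a in R, with lam = 1.
# Splitting a = p - q with p, q >= 0 turns |a| into p + q and yields the
# smooth bound-constrained problem
#   minimize (p - q - 1)^2 + lam*(p + q)  subject to p >= 0, q >= 0,
# solved here by projected gradient descent.  The closed-form answer via
# soft-thresholding is a* = max(1 - lam/2, 0) = 0.5.

lam = 1.0
p, q = 1.0, 1.0              # feasible starting point
step = 0.2                   # step < 2/L with Lipschitz constant L = 4
for _ in range(2000):
    r = p - q - 1.0          # residual of the quadratic term
    gp = 2.0 * r + lam       # partial derivative w.r.t. p
    gq = -2.0 * r + lam      # partial derivative w.r.t. q
    p = max(p - step * gp, 0.0)   # gradient step, then project onto p >= 0
    q = max(q - step * gq, 0.0)   # gradient step, then project onto q >= 0
a = p - q
print(a)   # ≈ 0.5
```

The recovered minimizer matches the soft-thresholding solution, confirming that the splitting introduces no spurious optima: at any minimizer of the split problem, at most one of $p, q$ is nonzero, so $p + q = |p - q|$.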
All the tuning parameters such as $\lambda_a^N$ and $\lambda_{\beta}^N$ are selected via $10$-fold cross-validation that maximizes the average over the folds of the empirical criterion $\widehat{\cal O}_{(u, {\cal F})}^{\,\widehat{d}}({\cal Z})$, defined as \[ \widehat{\cal O}_{(u, {\cal F})}^{\, \widehat{d}}({\cal Z}) \,\triangleq \, \displaystyle{ \frac{\displaystyle{ \sum_{i \in {\cal N}} } \, \left[ \, \widehat{\alpha}(X_i) + u({\cal Z}_i - \widehat{\alpha}(X_i)) \, \right] \, \displaystyle{ \frac{{\rm I\!I}(A_i = \widehat{d}(X_i))}{\pi(A_i | X_i)}}}{\displaystyle{ \sum_{i \in {\cal N}} } \, \displaystyle{ \frac{{\rm I\!I}(A_i = \widehat{d}(X_i))}{\pi(A_i | X_i)} }}} \, . \] Specifically, we divide the training data into 10 groups. For each fold, we estimate the optimal IDR $\widehat{d}(X)$ using 9 groups of the data (the training set) for a pre-specified series of tuning parameters $\lambda_a^N$ and $\lambda_{\beta}^N$ and then compute $\widehat{\cal O}_{(u, {\cal F})}^{\, \widehat{d}}({\cal Z})$ on the remaining group of data (the test set). The best tuning parameters are the ones that lead to the largest values of $\widehat{\cal O}_{(u, {\cal F})} ^{\, \widehat{d}}({\cal Z})$. The so-obtained parameters are then employed to re-compute the optimal IDR using the entire set of data. We compare our approach with three existing methods under the expected-value function framework ${\rm I\!E}^{\, d}[{\cal Z}]$. The first one is a model-based method called $l_1$-PLS \cite{qian2011performance} that first fits a penalized least-squares regression with covariate function $(1, X, A, X \circ A)$ on ${\cal Z}$ to estimate ${\rm I\!E}[{\cal Z} | X, A = a]$, and then selects the action with the largest ${\rm I\!E}\,[\,{\cal Z} \,|\, X, A = a\,]$, where $X \circ A$ denotes the element-wise product. 
The second one is a classification-based method called residual weighted learning (RWL) \cite{zhou2017residual} that consists of two steps: (1) fitting a least-squares regression on ${\cal Z}_i$ with covariates $\widehat{X}_i$ to compute the residual $r_i$ for each data point in order to remove the main effect; (2) applying the support vector machine with truncated loss to compute the optimal IDR with each data point weighted by $r_i$. The third one is the direct learning (DLearn) method \cite{qi2017} that lies between the model-based and the classification-based methods, where the optimal IDR is directly found by weighted penalized least-squares regression on ${\cal Z} A$ with covariates $\widehat{X}$, based on the fact that \[{\rm I\!E}\,[\,{\cal Z} \,|\, X, A = 1\,] - {\rm I\!E}\,[\,{\cal Z} \,|\, X, A = -1\,] = {\rm I\!E}\,\left[ \, \displaystyle{ \frac{{\cal Z}\, A}{\pi(A | X)} } \, | \, X \, \right].\] The simulation data are generated by the model \begin{equation*} {\cal Z} = m(X) + h(X)A + \varepsilon, \end{equation*} where $m(X)$ is the main effect, $h(X)$ is the interaction effect with treatment $A$, and $\varepsilon$ is the random error. We consider the same main-effect and interaction-effect functions across all scenarios, $m(X) =1+X_1+X_2$ and $h(X) = 0.5+X_1-X_2+X_3$ respectively, but various types of {asymmetric} error distributions under three simulation scenarios:\\[0.05in] (1) $\log(\varepsilon)$ follows a normal distribution with mean 0 and standard deviation 2;\\[0.05in] (2) the random error $\varepsilon$ follows a Weibull distribution with scale parameter $0.5$ and shape parameter $0.3$;\\[0.05in] (3) $\log(\varepsilon)$ follows a normal distribution with mean 0 and standard deviation $2|1+X_1+X_2|$. The above scenarios feature heavy right-tailed error distributions, which test the robustness of the different methods. 
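For concreteness, the generating model and the inverse-probability-weighted evaluation of a fixed decision rule can be sketched as follows. The helper functions and the noiseless check are our own illustrative code; our actual experiments are run in Matlab with Gurobi, as described above.

```python
import numpy as np

# Sketch of the simulation design (Scenario 1) and the inverse-probability
# weighted value estimator for a fixed rule d.  The generating model follows
# the text:
#   Z = m(X) + h(X)*A + eps,  m(X) = 1 + X1 + X2,  h(X) = 0.5 + X1 - X2 + X3,
# with A = +/-1 each with probability 1/2 and log(eps) ~ N(0, 2^2).

rng = np.random.default_rng(0)

def simulate(n, p=10, noise=True):
    X = rng.uniform(-1.0, 1.0, size=(n, p))
    A = rng.choice([-1.0, 1.0], size=n)          # randomized study, pi = 0.5
    m = 1.0 + X[:, 0] + X[:, 1]                  # main effect
    h = 0.5 + X[:, 0] - X[:, 1] + X[:, 2]        # interaction effect
    eps = np.exp(rng.normal(0.0, 2.0, size=n)) if noise else 0.0
    return X, A, m + h * A + eps

def ipw_value(Z, A, d, prop=0.5):
    # weighted average of Z over samples whose observed action matches d(X)
    w = (A == d) / prop
    return float(np.sum(Z * w) / np.sum(w))

X, A, Z = simulate(4000, noise=False)            # noiseless sanity check
d_opt = np.sign(0.5 + X[:, 0] - X[:, 1] + X[:, 2])   # the known optimal rule
print(ipw_value(Z, A, d_opt), ipw_value(Z, A, -d_opt))
```

In the noiseless check, the known optimal rule attains a strictly larger estimated value than its reverse, since $h(X)\,\mathrm{sign}(h(X)) = |h(X)| \geq -|h(X)|$ pointwise.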
In particular, the log-normal distribution is frequently used in the finance area, the Weibull distribution is commonly considered in survival analysis of clinical trials, and the third scenario considers a heterogeneous error distribution depending on covariates. {In all our simulation studies, the error distributions are asymmetric}. The training sample size is set to be 100 and 200, and the number of covariates $p$ is fixed to be $10$. Each covariate is generated by uniform distribution on $[-1,1]$. In Table~\ref{tab:time}, we list the average computational time and the iteration numbers of the dc algorithm for solving the problem \eqref{eq:OCE optimization problem 3} with $\lambda_a^N=0.1$ and $\lambda_{\beta}^N=0.1$ over 100 simulations. One can see that the proposed algorithm is very efficient and robust for solving the empirical IDR problem. \begin{table}[h] \centering \small \scalebox{1}{ \begin{tabular}{@{}lccccc@{}} \addlinespace \toprule & \multicolumn{2}{c}{$n=100$} & \phantom{a} & \multicolumn{2}{c}{$n=200$} \\ \cmidrule{2-3} \cmidrule{5-6} & time & iteration numbers & & time & iteration numbers \\ \midrule Scenario 1 & 0.70 & 18 && 2.10 & 20 \\ \midrule Scenario 2 & 0.79& 18 && 2.08 & 20 \\ \midrule Scenario 3 & 0.68 & 16 && 1.88 & 18 \\ \bottomrule \end{tabular} } \caption{\small The average computational times (in seconds) and dc iteration numbers for $p = 10$.} \label{tab:time} \end{table} The comparisons of the four methods for finding optimal IDRs over 100 replications are based on the following four criteria:\\[0.05in] (1) the misclassification error rate on the test data (this is possible since the optimal IDR under our simulation settings is known, which is $\text{sign}(0.5 + X_1-X_2+X_3)$);\\[0.05in] (2) the empirical average of outcome under the decision rule over test data, which is defined as \[ \widehat{{\rm I\!E}}^{\, d} \, \left[ \, {\cal Z} \, \right] \, = \, \displaystyle{ \frac{\displaystyle{ \sum_{i \in {\cal N}_1} } \, 
\displaystyle{ \frac{{\cal Z}_i\, {\rm I\!I}(A_i = \widehat{d}(X_i))}{\pi(A_i | X_i)} }}{\displaystyle{ \sum_{i \in {\cal N}_1} } \, \displaystyle{ \frac{{\rm I\!I}(A_i = \widehat{d}(X_i))}{\pi(A_i | X_i)} }}}, \] where ${\cal N}_1$ is the index of test data set. This value evaluates the expected outcome of ${\cal Z}$ if the action assignment follows the estimated decision rules $\widehat{d}(X)$;\\[0.05in] (3) the empirical $50\%$ quantile of ${\cal Z}_i{\rm I\!I}(A_i = \widehat{d}(X_i))$ on the test data;\\[0.05in] (4) the empirical $25\%$ quantiles of ${\cal Z}_i{\rm I\!I}(A_i = \widehat{d}(X_i))$ on the test data.\\[0.1in] The test data in each scenario are independently generated with size 10,000. \begin{table}[h] \centering \small \scalebox{1}{ \begin{tabular}{@{}lccccc@{}} \addlinespace \toprule & \multicolumn{2}{c}{$n=100$} & \phantom{a} & \multicolumn{2}{c}{$n=200$} \\ \cmidrule{2-3} \cmidrule{5-6} & Misclass. & Value & & Misclass. & Value \\ \midrule \multicolumn{6}{c}{Scenario 1} \\ \midrule DLearn & 0.48(0.02) & 8.36(0.09) && 0.47(0.02) & 8.5(0.07)\\ $l_1$-PLS & 0.45(0.01) & 8.46(0.06) && 0.45(0.01) & 8.58(0.09)\\ RWL & 0.42(0.01) & 8.53(0.07) && 0.42(0.01) & 8.59(0.07)\\ IDR-CDE & \bf{0.25}(0.01) & \bf{8.98}(0.07) && \bf{0.17}(0.01) & \bf{9.15}(0.08)\\ \midrule \multicolumn{6}{c}{Scenario 2} \\ \midrule DLearn & 0.44(0.02) & 5.82(0.06) && 0.44(0.02) & 5.74(0.06)\\ $l_1$-PLS & 0.42(0.01) & 5.89(0.05) && 0.4(0.01) & 5.86(0.05)\\ RWL & 0.39(0.01) & 5.95(0.04) && 0.37(0.01) & 5.96(0.04)\\ IDR-CDE & \bf{0.21}(0.01) & \bf{6.36}(0.04) && \bf{0.15}(0.01) & \bf{6.41}(0.04)\\ \midrule \multicolumn{6}{c}{Scenario 3} \\ \midrule DLearn & 0.5(0.02) & 3948.04(659.88) && 0.51(0.02) & \bf{26588.55}(13692.58)\\ $l_1$-PLS & 0.48(0.01) & \bf{4758.49}(801.06) && 0.5(0.01) & 26209.19(13702.62)\\ RWL & 0.48(0.01) & 4256.27(774.97) && 0.47(0.01) & 24463.43(13592.7)\\ IDR-CDE & \bf{0.24}(0.01) & 4113.85(934.74) && \bf{0.2}(0.01) & 25712.22(13473.72)\\ \bottomrule \end{tabular} 
} \caption{\small Average misclassification rates (standard errors) and average means (standard errors) of empirical value functions for three simulation scenarios over 100 runs. The best expected value functions and the minimum misclassification rates are in bold.} \label{tab:p50nonlinear} \end{table} \begin{table}[h] \centering \small \scalebox{1}{ \begin{tabular}{@{}lccccc@{}} \addlinespace \toprule & \multicolumn{2}{c}{$n=100$} & \phantom{a} & \multicolumn{2}{c}{$n=200$} \\ \cmidrule{2-3} \cmidrule{5-6} & $50\%$ quantile & $25\%$ quantile & & $50\%$ quantile & $25\%$ quantile\\ \midrule \multicolumn{6}{c}{Scenario 1} \\ \midrule DLearn & 2.64(0.04) & 1.17(0.04) && 2.67(0.04) & 1.21(0.05)\\ $l_1$-PLS & 2.73(0.03) & 1.26(0.03) && 2.74(0.03) & 1.25(0.03)\\ RWL & 2.81(0.03) & 1.35(0.03) && 2.83(0.03) & 1.35(0.04)\\ IDR-CDE & \bf{3.17}(0.01) & \bf{1.81}(0.02) && \bf{3.26}(0.01) & \bf{1.99}(0.01)\\ \midrule \multicolumn{6}{c}{Scenario 2} \\ \midrule DLearn & 1.96(0.04) & 0.69(0.04) && 1.97(0.04) & 0.7(0.05)\\ $l_1$-PLS & 2.01(0.03) & 0.77(0.03) && 2.08(0.03) & 0.82(0.03)\\ RWL & 2.1(0.03) & 0.85(0.03) && 2.16(0.03) & 0.92(0.03)\\ IDR-CDE & \bf{2.47}(0.01) & \bf{1.36}(0.02) && \bf{2.53}(0.01) & \bf{1.47}(0.01)\\ \midrule \multicolumn{6}{c}{Scenario 3} \\ \midrule DLearn & 2.22(0.05) & 1.02(0.05) && 2.2(0.05) & 1.01(0.05)\\ $l_1$-PLS & 2.3(0.03) & 1.04(0.03) && 2.24(0.03) & 0.99(0.03)\\ RWL & 2.29(0.03) & 1.07(0.03) && 2.31(0.03) & 1.09(0.03)\\ IDR-CDE & \bf{2.81}(0.01) & \bf{1.73}(0.01) && \bf{2.86}(0.01) & \bf{1.8}(0.02)\\ \bottomrule \end{tabular} } \caption{\small Results of average $25\%$ (standard errors) and $50\%$ (standard errors) quantiles of empirical value functions for three simulation scenarios over 100 runs. The largest $25\%$ and $50\%$ quantiles are in bold.} \label{tab:p10quantiles} \end{table} Several observations can be drawn from these simulation examples in Tables \ref{tab:p50nonlinear} and \ref{tab:p10quantiles}. 
First of all, our method under the IDR-CDE has the smallest classification error in choosing the correct decisions compared with the methods based on the criterion of expected outcome. Under the piecewise affine utility function, we place more emphasis on improving subjects with relatively low outcomes, in contrast to focusing on the average, which may ignore higher-risk subjects. As a result, in addition to the misclassification rate, the $50\%$ and $25\%$ quantiles of the empirical value functions obtained by our method are also the largest among all the methods. Secondly, the advantages of our method become more obvious when comparing the $25\%$ quantiles of the empirical value functions on the test data rather than the $50\%$ quantiles. For example, in the second scenario, the $25\%$ quantiles of the empirical value functions of our method are almost twice as large as those by DLearn. Another interesting finding is that in the last scenario, although the average empirical value functions of $l_1$-PLS and RWL are larger than those of our method, our method is indeed much better based on the misclassification error and the quantiles. One possible reason is that these methods under the expected-value-function framework only correctly identify the decisions for subjects at lower risk while ignoring subjects at potentially higher risk. The estimated optimal IDRs by those methods may lead to serious problems, especially in precision medicine when assigning treatments to patients. Although, on average, patients may gain benefits from following those decision rules, some patients may encounter high risk, and the treatment recommended under the standard criterion of expected outcome may cause adverse events such as exacerbations in practice. {In terms of real data applications, there are several possibilities. For example, we can use the piecewise linear utility function to control the lower tails of outcomes for individual patients in AIDS or cancer studies. 
Another potential application is to use the quadratic utility function to take the variance of each decision rule into consideration. The performance of the results obtained by our method depends on the choice of the covariate-dependent $\alpha(X)$ and the utility function $u$. We leave these as future work.} \section{Acknowledgements} \label{sec:Acknowledgements} The authors thank the two referees and the associate editor for their careful reading of the paper and for the comments that have helped to improve its quality. \bibliographystyle{siamplain}
{ "redpajama_set_name": "RedPajamaArXiv" }
7,974
Benita Matofska Benita Matofska set up a global sharing movement after completing Common Purpose Meridian in London. She shares how her experience with Common Purpose helped her to do this. Meridian had a significant impact on me as a thought leader - it was during my time on the course that I developed my thinking around the Sharing Economy. Back in the March of 2010 when I was still at Enterprise UK I had the honour of sharing a stage with Desmond Tutu and at the time I pledged to build a movement and business to tackle a global problem. In May 2010 I started Meridian. A session on Personal Brand at a Mediation centre was significant because of the insights on the importance of developing your own unique personal brand. At the time I had been starting to figure out what I would like to do around sharing in terms of accessing shared resources- back then it was not called the Sharing Economy. That session crystallised what I wanted to do and it was a springboard to get me to where I am now. When I started the Meridian course I was planning to leave my role to do something else, it was a time of change. One session that had a profound effect on me was at an institute for sexual offenders where we looked at leadership in complex situations and the journey people take to end up where they are. We had the opportunity to interview some ex-offenders and looked at leadership in the organisation and the successful creation of a therapeutic environment to find positive solutions for violent offenders. I met people finding innovative ways to lead beyond authority and head upstream. There were conversations around creating change and forging ahead. This inspired me to swim upstream and set about building both a global movement, The People Who Share and Compare and Share, a social business. Leading beyond authority is what I do. I was the first person to use the #SharingEconomy hashtag on Twitter. After Meridian, I began to forge ahead with the business idea. 
It has all happened over the last five years. I finished Meridian in October 2010 and that time was a period of developing my thinking around the idea. At the end of the programme we wrote pledge cards which were sent to us six months later. I had talked about sharing in mine and working on something in that arena. Work began on developing both the movement and the business between May and October 2010. By the time I got my pledge card back I was well on my way to creating something! My work in the Sharing Economy has been an organic success. I wrote the only comprehensive definition of it and am known as an award-winning social entrepreneur and global expert. The first step was setting up the The People Who Share as a not-for-profit and in 2013 I launched Compare and Share, the world's first comparison marketplace of the Sharing Economy. We help both people and companies to access shared goods and services. I pioneered National Sharing Day then Global Sharing Day which have been a huge success - reaching over 100 million worldwide and trending globally on Twitter. Compare and Share has gone from success to success, becoming the first company to triple its target on an equity based crowdfunding platform, Crowdcube, winning a number of awards including the Mass Challenge, the Google for Entrepreneurs BlackBox Programme in Silicon Valley for born global start-ups, the 2015 Natwest Venus Entrepreneur of the Year Award, the Tech City News Elevator Pitch Award for social good in 2014, the Ogunte Women's Social Leadership Awards - Best Social Business Leader UK & World 2013 and the Cabinet Office / Nesta Innovation in Giving Award among others. At the heart of everything I do is an ethos of 'doing business differently', I'm driven by my purpose which is to build a sustainable Sharing Economy and built around the sharing of human, intellectual and physical resources that puts people and planet centre stage. 
Compare and Share has even developed its own legal form which embeds our purpose. Anyone involved with the business understands our commitment to both profit and purpose. This is a business that is a good deal for everyone. You can find out more about Benita and start sharing here. Follow Benita on Twitter
{ "redpajama_set_name": "RedPajamaCommonCrawl" }
4,794
"The diversity of human family should be the cause of love and harmony, as it is in music where many different notes blend together in the making of a perfect chord. If you meet those of different race and color from yourself, do not mistrust them and withdraw yourself into your shell of conventionality, but, rather, be glad and show them kindness. Think of them as different colored roses growing in the beautiful garden of humanity, and rejoice to be among them."
{ "redpajama_set_name": "RedPajamaC4" }
4,715
Aardrijkskunde Lindberg (Bayerischer Wald), gemeente in de Duitse deelstaat Beieren Lindberg (Reisbach), stadsdeel van Reisbach in de Duitse deelstaat Beieren Personen met de achternaam Lindberg Bosse Lindberg, Zweeds schaker Chad Lindberg, Amerikaans acteur Christian Lindberg, Zweeds componist, dirigent en trombonist Christina Lindberg, Zweeds naaktmodel, soft-pornoactrice en journalist David C. Lindberg, Amerikaanse wetenschapshistoricus en professor Janne Lindberg, Fins voetballer en voetbalcoach Knut Lindberg, Zweeds sprinter en speerwerper Magnus Lindberg, Fins componist en pianist Marcus Lindberg, Zweeds voetballer Oskar Lindberg, Zweeds componist en organist Stig Lindberg (ontwerper), Zweeds kunstenaar en industrieel ontwerper Zie ook Lindbergh (doorverwijspagina) Achternaam
{ "redpajama_set_name": "RedPajamaWikipedia" }
9,398
Q: Determinant of Green function Let $f : \Bbb Z^d \to \Bbb R$. The discrete average value $\overline{f(x)} : \Bbb Z^d \to \Bbb R$ is defined by $$\overline{f(x)}:=\frac{1}{2d}\sum_{y \in S_x}f(y)$$ where $S_x$ are the neighbors of $x$ in $\Bbb Z^d$. We define the discrete Laplacian $\Delta f:\Bbb Z^d \to \Bbb R$ as follows : $$ \Delta f(x):=\overline{f(x)}-f(x).$$ Let $D:=\{x_1,\dots,x_n\} \subset \Bbb Z^d$. We denote by $\Delta_D F$, the function that is equal to $\Delta F$ in $D$ and $0$ outside of $D$. Now, let $(X_n)_{n≥0}$ be a simple random walk in $\Bbb Z^d$, with law denoted by $P_x$ when it is started at $x$. Let $τ = τ_D := \inf\{n ≥ 0 : X_n \notin D\}$ be its first exit time from $D$. We define the Green's function $G_D$ in $D$ to be the function defined on $D × D$ by $$G_D(x,y)=\Bbb E_x\bigg(\sum_{k=0}^{\tau-1}\mathbb 1_{\{X_k=y\}} \bigg) $$ $$ =\sum_{k=0}\#\{\text{paths }x \to y \text{ in } k \text{ steps within } D\}\times\big(\frac{1}{2d}\big)^k$$ The Green's function $G_D$ is a symmetric function defined on $D × D$, so that it can be also written as a square symmetric matrix $(G_D(x_i,x_j))_{i,j≤n}$. (It can be shown that $G_D^{-1}=-\Delta_D$.) I already have a proof involving Gaussian Free Field (which I don't know about) but what I want to do is to give a proof of the fact that $$\det G_D =G_D(x_1,x_1)×\det G_{D\backslash \{x_1\}} $$ only by using the fact that $$G_D(x,y) =\sum_{k=0}\#\{\text{paths }x \to y \text{ in } k \text{ steps within } D\}\times\big(\frac{1}{2d}\big)^k$$ Since $G_D$ is a symmetric matrix, I tried to get its eigenvalues but I feel its clearly a wrong way. I also thought of doing it very formally : since the determinant is a sum of product of entries and $G_D(x_1,x_1)$ is also a sum, I tried to show that the terms were equal by hand but it was overly complicated. Last attempt was some kind of induction argument (cf proofs about adjacency matrix in graph theory) but didn't lead me anywhere since $D$ is finite. 
Is there a way out ?
{ "redpajama_set_name": "RedPajamaStackExchange" }
1,880
{"url":"http:\/\/math.stackexchange.com\/questions\/915665\/integration-of-function-inverse","text":"# Integration of function (inverse)\n\nDoes anyone know how do I start on part (b)?\n\nThanks\n\n-\n\nHint: let $u = \\sqrt{1-x}$\nor $x = \\sin^2t$\nFor integrands that involve a square root (and the argument of the square root isn't a sum or difference of squares, which usually suggests a trigonometric substitution), it often works simply to substitute using the radical quantity as the new variable, e.g., in this case $u = \\sqrt{1 - x}$, in part because this often rationalizes the expression. Rearranging gives $x = 1 - u^2$ and so $dx = -2u du$.\nAlternately, you can substitute $x = \\sin^2 t$.","date":"2016-04-30 03:42:43","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 1, \"mathjax_display_tex\": 0, \"mathjax_asciimath\": 0, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.9310652017593384, \"perplexity\": 257.73242030000205}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2016-18\/segments\/1461860111592.84\/warc\/CC-MAIN-20160428161511-00197-ip-10-239-7-51.ec2.internal.warc.gz\"}"}
null
null
Petalas () is an uninhabited island in the Ionian Sea belonging to Greece. It is the largest island in the group of small uninhabited Echinades islands at the mouth of the Achelous river, off the western coast of the historical region of Acarnania, the southwestern part of Central Greece. Its highest point is Mount Asprochordos, 251 m above sea level; to the north lies Mount Gero-Petalas, 209 m above sea level. Administratively the island belongs to the community of Agia Effimia in the municipality of Sami in the regional unit of Cephalonia. The island shelters the bay of the same name from the west. A lighthouse stands on Cape Aspro, the island's southern tip. Near the eastern tip of the island is the church of Saint Nicholas (Agios Nikolaos). William Leake identified Petalas with the island of Dulichium mentioned by Homer in the Catalogue of Ships. The Achelous delta, the lagoons of Aitoliko and Missolonghi, the mouth of the Evinos, the Echinades islands and Petalas form part of the EU Natura 2000 network of protected areas. Although the ecosystem has been strongly affected by human activity, it retains significant ecological value, which is why the wetlands are included under the Ramsar Convention. The island covers 1,335 acres (540 ha). About 4,000 olive trees grow on the island. The island has been listed for sale at 44.6 million US dollars. References Ionian Islands
{ "redpajama_set_name": "RedPajamaWikipedia" }
9,604
Your pictures don't have to stay digital with our easy-to-use in-store and online click-and-collect service. Your pictures don't have to stay digital with our Wonder Photo Shop. Keep your memories alive by creating beautifully designed collages. These prints are a perfect way to display groups of photos. Whether you're celebrating a wedding, holiday or new family member, we've made it easier than ever to create a timeless collage with your photographs. If you are planning a party or commercial event, large signs or banners are just what you need. Just pop in to our store, or use our online services to send us your artwork or your logo. We can then produce high-quality banners in 2 sizes, 24"x48" or 36"x24", to suit your requirements. Panoramic prints are perfect for capturing landscapes and cityscapes. These come in a range of sizes to suit your needs, from 4"x10" up to 24"x48". Release those stunning landscape images in your devices and get them printed today! Collate your precious memories and create one large masterpiece to treasure forever. Whether it's your dream holiday or a special event, our kiosk makes this process really simple using our bespoke templates. We have a wide range of products to suit your style. Sizes of the frames start at 12"x12", or you can create a large photo panel up to 24"x30". Beautifully handcrafted, our canvas wraps are a great way to turn any photo into a work of art. The prints are stretched over a 1 1/2" deep wood frame and come ready to hang! Our sizes range from 10" x 12" to 30" x 20", and we also do square sizes. If you are an artist or photographer, you deserve our highest quality print options.
{ "redpajama_set_name": "RedPajamaC4" }
5,914
package mmarquee.automation.controls;

import mmarquee.automation.AutomationException;
import mmarquee.automation.Element;
import mmarquee.automation.pattern.PatternNotFoundException;
import mmarquee.automation.pattern.SelectionItem;

/**
 * The Control supports the methods of the SelectionItemPattern.
 *
 * @author Mark Humphreys
 * Date 21/09/2016.
 */
public interface ImplementsSelect extends Automatable, CanRequestBasePattern {

    /**
     * Selects the element.
     * @throws AutomationException Automation library error
     * @throws PatternNotFoundException Failed to find pattern
     */
    default void select() throws AutomationException, PatternNotFoundException {
        final SelectionItem selectionItemPattern = requestAutomationPattern(SelectionItem.class);
        if (selectionItemPattern.isAvailable()) {
            selectionItemPattern.select();
            return;
        }
        throw new PatternNotFoundException("Cannot select item");
    }

    /**
     * Whether the element is selected.
     * @return True if selected
     * @throws AutomationException Automation library error
     * @throws PatternNotFoundException Failed to find pattern
     */
    default boolean isSelected() throws AutomationException, PatternNotFoundException {
        final SelectionItem selectionItemPattern = requestAutomationPattern(SelectionItem.class);
        if (selectionItemPattern.isAvailable()) {
            return selectionItemPattern.isSelected();
        }
        throw new PatternNotFoundException("Cannot query selection state");
    }

    /**
     * Adds to the selection.
     * @throws AutomationException Automation library error
     * @throws PatternNotFoundException Failed to find pattern
     */
    default void addToSelection() throws AutomationException, PatternNotFoundException {
        final SelectionItem selectionItemPattern = requestAutomationPattern(SelectionItem.class);
        if (selectionItemPattern.isAvailable()) {
            selectionItemPattern.addToSelection();
            return;
        }
        throw new PatternNotFoundException("Cannot extend selection");
    }

    /**
     * Removes from the selection.
     * @throws AutomationException Automation library error
     * @throws PatternNotFoundException Failed to find pattern
     */
    default void removeFromSelection() throws AutomationException, PatternNotFoundException {
        final SelectionItem selectionItemPattern = requestAutomationPattern(SelectionItem.class);
        if (selectionItemPattern.isAvailable()) {
            selectionItemPattern.removeFromSelection();
            return;
        }
        throw new PatternNotFoundException("Cannot reduce selection");
    }

    /**
     * Gets the selection container.
     * @return The selection container
     * @throws AutomationException Automation library error
     * @throws PatternNotFoundException Failed to find pattern
     */
    default Element getSelectionContainer() throws AutomationException, PatternNotFoundException {
        final SelectionItem selectionItemPattern = requestAutomationPattern(SelectionItem.class);
        if (selectionItemPattern.isAvailable()) {
            return selectionItemPattern.getSelectionContainer();
        }
        throw new PatternNotFoundException("Cannot get the selection container");
    }
}
{ "redpajama_set_name": "RedPajamaGithub" }
582
According to Macalister, CIIC 1, 6, Fn. 1, the locality is called Kilmannia in the O.S. map, but the stone is often referred to as the "Kilmannin stone". It was discovered [when?] by one Constabulary-Sergeant Lyons, "built into the wall of an old church". It was [when?] removed to the National Museum, Dublin, where it is marked no. "XLIII". Size according to Macalister, CIIC: 4'0" x 1'8" x 1'0" Macalister, CIIC 1, 6 (draft). The stone is "inscribed on all four angles, grouped two and two, in each pair up-top-down". - "Rhys turns MAQI into MOžTI, and takes in a small fracture on the angle following 2G .. to turn the last word into LUGEDEC." - In the second part, a "fracture follows the O, large enough to contain two scores or notches". The second but last letter is an S according to Rhys, but S2 and S3 are joint, so that they resemble the U-Forfid; this would yield a reading BUBEL or BUABEL. There is no A after the final L, but "only a fracture .. on the B-surface". Should we suppose a "retroverse reading" Decuns o Micill `Diaconus de [ecclesia sancti] Michael' - "which is worth recording, if only as a warning that nothing is to be expected from this expedient"? The Q can be restored with confidence between A and I. MAQI with -I is preserved "by tradition" as against apocopated name forms. The inscription has to be dated into the end of the 6th cent. because of the syncopated LUGADDON < LUGUAIDONAS. The DD is written with a large gap in between so that ом = C can be excluded. - The sinister angle of the back side could not be photographed because of the arrangement of the stone in the basement of the National Museum. JRSAI 37, 1907, 61: Rhys.
{ "redpajama_set_name": "RedPajamaC4" }
6,226
package de.uniulm.omi.cloudiator.sword.domain;

import static com.google.common.base.Preconditions.checkArgument;
import static com.google.common.base.Preconditions.checkNotNull;

import com.google.common.base.MoreObjects;

/**
 * Created by daniel on 01.12.14.
 */
public class CloudCredentialImpl implements CloudCredential {

    private final String user;
    private final String password;

    CloudCredentialImpl(String user, String password) {
        checkNotNull(user, "user is null");
        checkNotNull(password, "password is null");
        checkArgument(!user.isEmpty(), "user is empty");
        checkArgument(!password.isEmpty(), "password is empty");
        this.user = user;
        this.password = password;
    }

    @Override
    public String user() {
        return this.user;
    }

    @Override
    public String password() {
        return this.password;
    }

    @Override
    public boolean equals(Object o) {
        if (this == o) {
            return true;
        }
        if (o == null || getClass() != o.getClass()) {
            return false;
        }
        CloudCredentialImpl that = (CloudCredentialImpl) o;
        if (!user.equals(that.user)) {
            return false;
        }
        return password.equals(that.password);
    }

    @Override
    public int hashCode() {
        int result = user.hashCode();
        result = 31 * result + password.hashCode();
        return result;
    }

    @Override
    public String id() {
        return user;
    }

    @Override
    public String toString() {
        // Mask the password so credentials are not disclosed in logs
        // (resolves the previous "do not disclose password" todo).
        return MoreObjects.toStringHelper(this).add("user", user).add("password", "<hidden>")
            .toString();
    }
}
{ "redpajama_set_name": "RedPajamaGithub" }
2,611
Department of English, Comparative Literature, and Linguistics FACULTY & OFFICE HOURS Major in English Minor in Creative Writing Major in Comparative Literature Minor in Comparative Literature Major in Linguistics Minor in Linguistics English Graduate Program Applying for English Graduate English MA Program Requirements M.A. Project English Faculty by Area of Specialty TA and ISA Programs Linguistics Graduate Program Applying for Linguistics Graduate Linguistics MA Program Requirements Linguistics Faculty by Area of Specialty English Advising Video For undergraduates, the English department offers a program of study called the Single Subject Matter Prep Program in English (the SSMP), which has been approved by the California Commission on Teacher Credentialing. Students who complete this program will have demonstrated subject matter competency in English as required for a California Single Subject Credential. The program also satisfies the English Department requirements for a major in English. Students who wish to teach in secondary settings can choose to complete the CSET in place of completing the SSMP. There are four sections of the English exam of the CSET, and information and practice exams can be found on the CSET website. For more information on the SSMP, see an English department adviser by clicking here. For a list of SSMP courses, click on the link below. Please note that there is no need to enroll in the SSMP program formally. By following the checklist, you will complete an English degree and earn SSMP certification. Single Subject Preparation Program courses For information on the credential program, including application instructions, visit this website for further details. This site is maintained by Department of English, Comparative Literature, and Linguistics. Last Published 11/22/22 To report problems or comments with this site, please contact cpope@fullerton.edu.
{ "redpajama_set_name": "RedPajamaCommonCrawl" }
50
Jason Taylor ist der Name folgender Personen: *Jason Taylor (Eishockeyspieler) (* 1967), kanadischer Eishockeyspieler Jason Taylor (Fußballspieler, 1970) (* 1970), walisischer Fußballspieler Jason Taylor (Rugbyspieler) (* 1971), australischer Rugbyspieler und -trainer Jason Taylor (Footballspieler) (* 1974), US-amerikanischer Footballspieler Jason Taylor (Fußballspieler, 1985) (* 1985), englischer Fußballspieler Jason Taylor (Fußballspieler, 1987) (* 1987), englischer Fußballspieler Jason Taylor (Tennisspieler) (* 1994), australischer Tennisspieler Jason deCaires Taylor (* 1974), britischer bildender Künstler
{ "redpajama_set_name": "RedPajamaWikipedia" }
4,271
Q: Execute batch file from Java code I tried to run a batch file from Java code. The batch file runs, but internally it calls a proxyServer.js file, and this .js file is not running. Below is the sample code.
try {
    String path = "cmd /c start C:\\AxoneES_Viewers_Integration-2016Q3-SNAPSHOT_201609021003\\AxoneViewers.bat";
    Runtime rn = Runtime.getRuntime();
    Process pr = rn.exec(path);
} catch (IOException ex) {
    System.out.println("Exception Found");
}
As I mentioned, the batch file internally calls the .js file; below is the part of the batch file that declares its path.
cd viewers\apps\maxq\
node proxyServer.js
While running the batch file, it cannot find the path of the .js file, so it does not run. Is there any approach to execute the batch file from Java?
{ "redpajama_set_name": "RedPajamaStackExchange" }
7,348
\section{Motivation} As of today, maintenance and servicing of permanently growing mobile networks require manual intervention by qualified network engineers to ensure a constantly high level of service quality, which is very time- and cost-consuming. Operators need to locate and mitigate different types of problems in the network, such as hardware faults, link failures, performance optimization and security attacks, to name only a few. The European Commission (EC) and others highlighted already in 2014 that mobile operators spend three times more on operational expenditures (OPEX) than on capital expenditures (CAPEX) \cite{Ref:EU}\cite{Ref:Aviat}. With the emerging Fifth Generation (5G) \cite{Ref:NGMN} and the increasingly heterogeneous and complex networks it brings, these numbers will increase further. The network function virtualization (NFV) \cite{Ref:NFV} and software defined network (SDN) \cite{Ref:SDN} principles of the future core networks allow more flexibility but also enable the option to automate many of these maintenance and management tasks. The {EU} {H2020} {SELFNET} \cite{Ref:SELFNET}\cite{Ref:SELFNET2} project is addressing these challenges and developing a self-organized 5G network management framework through virtualized and software defined networks to support these new technologies and reduce OPEX. The SELFNET framework mainly consists of: 1.) SDN/NFV sensors - that extract metrics from the network infrastructure, 2.) Aggregation Layer - for preprocessing and storage of the collected data, 3.) Autonomic Engine, which is divided into a.) Rule based and b.) AI based decision engine, 4.) Action Enforcer and Orchestrator - that implement the Decision from the Autonomic Management, 5.) SDN/NFV actuators - e.g.
FlowControl Agent. The SELFNET consortium focuses on three main use cases: SELF-PROTECTION, with its capabilities against distributed cyber-attacks; SELF-HEALING, which handles system failures; and SELF-OPTIMIZATION, which dynamically improves the performance of the network and the QoE of the users. While self-optimization and self-protection have already been carried out and described in \cite{Ref:ICCC2017}\cite{Ref:PIMRC2017} and \cite{Ref:Botnet}, we focus in this paper on the self-healing capabilities. The paper is organized as follows: Section 2 gives an overview of the intelligence concept and the methods used. The set-up of the test-bed for experimentation and evaluation of the Intelligence is presented in Section 3. Finally, Section 4 concludes this paper. \section{Dataset Generation and Evaluation} In this section the set-up of the test-bed, the methods used to create the training dataset, and the evaluation of the proposed Intelligence concept are described. \subsection{Test Set-up} In this first iteration of NFV self-healing experimentation, the following virtualized test environment was created to generate the initial learning dataset. \begin{figure*}[!ht] \centering \includegraphics[width=0.95\textwidth]{Fig_SHTopo.pdf} \caption{Self-Healing Test environment.}\label{Fig:SHTopo} \end{figure*} Basically, it consists of six networks and eight virtual machines. The external network is the connection between the network under test (on the right side) and the LAB network, where the Management and Autonomic Server is also located; see Figure \ref{Fig:SHTopo}. The Management Server serves as gateway to the Internet and simultaneously as centralized storage for the monitored data, with the help of the open-source tool Zabbix \cite{Ref:Zabbix}. Since the foundation for the virtualization is OpenStack \cite{Ref:OS}, the OpenStack management network is used to maintain the virtual machines as well as to keep metric collection strictly separated from the network under test.
The virtual machine called Source, connected to the NetOne network, acts as a web server; in this first version it runs the Iperf3 \cite{Ref:iperf} tool in server mode and generates traffic to the two virtual machines in the DMZ1 network, the so-called Destination VMs, which act as users and run the Iperf3 tool in client mode. The traffic needs to pass the Firewall in between, which connects NetOne and DMZ1. This Firewall NFV runs OpenVSwitch \cite{Ref:OVS} with a web interface as front end to inject new rules, and was originally developed in the 5GPPP Charisma Project \cite{Ref:Charisma}. This constellation is duplicated in NetworkTwo and DMZ2, with the difference that Firewall2 is misconfigured, in order to have a direct reference. On both Firewalls a Zabbix agent is running to collect metrics, such as CPU, memory and hard-disk usage, and traffic in/out. These basic metrics will be used to create the profile of the NFV and to find out whether it is working properly. The misconfiguration in this example is a fictive memory leak, generated in this test with the stress-ng tool, a tool to load and stress computer systems. It is started with parameters that allocate 200~MB of memory every minute and keep it. Since the NFV only has 1~GB of memory in total, it needs only a few minutes to occupy almost 100~\%, see Figure~\ref{Fig:MEMGraph}.
The operator can then inspect the operation of the VNF and, in case of the problem report being a false positive, can opt to add the corresponding data to the training set, thereby enabling the detector to know about this newly occurring region of regular operation. \subsection{Techniques} \subsubsection{Pre-processing} We employ a specialized normalization method, which transforms each feature into a random variable following a particular normal distribution. This is achieved by computing for each feature $X_{k}$ the empirical distribution function $\hat{F}_{k}$ across the training data and using a (continuous, strictly increasing) modification $F_{k}$ of this function as an estimator for the distribution of $X_{k}$. The data is then transformed on a per-feature basis by applying the transformation \begin{equation} X_{k} \mapsto \Phi^{-1}(F_{k}(X_{k})) \end{equation} where $\Phi^{-1}$ denotes the quantile function of the standard normal distribution, which can be expressed in terms of the inverse error function $\erf^{-1}$ as \begin{equation} \Phi^{-1}(p) = \sqrt{2} \, \erf^{-1}(2 \, p - 1) \end{equation} The above-mentioned modification of the empirical distribution function is performed in the following way: First the steps of the empirical distribution function are replaced by linear segments continuously joining together at the ends and sloped anti-proportionally to the step length. Then the cumulative distribution function of a Gaussian random variable is mixed into the resulting function in order to ensure strictly monotonic behaviour even in regions outside of the data given by the training set. \subsubsection{Autoencoder} The actual outlier detection is performed using an ensemble of autoencoders $E_{l}$ of different shapes learning at different rates. An autoencoder is a function $E_{l}$ mapping values $x$ to approximations $\tilde{x}$ of $x$.
Their parameters $p$ are adjusted by a stochastic-gradient-descent-type learning algorithm to achieve the best possible approximation. The goodness of fit is measured using the square of the standard Euclidean distance $\lVert \tilde{x} - x \rVert_{2}^2$. The training algorithm then optimizes the parameters $p_{l}$ to minimize the cost \begin{equation} \sum_{x \in \mathcal{X}} c(x, p_{l}) = \sum_{x \in \mathcal{X}} \lVert E_{l}(x, p_{l}) - x \rVert_{2}^2 \end{equation} where $\mathcal{X}$ denotes the training dataset. This optimization is performed for each individual autoencoder $E_{l}$. \subsubsection{Thresholding} The cost functions of each of the autoencoders are used as estimators of the degree of anomalousness of the data. After training, the autoencoders are evaluated on a set of validation data. The ensemble is then reduced to only the best $m$ encoders. For each incoming datapoint, the costs w.r.t.\ these remaining encoders are computed. If the cost w.r.t.\ an encoder $E_{l}$ is more than $\beta$ times the average cost of this particular encoder on the validation dataset, the datapoint is marked as suspicious. If more than $\alpha$ of the encoders mark the datapoint as suspicious, it is considered an anomaly. \section{Conclusion} In this paper, we presented a semi-supervised learning concept for monitoring NFV instances and performing malfunction detection. The set-up of the test environment for the dataset creation was described, as well as the methods for the machine learning approach. As a next step we will improve the algorithms in terms of robustness and extend the test environment to simulate more malfunctions and enlarge the environment to a more complex version.
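As an illustrative aside (not part of the paper itself), the per-feature normalization of the Pre-processing subsection can be sketched in a few lines of Python. The clamped, naive empirical CDF below is an assumption of this sketch; the paper instead smooths the step function and mixes in a Gaussian CDF to obtain a strictly increasing estimator $F_k$:

```python
import statistics

def quantile_normalize(train_column, x):
    """Map x through an empirical CDF estimated from train_column, then
    through the standard normal quantile function Phi^{-1} (Eq. (1))."""
    n = len(train_column)
    # naive empirical CDF; rank/(n+1) keeps p strictly inside (0, 1)
    rank = sum(1 for v in train_column if v <= x)
    p = rank / (n + 1)
    # clamp so Phi^{-1} stays finite far outside the training data
    p = min(max(p, 1e-9), 1 - 1e-9)
    return statistics.NormalDist().inv_cdf(p)

# the median of the training data maps close to 0 on the normal scale,
# while extreme values map to large-magnitude normal quantiles
print(quantile_normalize(list(range(100)), 49))
```

After this transformation every feature is approximately standard-normal on the training distribution, which puts the autoencoder inputs on a common scale.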
{ "redpajama_set_name": "RedPajamaArXiv" }
1,833
Q: Build a 3D character generator in WebGL (client side) with 3D post-rendering (server side) I don't know exactly where to ask, so I'll just ask here. We would like to build an online 3D character generator for a project, which should run completely in the browser and give the user the possibility to create a character via WebGL (or similar) (selection of predefined assets, like clothes, hair, etc.). After creation we would like to render it in high-res (Blender, Unity, etc.) on the server side to get a better version of the avatar. We have already written and/or adapted scripts for face recognition, but the rest is the hard part, and I have no idea what skills to look for to find the right person for this job. Maybe one of you has already done a similar project or knows a bit about it. I would be grateful for any help.
{ "redpajama_set_name": "RedPajamaStackExchange" }
856
{"url":"https:\/\/math.stackexchange.com\/questions\/3189685\/on-the-definition-of-degree-of-closed-subschemes","text":"On the definition of degree of closed subschemes\n\n$$\\underline {Background}$$:We know that,for a projective variety $$X \\subset\\mathbb{P}^{n}=(\\mathbb{K}^{n+1}-{0})\/\\sim$$\n\nwe define , degree($$X$$)=$$(r!)$$.(leading coefficient of the hilbert polynomial of $$X$$)\n\n$$\\underline {Question (1)}$$:What is the definition of degree of a closed subscheme $$X$$ of $$Proj(K[x_0,....,x_n])$$?\n\nCan we define the same thing for closed subscheme?\n\n$$\\underline {Question (2)}$$:what can be said about degree of $$0$$-dimensional subcsheme?\n\nSince there are only a finitely many points in a $$0$$ dimensional subscheme ,can we say that in this case degree is same as cardinality?\n\nFinally is there any reference where they talk explicitly about the definition of degree of a closed subscheme in $$Proj(K[x_0,....,x_n])$$(maybe with some example)\n\nAny help from anyone is welcome.","date":"2019-06-16 05:08:19","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 0, \"mathjax_display_tex\": 0, \"mathjax_asciimath\": 0, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 12, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.8931712508201599, \"perplexity\": 122.71622901547929}, \"config\": {\"markdown_headings\": false, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 20, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, 
\"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2019-26\/segments\/1560627997731.69\/warc\/CC-MAIN-20190616042701-20190616064701-00344.warc.gz\"}"}
null
null
You can follow the discussion on Noise For Chrome, Pink, Brown and White Noise in Your Browser without having to leave a comment. Cool, huh? Just enter your email address in the form below and you're all set.
{ "redpajama_set_name": "RedPajamaC4" }
5,970
Q: Can a matplotlib figure object be pickled then retrieved? I am trying to pickle a matplotlib Figure object to be able to regenerate the graph with x and y data and labels and title at a later time. Is this possible? When trying to use open and dump to pickle I get this traceback:
#3rd Party Imports and built-in
import random
import matplotlib.pyplot as plt
import pickle as pk

#Initializing lists for x and y values. Voltage trim and current measure is our x and y in this case.
voltage_trim = range(100, 150)
current_meas = []
# A change of parameters modelled by a multiplier in this case
multiplier = range(1,4)
# Initializing lists to store the output current if wanted
current_storage = []

# Required for Matplotlib
plt.close()
plt.ion() #Required method call in order for interactive plotting to work

# SPECIFY GRAPH
fig1 = plt.figure()
ax = fig1.add_subplot(1,1,1) # Creates an axis in order to have multiple lines
plt.title('Voltage Trim Vs Current \nsome fancy sub-title here')
plt.xlabel('Voltage Trim / V')
plt.ylabel('Current Measured/ A')
plt.grid(True)
color_choices = ['k', 'g','r','b','k','c', 'm', 'y'] # Add more according to number of graphs

# MAIN TEST LOOPS
for this_mult in multiplier:
    current_meas = [] # Clears the output list to graph with different multipier
    #Enumerates input in order to manipulate it below
    for index, value in enumerate(voltage_trim):
        #Generating random current values in this case
        current_meas.append(random.randint(0,10)*this_mult)
        print index ,'Generating results...'
        print index, value
        # Increments index so that lists match dimensions and graphing is possible
        index += 1
        # Optional real time plotting function, comment out if not wanted
        live_plotting = ax.plot(voltage_trim[:index], current_meas, color = color_choices[this_mult]) #,label = 'Line'+str(this_mult)
        # A pyplot method that pauses the loop, updates the graph and continues, to enable real time graphing; set to a small number to be considered insignificant
        plt.pause(1e-124)
        # Deletes the real time line to save memory in the loop
        live_plotting[0].remove()
    # This is the actual storage of plot objects, specify legend label here, and all other arguments the same
    ax.plot(voltage_trim, current_meas, color = color_choices[this_mult], marker = 'o', label = 'Line'+str(this_mult))
    #Stores the measured current (A)
    current_storage.append(current_meas)
    #Calls legend - must be in outer loop
    plt.legend()

f = open('testt','wb')
pk.dump(fig1, f)
f.close()
{ "redpajama_set_name": "RedPajamaStackExchange" }
3,522
using System;

namespace Acquaintance.Logging
{
    public class SilentLogger : ILogger
    {
        public void Debug(string fmt, params object[] args) { }

        public void Info(string fmt, params object[] args) { }

        public void Warn(string fmt, params object[] args) { }

        public void Warn(Exception e, string fmt, params object[] args) { }

        public void Error(string fmt, params object[] args) { }

        public void Error(Exception e, string fmt, params object[] args) { }
    }
}
{ "redpajama_set_name": "RedPajamaGithub" }
8,101
\section{Introduction} Quantum-public-key cryptography, where the public keys are quantum-mechanical systems, is a largely unexplored area of problems. Various cryptographic primitives can be defined in this context (e.g., digital signatures, identification schemes, encryption schemes, etc) which aim at different goals (e.g., integrity, confidentiality, etc) \cite{qphGC01,BCGST02,ACJ06,Buh01,qphIM08,Got05,KKNY05,HKK08,Kak06,Nik08}. Of particular interest are quantum-public-key encryption (QPKE) schemes \cite{Got05,KKNY05,HKK08,Kak06,Nik08} which facilitate the communication between many users over insecure channels. Typically, a legitimate user participating in such a QPKE scheme has to choose a random secret (private) key, and prepare the public key in a state that is in accordance with the private key. Many copies of the public-key state can be created in this manner and become available to any potential sender in an authenticated manner, e.g. via a key-distribution center, whereas the corresponding private key is never revealed and is used by the receiver for decryption only. In a nutshell, QPKE combines the provable security of quantum-key distribution (QKD) protocols \cite{RevQKD} with the flexibility of conventional public-key encryption schemes, facilitating thus the {\em key distribution} and the {\em key management} in large networks \cite{Nik08,book1}. Key distribution and key management are crucial issues associated with the security and the efficient operation of large networks, and cannot be solved efficiently in the context of QKD (followed by a classical symmetric cryptosystem) or quantum direct communication (QDC) protocols such as \cite{PP1,PP2,PP3}. The main reason is that, by construction, the protocols of QKD and QDC are point-to-point protocols, and thus the total number of secure links and keys scales quadratically with the number of users in the network. 
This power law can be improved if the communications are performed via a key distribution center (KDC) which possesses all the secret keys. In this case, however, the center becomes an attractive target, while a compromised KDC immediately renders all communications insecure. In QPKE schemes, on the other hand, the KDC deals with the public keys only, whereas the private keys are in possession of the legitimate users \cite{remark5}. The study of QPKE schemes is also of fundamental importance for the field of quantum cryptography because of the quantum trapdoor one-way functions, which are essential ingredients not only for the development of efficient encryption schemes, but also for many other cryptographic primitives (digital signatures, fingerprinting, zero-knowledge protocols, etc) \cite{Buh01, qphGC01, qphIM08, ACJ06, book1, book3}. The mere fact that in QPKE schemes many copies of the public keys become available allows an eavesdropper to launch new strategies that go beyond QKD and QDC protocols (e.g., see \cite{NikIoa09}). Although the actual state of the public key is unknown to an adversary, the multiple copies, when processed judiciously, may reveal more information on this state than a single copy. Hence, a security analysis of a particular QPKE scheme has to address questions related to the lengths of the private and the public keys, as well as the number of public-key copies that can become available before the entire cryptosystem is compromised. Clearly, such questions are intimately connected to specific aspects of QPKE, which are present neither in QKD nor in QDC protocols. The QPKE scheme of \cite{Nik08} is rather intuitive as it relies on single-qubit rotations. The public key consists of a number of qubits that are prepared at random and independently in some unknown state.
A message can be encrypted in one of the public keys by appropriately rotating the corresponding qubit states, and the resulting cipher state is subsequently sent for decryption. Due to its simplicity, this scheme may serve as a theoretical framework for addressing questions pertaining to the power and limitations of QPKE as well as its robustness against various types of attacks. In this context it has been shown recently that any deterministic QPKE requires randomness in order to be secure against a forward-search attack \cite{NikIoa09}. Furthermore, in contrast to the classical setting, a QPKE scheme can be used as a black box to build a new randomized bit-encryption scheme that is no longer susceptible to this attack. Here we discuss for the first time a symmetry that underlies the scheme of \cite{Nik08} and that reduces considerably the information that an eavesdropper might extract from the copies of the public key. Subsequently, we analyze the security of the protocol against attacks that aim at the encrypted message and that rely on individual projective measurements on the qubits of the public key(s) and of the cipher state. It is shown that the performance of such attacks can be slightly worse than the performance of the forward-search attack \cite{NikIoa09}, which requires complicated quantum transformations that are beyond today's technology. We would like to emphasize that, with an appropriate choice of the parameters, discussions of the scheme of \cite{Nik08} also apply to a specific ping-pong protocol \cite{PP2}, which belongs to the category of quantum direct communication (QDC) protocols. The different context has to be taken into account, however, to arrive at meaningful statements. This paper is organized as follows: In Sec.~\ref{sec2} basic aspects of the recently introduced quantum-public-key protocol of \cite{Nik08} are summarized.
The influence of symmetric eavesdropping strategies on upper bounds of the probability for an eavesdropper to guess correctly the private key or the encrypted message is investigated in Sec.~\ref{Sec3}. Security aspects of the private key are discussed in Sec.~\ref{sec3a} on the basis of Holevo's bound. In Sec.~\ref{sec3b} an attack on encrypted messages is studied, which pertains to individual projective measurements on the qubits involved. As a main result it is shown that Eve's success probability converges to the value of one half exponentially with the number of qubits in which the message is encrypted, at a rate depending on the number of publicly available copies of the public key. Furthermore, it turns out that the success probability of this attack differs only slightly from the already known optimal probability of successful state estimation by means of collective measurements. In addition, as discussed in Sec.~\ref{sec3c}, the resulting lower bound on the security parameter of the public-key protocol is also close to the previously derived security parameter of the forward-search attack of Ref.~\cite{NikIoa09}. Finally, in Sec.~\ref{sec3d} a symmetry-test attack with projective measurements is explored, which attacks the message directly and makes use of only a single copy of the public-key quantum state and the corresponding cipherstate. \section{The protocol} \label{sec2} For the sake of completeness, let us summarize briefly the main ingredients of the protocol proposed in \cite{Nik08}. Each user participating in the cryptosystem generates a key consisting of a private part and a public part, as determined by the following steps. \begin{enumerate} \item Choice of a random positive integer $n\gg 1$. Additional limitations on $n$ will be derived in the following section. \item Choice of a random integer string ${\bf k}$ of length $N$, i.e., ${\bf k}=(k_1,k_2,\ldots,k_N)$.
Each integer $k_j$ is chosen at random and independently from $\mathbb{Z}_{2^n}$, and thus it has a uniform distribution over $\mathbb{Z}_{2^n}$. \item The classical key ${\bf k}$ is used for the preparation of the $N$-qubit public-key state \begin{subequations} \label{public_key} \begin{equation} \ket{\Psi_{\bf k}(\theta_n)}=\bigotimes_{j=1}^N\ket{\psi_{k_j}(\theta_n)} \label{public_key_N} \end{equation} where \begin{eqnarray} \ket{\psi_{k_j} (\theta_n)} &\equiv& \cos \left( \frac{k_j \theta_{n}}{2} \right) \ket{0_{z}} + \sin \left(\frac{k_j\theta_{n}}{2} \right) \ket{1_{z}},\phantom{aa} \label{public_key_j} \end{eqnarray} while $\{\ket{0_z},\ket{1_z}\}$ denote the eigenstates of the Pauli operator $\hat{\sigma}_z\equiv\ket{0_z}\bra{0_z}-\ket{1_z}\bra{1_z}$, which form an orthonormal basis in the Hilbert space of a qubit. The Bloch vector associated with (\ref{public_key_j}) is given by ${\bf R}_j(\theta_{n}) =\cos(k_j \theta_{n})\hat{z} + \sin(k_j \theta_{n})\hat{x}$ with $\hat{x}$, $\hat{z}$ denoting unit vectors and with \begin{equation} \theta_n = \pi / 2^{n-1} \end{equation} denoting the elementary angle of rotations around the axis with unit vector $\hat{y}$. \item The private (secret) part of the key is ${\bf k}$, while the public part is $\{n,N,\ket{\Psi_{\bf k}(\theta_n)}\}$. \end{subequations} \end{enumerate} Note that, since each $k_j$ is distributed uniformly and independently over $\mathbb{Z}_{2^n}$, the random state $\ket{\psi_{k_j}(\theta_{n})}$ is uniformly distributed over the set of states \begin{equation} \mathbb{H}^{(n)}=\{\ket{\psi_{k_j}(\theta_{n})}|k_j\in \{0,\ldots,2^n-1\}\}. \end{equation} The state of the $j$th public-key qubit $\ket{\psi_{k_j}(\theta_{n})}$ is known if the corresponding Bloch vector (or equivalently the angle $k_j\theta_{n}$) is known. The full characterization of the angle $k_j\theta_n$ requires $n$ bits of information. 
In general, a legitimate user should never reveal his private key, whereas he can produce at will as many copies of the public key as needed. The number of public-key copies $T^\prime$ \cite{remark7}, however, should be kept sufficiently small relative to $n$ (the precise relation will be discussed in Sec. \ref{sec3a}), so that the map \begin{equation} \label{map} {\bf k} \mapsto \{T^\prime~\textrm{copies of}~\ket{\Psi_{\bf k}(\theta_{n})}\} \end{equation} is a quantum one-way function by virtue of Holevo's theorem \cite{Nik08,NikIoa09}. The one-way property of the map (\ref{map}) is essential for the definition of the public-key encryption in the present framework. Suppose now that Bob wants to communicate a binary plaintext ${\bf m}$ to Alice. The users have agreed in advance on two encryption operators $\hat{\cal E}_{0}$ and $\hat{\cal E}_{1}$ for the encryption of bits ``0'' and ``1'', respectively. The key point here is that the bits of the plaintext (message) are assumed to be encrypted independently on public qubits that have been prepared at random and independently (see discussion above). Hence, for the sake of simplicity and without loss of generality, we can focus on the encryption of a one-bit message $m\in\{0,1\}$. As discussed in \cite{Nik08,NikIoa09}, in this case the protocol is not secure when the bit is encrypted on the state of a single qubit. However, it has been shown in the context of a forward-search attack that the robustness of the protocol increases considerably if $m$ is encoded in a randomly chosen $s$-bit codeword ${\bf w}$ whose Hamming weight has parity $m$, and which is subsequently encrypted on $s$ public qubits \cite{remark6}. Correspondingly, the analysis of the following section pertains to a one-bit message, which is encrypted in the parity of an $s$-bit codeword with $s$ playing the role of a security parameter.
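The randomized encoding of a message bit into the parity of a codeword can be sketched as follows (Python with NumPy; the particular sampling scheme is one simple way to draw uniformly among the $2^{s-1}$ admissible codewords and is not prescribed by the protocol):

```python
# Sketch of the randomized parity encoding, assuming NumPy.
import numpy as np

def random_codeword(m, s, rng):
    """Draw an s-bit codeword whose Hamming weight has parity m.

    Drawing s uniform bits and flipping the last bit when the parity is
    wrong maps exactly two raw strings onto each parity-m codeword, so
    the result is uniform over the 2^(s-1) admissible codewords.
    """
    w = rng.integers(0, 2, size=s)
    if w.sum() % 2 != m:
        w[-1] ^= 1
    return w

rng = np.random.default_rng(1)
w0 = random_codeword(0, 6, rng)   # encodes the message m = 0
w1 = random_codeword(1, 6, rng)   # encodes the message m = 1
```

The receiver recovers $m$ by decrypting all $s$ bits and computing the parity of the decoded codeword.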
For the encryption of the one-bit message $m\in\{0,1\}$, Bob chooses at random a codeword ${\bf w}\equiv(w_1,w_2,\ldots,w_s)$ of parity $m$, and obtains an authenticated copy \cite{remark1} of Alice's public key ($T^\prime -1$ public keys still remain publicly available). The codeword is encrypted by applying independent successive encryption operations on the first $s$ public qubits. The resulting (quantum) ciphertext is thus the $s$-qubit state \begin{equation} \label{cipherstate} \ket{X_{{\bf k}, m}(\theta_n)}= \bigotimes_{j=1}^s\hat{\cal E}_{w_j}\ket{\psi_{k_j}(\theta_n)}= \bigotimes_{j=1}^s\ket{\chi_{k_j,w_j}(\theta_n)}, \end{equation} to be referred to hereafter as cipherstate. In this spirit, the encryption of an $L$-bit message requires a public key of length $N\geq Ls$. The cipherstate is sent to Alice who can obtain the message by means of a decryption procedure whose details are not essential for our purposes in this work. We only note here the crucial property that the encryption operations do not depend on Alice's private key, but the decryption operators do. Moreover, to allow for a simple decoding we assume that \begin{equation} \label{encryption} \hat{\cal E}_{w_j}\ket{\psi_{k_j}(\theta_n)}\to \ket{\psi_{k_j}(\theta_n+w_j\pi)}, \end{equation} for $w_j\in\{0,1\}$ \cite{remark8}. The primary objective of an eavesdropper (Eve) in the context of QPKE is to recover the plaintext from the cipherstate intended for Alice. On the other hand, there is always a more ambitious objective pertaining to the recovery of the private key from Alice's public key. A cryptosystem is considered broken if either of the two objectives is accomplished, but in the latter case the adversary has access to all of the messages sent to Alice (see also related discussion in \cite{Nik08,book1}). It is essential therefore to ensure secrecy of the private key, before we discuss the secrecy of a message. In Sec.
\ref{sec3a}, we derive restrictions on the parameters $n$ and $T^\prime$ so that the map (\ref{map}) is a quantum one-way function, and thus the recovery of the private key from the public keys is prevented. As far as the encryption of the message (or equivalently the codeword) is concerned, we note that, in view of Eqs. (\ref{public_key_j}) and (\ref{encryption}), the two possible values of the $j$th bit of the codeword $w_j\in\{0,1\}$ are essentially encrypted in orthogonal eigenstates of a basis, which is rotated relative to the basis $\{\ket{0_z},\ket{1_z}\}$ by an unknown angle $k_j \theta_{n}$. This means that the cipher-qubit state is parallel ($w_j=0$) or antiparallel $(w_j=1)$ to the corresponding public-qubit state. Thus, in the following analysis we consider two different classes of eavesdropping strategies, which aim at the encrypted message. The first class involves attacks that explore the symmetry between the public-key state and the cipher state to reveal the message. The other class pertains to attacks that extract information on the public key (and thus on the basis on which the message has been encoded), so that the message can be recovered by means of a projective measurement on the estimated basis. Clearly, for this second class of attacks the probability of successful decryption is expected to increase with the information gain on the public-key state. \section{Symmetric Eavesdropping Strategies} \label{Sec3} In a single run of the protocol the fixed quantities are the secret key ${\bf k}$ (and thus the public key), as well as the codeword ${\bf w}$. 
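The parallel/antiparallel structure of the encryption rule (\ref{encryption}) can be sketched numerically as follows (Python with NumPy; as an illustrative assumption we model the cipher qubit simply by its Bloch angle $k_j\theta_n + w_j\pi$, without committing to a particular physical realization of $\hat{\cal E}_{w_j}$):

```python
# Bloch-angle sketch of encryption and legitimate decryption, assuming NumPy.
# The angle model (cipher angle = k_j*theta_n + w_j*pi) is our illustrative
# abstraction of the parallel/antiparallel picture.
import numpy as np

def encrypt(k, w, theta_n):
    """Cipher-qubit Bloch angles: k_j*theta_n (w_j=0) or k_j*theta_n + pi (w_j=1)."""
    return k * theta_n + w * np.pi

def decrypt(cipher_angles, k, theta_n):
    """Undo the secret rotation; the leftover angle (0 or pi) is the bit w_j."""
    leftover = np.mod(cipher_angles - k * theta_n, 2 * np.pi)
    return np.isclose(leftover, np.pi).astype(int)

n, s = 8, 6
theta_n = np.pi / 2 ** (n - 1)
rng = np.random.default_rng(2)
k = rng.integers(0, 2 ** n, size=s)   # Alice's private key (first s entries)
w = rng.integers(0, 2, size=s)        # Bob's codeword
assert np.array_equal(decrypt(encrypt(k, w, theta_n), k, theta_n), w)
```

An adversary, in contrast, sees the cipher qubits and the public-key copies but does not know $k_j$, and thus has to distinguish parallel from antiparallel pairs of states without a reference frame.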
In general, for a given eavesdropping strategy, the probability of successful eavesdropping in a single run of the protocol $P(\textrm{suc}|{\bf k},{\bf w})$ differs from the corresponding probability obtained by averaging over all possible values of ${\bf k}$, i.e., \begin{eqnarray} \bar{P}(\textrm{suc}|{\bf w})&=&\sum_{\bf k}P({\bf k})P(\textrm{suc}|{\bf k},{\bf w})\nonumber\\ &=&\frac{1}{2^{nN}}\sum_{\bf k}P(\textrm{suc}|{\bf k},{\bf w}), \label{P_av_k} \end{eqnarray} where for the last equation we have used the fact that ${\bf k}$ is uniformly distributed over $\{0,1\}^{nN}$. The one-bit message $m$ is encoded at random on one of the $2^{s-1}$ possible $s$-bit codewords with parity $m$ (examples are given in \cite{Nik08,NikIoa09}). Hence, the conditional probability for the codeword ${\bf w}$ to occur, given a particular value of $m\in\{0,1\}$, is $P({\bf w}|m)=2^{-(s-1)}$. However, from the point of view of an adversary, both values of $m\in\{0,1\}$ are equally probable and thus $P({\bf w})= \sum_{m}P({\bf w}|m)2^{-1}=2^{-s}$ i.e., the codewords have a uniform distribution over $\{0,1\}^s$. Therefore, the eavesdropping strategies we are going to discuss are symmetric with respect to all possible codewords \cite{remark2}, and thus we also have $ \bar{P}(\textrm{suc})\equiv 2^{-s}\sum_{\bf w}P(\textrm{suc}|{\bf w})=P(\textrm{suc}|{\bf w}). $ \subsection{Eve's point of view} \label{sec3a} Our first task is to find out how much information Eve may extract from $\tau$ available copies of the $j$th public qubit, and investigate the conditions under which the security of the private key is guaranteed. From Eve's point of view, the state of the $j$th public qubit is uniformly distributed over $\mathbb{H}^{(n)}$, with the corresponding {\em a priori} probability being $2^{-n}$. 
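Eve's limited {\em a priori} knowledge can be made concrete with a small numerical experiment (Python with NumPy; a sketch for modest $n$ and $\tau$, not an efficient implementation): building the $\tau$-copy mixture explicitly, one can check that its von Neumann entropy, and hence any Holevo-type information gain, stays below $\log_2(\tau+1)$ bits, far below the $n$ bits needed to pin down $k_j$.

```python
# Numerical illustration of Eve's a priori description, assuming NumPy.
import numpy as np
from functools import reduce

def rho_prior(n, tau):
    """Uniform mixture of |psi_{k'}>^{(tensor tau)} over all k' in Z_{2^n}."""
    theta_n = np.pi / 2 ** (n - 1)
    rho = np.zeros((2 ** tau, 2 ** tau))
    for kp in range(2 ** n):
        psi = np.array([np.cos(kp * theta_n / 2), np.sin(kp * theta_n / 2)])
        phi = reduce(np.kron, [psi] * tau)        # tau-fold tensor product
        rho += np.outer(phi, phi)
    return rho / 2 ** n

def entropy_bits(rho):
    ev = np.linalg.eigvalsh(rho)
    ev = ev[ev > 1e-12]                           # discard numerical zeros
    return float(-(ev * np.log2(ev)).sum())

n, tau = 6, 4
S = entropy_bits(rho_prior(n, tau))
# Holevo: Eve's information gain per qubit is at most S <= log2(tau+1),
# far below the n bits that specify k_j.
assert S <= np.log2(tau + 1) + 1e-9
```

Each public qubit thus remains essentially undetermined from Eve's perspective, no matter how the $\tau$ copies are processed.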
Hence, the density operator describing the state of $\tau$ copies of the $j$th public qubit is \begin{eqnarray} \label{rho_prior_m} \rho_{j,\rm prior}^{(\tau)}&=& \frac{1}{2^n} \sum_{k_j^\prime=0}^{2^n-1}\left [\ket{\psi_{k_j^\prime} (\theta_n)} \bra{\psi_{k_j^\prime} (\theta_n)}\right ]^{\otimes \tau}\nonumber\\ &=&\frac{1}{2^n} \sum_{k_j^\prime=0}^{2^n-1} \ket{\Phi_{k_j^\prime}^{(\tau)} (\theta_n)} \bra{\Phi_{k_j^\prime}^{(\tau)} (\theta_n)}, \end{eqnarray} where $\ket{\Phi_{k_j^\prime}^{(\tau)} (\theta_n)}:=\ket{\psi_{k_j^\prime} (\theta_n)}^{\otimes\tau}$. In the space of $\tau$-qubit states we have $\tau+1$ different subspaces each of which is spanned by all ${\cal B}(\tau,l)=\binom{\tau}{l}$ eigenstates with the same Hamming weight $l$, i.e. the same number of qubits which are in the state $\ket{1_z}$. Within one of these subspaces, say ${\cal S}_l$, we can define the fully symmetric state \[ \ket{l}=\frac{1}{\sqrt{{\cal B}(\tau,l)}}\sum_{i=1}^{{\cal B}(\tau,l)} \ket{i}_l, \] where the sum runs over all the $\tau$-qubit eigenstates with the same Hamming weight $l$. The problem can be formulated entirely in terms of these $(\tau+1)$ symmetric states $\{\ket{l}:l=0,1,\ldots, \tau\}$ \cite{remark3}. Using Eq. (\ref{public_key_j}), we have \begin{subequations} \label{Psi_tau} \begin{eqnarray} \ket{\Phi_{k_j^\prime}^{(\tau)} (\theta_n)} &=&\sum_{l=0}^{\tau} \sqrt{{\cal B}(\tau,l)}f_{\tau,l}(k_j^\prime \theta_n)\ket{l}, \end{eqnarray} with \begin{equation} f_{\tau,l}(k_j^\prime \theta_n)=\left [\cos\left(\frac{k_j^\prime \theta_{n}}{2}\right)\right ]^{\tau-l} \left[ \sin\left(\frac{k_j^\prime\theta_{n}}{2}\right)\right ]^{l}. \end{equation} \end{subequations} Thus the density operator of Eq.
(\ref{rho_prior_m}) reads \begin{subequations} \label{rho_tau} \begin{eqnarray} \label{rho_pri_m_2} \rho_{j,\rm prior}^{(\tau)}&=& \sum_{l,l^\prime=0}^{\tau} C_{l,l^\prime} \ket{l}\bra{l^\prime} \end{eqnarray} with \begin{eqnarray} \label{C_llp} C_{l,l^\prime}&=& \frac{1}{2^n}\sqrt{{\cal B}(\tau,l){\cal B}(\tau,l^\prime)} \sum_{k_j^\prime=0}^{2^n-1} f_{\tau,l}(k_j^\prime \theta_n)f_{\tau,l^\prime}^\star(k_j^\prime \theta_n).\phantom{aaw} \end{eqnarray} \end{subequations} In Appendix \ref{app1} we provide additional information on the form of the {\em a priori} density operator $\rho_{j,\rm prior}^{(\tau)}$ as well as on some observations regarding its rank and eigenvalues. What we have so far, however, suffices to provide an upper bound on the von Neumann entropy $S[\rho_{j,\rm prior}^{(\tau)}]$ for any values of $\tau$ and $n$. In particular, instead of saying that $\tau$ copies of the $j$th public-key qubit are distributed, we can say that one copy of a larger $(\tau+1)$-dimensional system becomes publicly available. Hence, we have \begin{equation} \label{entropy_pri_1} S[\rho_{j,\rm prior}^{(\tau)}]\leq\log_2(\tau+1). \end{equation} The state described in Eq. (\ref{rho_prior_m}) is a convex ``classical'' mixture of quantum states $\{\ket{\Phi^{(\tau)}_{k_j}(\theta_{n})}\}$ which are distributed with probabilities $p_j=2^{-n}$. Albeit pure, the states $\ket{\Phi^{(\tau)}_{k_j}(\theta_{n})}$ are not mutually orthogonal. As a result the von Neumann entropy of the density operator $\rho_{j,\rm prior}^{(\tau)}$ is strictly smaller than the Shannon entropy of the corresponding probability distribution $H(p_j)=n$ \cite{book3}. The Holevo bound restricts Eve's average information gain $I_{\rm av}$ on the unknown state for $\tau$ copies. In particular, the information gain is upper bounded by $S[\rho_{j,\rm prior}^{(\tau)}]$, and in view of inequality (\ref{entropy_pri_1}) we obtain the result \begin{equation} I_{\rm av}\leq \log_2(\tau+1).
\end{equation} On the other hand, one still needs $n$ bits of information to characterize completely the state of the $j$th qubit (which of course implies knowledge on the private key as well). So, as long as \begin{equation} \label{holevo2} n\gg \log_2(\tau+1), \end{equation} the one-way property of the map (\ref{map}) is guaranteed \cite{remark9}. Thus one can be confident that no matter what strategy Eve may choose, her information on each public-key qubit is very low. Despite the fact that Eve has almost no knowledge about the public key, she may be able to decrypt an encrypted message successfully. This will be demonstrated in the next sections. In closing, we would like to emphasize that in \cite{Nik08,NikIoa09} the symmetries underlying the particular encryption scheme have not been taken into account and thus a larger upper bound on $I_{\rm av}$ was obtained, suggesting that Eve can get up to $\tau$ bits of information from $\tau$ copies of the public key. However, this section demonstrates that the actual upper bound turns out to scale logarithmically with $\tau$, so that secrecy of the private key can be guaranteed already for significantly smaller values of $n$. Intuitively, this originates from the fact that the protocol restricts Eve by construction to the $(\tau+1)$-dimensional subspace of symmetric states for the $\tau$ copies of the $j$th public-key qubit. In Appendix \ref{app1} we provide a tighter upper bound on Eve's information gain based on basic properties of the eigenvalues of $\rho_{j,\rm prior}^{(\tau)}$. \begin{figure*} \includegraphics[scale=0.7]{figure1v2.eps} \caption{ {\em A posteriori} probability distributions [given by Eqs.~(\ref{eq:apost})] for $T=8$ (a-e), $T=9$ (f-h), and various events $\{T_0^{(z)},T_0^{(x)}\}$: (a) $T_0^{(z)}=0$; (b,f) $T_0^{(z)}=2$; (c,g) $T_0^{(z)}=4$; (d,h) $T_0^{(z)}=6$; (e) $T_0^{(z)}=8$.
} \label{fig1} \end{figure*} \subsection{Incoherent Projective Measurements} \label{sec3b} Eve knows that all of the qubit states lie in the $x$--$z$ plane of the Bloch sphere. Thus, she may try to deduce the message by means of projective measurements on the cipherstate as well as on all of the remaining $(T^\prime-1)$ copies of the public key \cite{remark4}. In the following, we assume that each qubit of the public key or of the cipher is measured independently. Indeed, given that the random state of each public-key qubit is chosen independently and that it is distributed uniformly over $\mathbb{H}^{(n)}$, it is reasonable to assume that there are no hidden patterns that Eve can take advantage of by attacking many qubits collectively. One possible strategy for Eve is to obtain an estimate of the public-key state (\ref{public_key}) by measuring half of the public keys in the (eigen)basis $\{\ket{0_z},\ket{1_z}\}$ of the Pauli operator $\hat{\sigma}_{z}$ and the other half in the (eigen)basis $\{\ket{0_x},\ket{1_x}\}$ of the Pauli operator $\hat {\sigma}_x\equiv\ket{0_z}\bra{1_z}+\ket{1_z}\bra{0_z}$. In this way she can obtain an estimate of the $j$th public-qubit state or, equivalently, of its Bloch vector ${\bf R}_j$. It should be emphasized that such an attack essentially aims at the private key which, by construction, is in one-to-one correspondence with the public key. Although condition (\ref{holevo2}) restricts Eve's information gain on the private key to negligible values, it cannot guarantee secrecy of the encrypted message. Hence, in an attempt to reveal the message she can measure the cipherstate in a basis defined by her guess on the corresponding public-qubit state. The main purpose of this section is to analyze this attack.
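A Monte-Carlo sketch of this attack is given below (Python with NumPy; the implementation choices, such as the candidate-angle grid and the log-domain Bayes update, are ours and purely illustrative):

```python
# Monte-Carlo sketch of the measurement attack, assuming NumPy:
# Eve measures T public-key copies in the z basis and T in the x basis,
# forms a Bayesian estimate of the qubit's Bloch vector, and finally
# measures the cipher qubit along that estimated direction.
import numpy as np

rng = np.random.default_rng(3)
n, T, trials = 6, 8, 2000
theta_n = np.pi / 2 ** (n - 1)
angles = np.arange(2 ** n) * theta_n          # all candidate angles k'*theta_n

def p0(angle, beta):
    """Probability of outcome "0" in basis b (beta=0: z basis, beta=1: x basis)."""
    return np.cos(beta * np.pi / 4 - angle / 2) ** 2

avg = 0.0
for _ in range(trials):
    a = rng.integers(0, 2 ** n) * theta_n     # true (unknown) Bloch angle
    T0 = [rng.binomial(T, p0(a, beta)) for beta in (0, 1)]   # z and x counts
    # Bayesian posterior over the candidate angles (uniform prior):
    logpost = sum(T0[b] * np.log(p0(angles, b) + 1e-300)
                  + (T - T0[b]) * np.log(1 - p0(angles, b) + 1e-300)
                  for b in (0, 1))
    post = np.exp(logpost - logpost.max())
    post /= post.sum()
    # Estimated Bloch vector and probability that the final projective
    # measurement on the cipher qubit reveals the codeword bit correctly:
    Rz, Rx = np.sum(post * np.cos(angles)), np.sum(post * np.sin(angles))
    norm = np.hypot(Rz, Rx) + 1e-300
    avg += 0.5 + (Rz * np.cos(a) + Rx * np.sin(a)) / (2 * norm)
avg /= trials
```

The resulting average success probability per codeword bit lies noticeably above one half but remains bounded away from unity, in line with the quantitative analysis that follows.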
Since all public-key qubits are equivalent and independent, let us start by focusing on one of them, i.e., the $j$th qubit which is measured in the basis $b\in\{z,x\}$ with $b=z(x)$ referring to the eigenbasis of the operator $\hat{\sigma}_z(\hat {\sigma}_x)$. The two possible outcomes of these measurements are ``0'' and ``1'' and they occur with probabilities \begin{eqnarray} p_{j,0}^{(b)}(k_j)=\cos^2\left ( \beta\frac{\pi}{4}-\frac{k_j\theta_{n}}2\right ),\quad p_{j,1}^{(b)}(k_j)=1-p_{j,0}^{(b)}(k_j). \end{eqnarray} In this equation, $\beta\in\{0,1\}$ with the correspondences $b=z\rightarrow \beta=0$ and $b=x\rightarrow \beta=1$. Without loss of generality let us also assume that $T^\prime-1=2T$ \cite{remark4}, so that $T$ measurements are performed in each basis $b$. Let $T_0^{(b)}$ denote the number of outcomes ``0'' from measurements in the $b$ basis. In a single run of the protocol Eve obtains a particular set of outcomes $\{T_0^{(z)},T_0^{(x)}\}$ out of $(T+1)^2$ different possible combinations. We will first discuss how much information she can obtain about the public-qubit state (or equivalently the private key). \subsubsection{Information gain on the public-qubit state} \label{sec3b1} The {\em a posteriori} probability for the $j$th qubit state is given by Bayes' law \begin{subequations} \label{eq:apost} \begin{eqnarray} p_j(k_j^\prime|T_0^{(z)},T_0^{(x)})=\frac{q_j(T_0^{(z)},T_0^{(x)}|k_j^\prime)}{2^nq(T_0^{(z)},T_0^{(x)})}. \end{eqnarray} The probability for the outcome $\{T_0^{(z)},T_0^{(x)}\}$ to occur given the input state $\ket{\psi_{k_j^\prime}(\theta_n)}$ is \begin{eqnarray} q_j(T_0^{(z)},T_0^{(x)}|k_j^\prime)&=& \binom{T}{T_0^{(z)}}\binom{T}{T_0^{(x)}} \times\nonumber\\ & &\times\prod_b \left [p_{j,0}^{(b)}(k_j^\prime) \right ]^{T_0^{(b)}} \left [p_{j,1}^{(b)}(k_j^\prime)\right ]^{T-T_0^{(b)}},\nonumber\\ \end{eqnarray} and \begin{eqnarray} q(T_0^{(z)},T_0^{(x)})=\frac{1}{2^n} \sum_{k_j^\prime=0}^{2^{n}-1}q_j(T_0^{(z)},T_0^{(x)}|k_j^\prime).
\end{eqnarray} \end{subequations} A sample of {\em a posteriori} probability distributions is depicted in Fig. \ref{fig1}, for $T=8$, $T=9$, and various events $\{T_0^{(z)},T_0^{(x)}\}$. Different public-qubit states may give rise to a certain combination $\{T_0^{(z)},T_0^{(x)}\}$ albeit with different probabilities. Hence, given a particular combination of ``0'' outcomes in the two bases, the conditional {\em a posteriori} probability distribution exhibits peaks for public-qubit states (as determined by $k_j \theta_{n}$), which are consistent with the particular event under consideration. Eve's information gain is given by the difference of the Shannon entropies of the distributions before and after the measurements, i.e., \begin{eqnarray} I_{\rm av}&=&H_{\textrm{prior}}- \aver{H_{\textrm{post}}}\nonumber\\ &=&n+\sum_{T_0^{(z)}}\sum_{T_0^{(x)}}q(T_0^{(z)},T_0^{(x)})\times\nonumber\\ & &\times \sum_{k_j^\prime=0}^{2^n-1}p_j(k_j^\prime|T_0^{(z)},T_0^{(x)})\log_2[p_j(k_j^\prime|T_0^{(z)},T_0^{(x)})]\phantom{aaa} \end{eqnarray} where we have summed over all possible outcomes for a given state. The entropy of the {\em a priori} uniform probability distribution is equal to the entropy of the private-key integer $k_j$. As depicted in Fig. \ref{fig2}, this information gain is slightly below the Holevo bound of Eq. (\ref{entropy_pri}) for $\tau=2T$, which is tighter than the bound of Eq. (\ref{entropy_pri_1}). It is worth mentioning that although the information gain depends weakly on $n$, the Holevo bound does not. In the subsequent discussion the choices of $n$ and $T$ are such that the inequality (\ref{entropy_pri}) and thus also inequality (\ref{entropy_pri_1}) are satisfied for $\tau=2T$. \begin{figure} \includegraphics[scale=0.33]{figure2.eps} \caption{(Color online) Entropy of the {\em a priori} probability distribution (=entropy of the private key), Holevo bound, and information gain as functions of the number of public-key copies $2T$ that become available.
The value of $n$ affects considerably the {\em a priori} probability distribution. The inset shows the difference between the Holevo bound and the information gain.} \label{fig2} \end{figure} \subsubsection{Probability of correctly guessing the message} As we have seen in the previous subsection, a particular outcome $\{T_0^{(z)},T_0^{(x)}\}$ of a single run of the protocol allows Eve to update her knowledge on the public-qubit state she may have been given. From her point of view the {\em a posteriori} state pertaining to $\tau$ public-key copies is given by \begin{eqnarray} \label{rho_post_m} \rho_{j,\rm post}^{(\tau)}(T_0^{(z)},T_0^{(x)})&=&\sum_{k_j^\prime=0}^{2^n-1}p(k_j^\prime|T_0^{(z)},T_0^{(x)})\times\nonumber\\& &\times \ket{\Phi_{k_j^\prime}^{(\tau)} (\theta_n)} \bra{\Phi_{k_j^\prime}^{(\tau)} (\theta_n)}. \end{eqnarray} Tracing out $\tau-1$ copies, we obtain for the single-copy density operator the expression \begin{eqnarray} \rho_{j,\rm post}^{(1)}=\sum_{k_j^\prime} p(k_j^\prime|T_0^{(z)},T_0^{(x)})\ket{\psi_{k_j^\prime}(\theta_n)}\bra{\psi_{k_j^\prime}(\theta_n)}\label{eqn:rhopost1} \end{eqnarray} and the corresponding (estimated) Bloch vector \begin{eqnarray} \tilde{\bf R}_j&=&\sum_{k_j^\prime} p(k_j^\prime|T_0^{(z)},T_0^{(x)}) [\cos(k_j^\prime\theta_n) \hat{z} + \sin(k_j^\prime \theta_{n}) \hat{x}]\phantom{aaa} \end{eqnarray} with $||\tilde{\bf R}_j||\neq 1$. Recall now that the one-bit message $m$ is encoded in the parity of an $s$-bit codeword ${\bf w}$ which is subsequently encrypted on $s$ public qubits. Let us calculate first Eve's probability to recover the bit $w_j$ in a single run of the protocol by measuring the corresponding cipher qubit in the basis defined by $\tilde{\bf R}_j$. For the particular encryption under consideration (see Sec.
II) her probability of success is $P(\textrm{suc}|w_j, k_j, T_0^{(z)}, T_0^{(x)})=\cos^2(\Omega_j/2)$ with $\Omega_j$ denoting the angle between the actual Bloch vector ${\bf R}_j$ and its estimation $\tilde{\bf R}_j$. Hence, we obtain \begin{eqnarray} P(\textrm{suc}|w_j, k_j, T_0^{(z)}, T_0^{(x)})=\frac{1}2+\frac{\tilde{\bf R}_j\cdot{{\bf R}_j}}{2||\tilde{\bf R}_j||}\quad\nonumber\\ =\frac{1}2+\frac{1}{2||\tilde{\bf R}_j||}\sum_{k_j^\prime} p(k_j^\prime|T_0^{(z)},T_0^{(x)}) \cos[(k_j^\prime-k_j)\theta_{n}] \end{eqnarray} with $ {{\bf R}_j}$ defined in Sec. \ref{sec2}. For a given public-qubit state various outcomes may occur albeit with different probabilities \begin{eqnarray} P(\textrm{suc}|w_j, k_j)=\sum_{T_0^{(z)}} \sum_{ T_0^{(x)}}& P(\textrm{suc}|w_j,k_j, T_0^{(z)}, T_0^{(x)})\times\nonumber\\&\times q( T_0^{(z)}, T_0^{(x)}|k_j). \end{eqnarray} \begin{figure} \includegraphics[scale=0.33]{figure3v2.eps} \caption{(Color online) Conditional probability $P(\textrm{suc}|w_j, k_j)$ for $n=10$ and various values of $T$.} \label{fig3} \end{figure} The typical behavior of $P(\textrm{suc}|w_j, k_j)$ with $k_j$ (or equivalently $k_j \theta_{n}$) is depicted in Fig. \ref{fig3} where we have an oscillation around the mean value \begin{eqnarray} \bar{P}(\textrm{suc}|w_j)=\frac{1}{2^n}\sum_{k_j}P(\textrm{suc}|w_j, k_j). \end{eqnarray} As we increase the number of public-key copies the amplitude of the oscillations becomes smaller and the mean value increases. In particular, we find that for $T>1$ \begin{eqnarray} \bar{P}(\textrm{suc}|w_j)\lesssim 1-\frac{1}{6T}:=U(T). \label{U_of_T} \end{eqnarray} \begin{figure} \includegraphics[scale=0.33]{figure4.eps} \caption{(Color online) Conditional probability $P(\textrm{suc}|w_j)$ for $n=10$ and various values of $T$.} \label{fig4} \end{figure} As depicted in Fig. 
\ref{fig4}, this performance is very close to the {\em optimal} probability of successful state estimation by means of collective measurements \cite{Der98} \begin{eqnarray} \bar{P}_{\rm opt}(\textrm{suc}|w_j)=\frac{1}2+\frac{1}{2^{2T+1}}\sum_{i=0}^{2T-1} \sqrt{\binom{2T}{i}\binom{2T}{i+1}} \end{eqnarray} which scales like \begin{eqnarray} \bar{P}_{\rm opt}(\textrm{suc}|w_j)\sim 1-\frac{1}{8T}. \end{eqnarray} Bagan {\em et al.} \cite{Bag02} have demonstrated that this upper bound can be saturated by means of individual measurements and our attack has similarities to their approach. Finally, for our subsequent discussion it is worth keeping in mind that $\bar{P}(\textrm{suc}|w_j)$ does not depend on the actual value of the bit $w_j$, i.e., $\bar{P}( \mathrm{suc} | w_j=0)=\bar{P}( \mathrm{suc} | w_j=1)$. Up to now our results refer to one bit of the codeword only and our task is to obtain the probability of success in guessing correctly the bit-message $m$ from the $s$-bit codeword ${\bf w}$. Since the message is encoded in the parity of the codeword, Eve succeeds even if she fails to predict correctly $\alpha$ out of $s$ bits, provided $\alpha$ is even. Instead of considering her probability of success in a single run of the protocol, which is a rather complicated task, we concentrate in the following on her probability of success averaged over all possible public-qubit states (or equivalently private keys ${\bf k}$). As depicted in Fig. \ref{fig3}, for large $T$ the amplitude of the oscillations is at least an order of magnitude smaller than the mean. Hence, any conclusions based on the average probability of success are also expected to apply with good accuracy to a single run of the protocol. Since each bit of the codeword is encrypted separately in independently prepared public qubits, the averaging over all possible values ${\bf k}$ is straightforward.
Thus, one obtains for the average probability of successful eavesdropping for a given message $m$ and codeword ${\bf w}$ \begin{subequations} \label{P_s_av1} \begin{eqnarray} \bar{P}_s( \mathrm{suc}|{m},{\bf w}) = \sum_{\substack{\alpha=0\\ {\rm even}}}^s \binom{s}{\alpha}& [1-\bar{P}( \mathrm{suc}| w_j)]^{\alpha} \times \nonumber\\&\times [\bar{P}( \mathrm{suc} | w_j)]^{s-\alpha}.\label{eqn:Ps} \end{eqnarray} Averaging over all possible equally probable codewords and messages, we finally find \begin{eqnarray} \bar{P}_s( \mathrm{suc})=\bar{P}_s( \mathrm{suc}|{m},{\bf w}). \end{eqnarray} \end{subequations} \begin{figure} \includegraphics[scale=0.33]{figure5.eps} \caption{(Color online) Average probability of success $\bar{P}_s(\mathrm{suc})$ as a function of the codeword length $s$, for $n=10$ and various values of $T$. The solid lines are numerical results obtained from Eqs. (\ref{P_s_av1}), whereas the dashed lines are for the upper bound defined in Eq. (\ref{Ubound}).} \label{fig5} \end{figure} In Fig. \ref{fig5}, $\bar{P}_s( \mathrm{suc})$ is depicted as a function of the codeword length $s$ for various numbers of public-key copies (solid lines). Clearly, the average probability of success decreases with increasing $s$, whereas this drop becomes slower and slower as we increase the number of public-key copies. For $T>1$ a rather tight upper bound for $\bar{P}_s( \mathrm{suc})$ is given by the expression \begin{eqnarray} \frac{1}2+\frac{1}2\left ( 1-\frac{1}{3T}\right )^s \label{Ubound} \end{eqnarray} which is also plotted in Fig. \ref{fig5} with dashed lines. A sketch of the proof of this upper bound is provided in Appendix \ref{app2}. Now, let us assume that the users participating in the protocol have agreed in advance on a security parameter $\varepsilon\ll 1$ so that Eve's probability of success $\bar{P}_s( \mathrm{suc})$ has to fulfill the relation $\bar{P}_s( \mathrm{suc})\leq 1/2+\varepsilon$.
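The parity sum in Eq.~(\ref{eqn:Ps}) admits the closed form $[1+(2\bar{P}-1)^s]/2$, a standard identity for the probability of an even number of independent errors. The following snippet (Python, standard library only) checks this identity numerically and recovers the bound (\ref{Ubound}) once $\bar{P}(\mathrm{suc}|w_j)$ is replaced by its upper bound $1-1/(6T)$:

```python
# Numerical check of the parity sum against its closed form.
from math import comb

def parity_success(p, s):
    """Probability that an even number of the s independent bits is wrong,
    each bit being guessed correctly with probability p."""
    return sum(comb(s, a) * (1 - p) ** a * p ** (s - a)
               for a in range(0, s + 1, 2))

for T in (2, 4, 8):
    p = 1 - 1 / (6 * T)                           # upper bound on per-bit success
    for s in (1, 5, 20):
        exact = parity_success(p, s)
        closed = 0.5 + 0.5 * (2 * p - 1) ** s     # closed-form identity
        assert abs(exact - closed) < 1e-12
        # reproduces the bound 1/2 + (1 - 1/(3T))^s / 2:
        assert abs(closed - (0.5 + 0.5 * (1 - 1 / (3 * T)) ** s)) < 1e-12
```

For a given $\varepsilon$, the requirement $\bar{P}_s(\mathrm{suc})\leq 1/2+\varepsilon$ then fixes the minimum admissible codeword length.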
This implies that the message bit $m$ has to be encrypted in \begin{eqnarray} s\geq \left |\frac{1+\log_2(\varepsilon)}{\log_2\left ( \frac{3T-1}{3T}\right )}\right | \label{Lbound1} \end{eqnarray} qubits, which is always fulfilled if \begin{eqnarray} s\geq 3T |1+\log_2(\varepsilon)|. \label{Lbound2} \end{eqnarray} \subsection{Comparison to the forward-search attack} \label{sec3c} The robustness of the present public-key encryption scheme against a forward-search attack based on a symmetry test, in which Eve compares the cipherstate with the public-key state, is discussed in Refs.~\cite{Nik08,NikIoa09}. The symmetry test of Refs.~\cite{Nik08,NikIoa09} takes into account all the copies of the public keys but, in contrast to the attacks discussed here, it requires rather complicated quantum operations and gates, such as Fourier transformations and permutations on large numbers of qubits. Due to the nature of the attack, the probability of successful eavesdropping does not vary from run to run, and the probability for an eavesdropper to deduce the parity of the $s$-bit codeword and hence the message from the cipherstate is given by \cite{NikIoa09} \begin{eqnarray} \bar{P}_s(\textrm{suc})=\frac{1}2+\frac{1}2\left ( 1-\frac{1}{2T}\right )^s. \label{swap} \end{eqnarray} It is rather surprising how close this exact expression is to the upper bound (\ref{Ubound}), which is slightly below the optimal probability of success. For a given security threshold $\varepsilon$ the length of the codeword has to satisfy \begin{eqnarray} s\geq T|1+\log_2(\varepsilon)|, \label{Lbound3} \end{eqnarray} which differs from Eq. (\ref{Lbound2}) by a factor of three only. \subsection{A symmetry-test attack with projective measurements} \label{sec3d} In contrast to the previous attack, we consider here an attack which aims directly at the message rather than the private key, and which makes use of only one copy of the public-key state and the cipherstate.
Eve pairs up the corresponding qubits of the public key and the cipher state, i.e., the $j$th pair pertains to the $j$th qubits. The qubits of the $j$th pair are projected independently onto the same randomly chosen eigenbasis $\{\ket{0_{\varphi_j}},\ket{1_{\varphi_j}}\}$, where \begin{eqnarray} \ket{\zeta_{\varphi_j}}&=&(-1)^{\zeta}\cos\left (\frac{\varphi_j}{2} \right )\ket{0_z}+\sin\left (\frac{\varphi_j}{2}\right )\ket{1_z} \end{eqnarray} and $\varphi_j$ is uniformly distributed over $[0,2\pi)$. The probability of correctly guessing either of the qubits is given by \begin{eqnarray} F(k_j \theta_{n},\varphi_j)\equiv|\olap{\psi_{k_j}(\theta_n)}{\zeta_{\varphi_j}}|^2=\cos^2\left ( \frac{k_j \theta_{n}-\varphi_j}2 \right).\phantom{a} \end{eqnarray} However, since for a fixed value of $k_j$ the angle $\varphi_j$ is chosen at random, we can introduce a new random variable $\omega_{j,n}\equiv k_j \theta_{n}-\varphi_j$, uniformly distributed over the interval $[0,2\pi)$. For later convenience let us also denote the number of wrong outcomes for the $j$th pair by $e_j$, with $0\leq e_j\leq 2$. As discussed in the last paragraph of Sec. \ref{sec2}, the question that Eve has to answer is whether the states of the qubits in the $j$th pair are parallel or antiparallel. She obtains the correct answer if the outcomes of the measurements on the corresponding two qubits are either both correct $(e_j=0)$ or both wrong $(e_j=2)$. Thus, the probability of success in a single run of this protocol is given by \begin{equation} P(\textrm{suc}|w_j,k_j)=[F(\omega_{j,n})]^2+[1-F(\omega_{j,n})]^2. \label{psU_1} \end{equation} If the one-bit message is encoded in the parity of an $s$-bit codeword which is subsequently encrypted on $s$ qubits, Eve's strategy succeeds provided the total number of incorrect outcomes $e=\sum_{j=1}^s e_j$ is an even integer (e.g., see Table \ref{tab:1} for $s=2$). 
The total probability of success in a single run can be obtained by means of an iteration of the form (\ref{Qi}), where $Q^{(s)}$ is a multivariable function, i.e., $Q^{(s)}(\omega_{1,n},\ldots, \omega_{s,n})\equiv P_s(\textrm{suc}|{\bf k}, {\bf w})$. Hence, Eve's probability of success in getting the correct parity, and thus the correct message, consists of two parts pertaining to possible combinations of outcomes from a single pair and the remaining $s-1$ pairs. More precisely, the first term refers to the case where the overall result on $s-1$ pairs as well as the result on the single pair are correct, whereas for the second term Eve has failed in both cases. Given that the probability $P_s(\textrm{suc}|{\bf k}, {\bf w})$ is a function of $s$ uncorrelated random variables $\omega_{j,n}$, its analysis for $s>2$ is rather cumbersome. Nevertheless, it is straightforward to obtain an analytic expression for the average probability of success $\bar{P}_s(\textrm{suc})$ by averaging over all possible keys and codewords, which is equivalent to averaging over all possible combinations of $\{\omega_{j,n}\}$. Along the lines of Appendix \ref{app2} it can be proven that \begin{eqnarray} \bar{P}_s(\textrm{suc})=\frac{1}2+\frac{1}{2^{s+1}}. \label{ind_ps} \end{eqnarray} Again, the average probability of success drops exponentially with increasing values of $s$. In contrast to Eqs. (\ref{Ubound}) and (\ref{swap}), this expression does not depend on $T$, since the attack under consideration uses only one copy of the public key. It is, however, equivalent to the corresponding expression for the forward-search attack, i.e., Eq. (\ref{swap}) for $T=1$. Hence, for a given security threshold $\varepsilon$ the length of the codeword has to satisfy inequality (\ref{Lbound3}) for $T=1$. 
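As a quick numerical sanity check of Eq. (\ref{ind_ps}) outside the manuscript, one can average the single-pair success probability of Eq. (\ref{psU_1}) over $\omega$ for $s=1$; the result should be $1/2+1/2^{2}=3/4$. A minimal illustrative sketch (not part of the protocol itself):

```python
import math

def success_probability(omega):
    # Single-pair success probability, Eq. (psU_1):
    # F^2 + (1 - F)^2 with F = cos^2(omega / 2).
    f = math.cos(omega / 2) ** 2
    return f ** 2 + (1 - f) ** 2

def average_success(samples=100_000):
    # Average over omega uniformly distributed on [0, 2*pi); for this
    # smooth periodic integrand an equispaced sum is essentially exact.
    return sum(success_probability(2 * math.pi * k / samples)
               for k in range(samples)) / samples
```

For $s=1$, `average_success()` returns $0.75$, in agreement with Eq. (\ref{ind_ps}).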
\begin{table}[t] \begin{center} \begin{tabular}{|l|c|c|c|c|c|c|c|c|} \hline public key & t,t & t,f & t,t & t,f & f,t & f,f & f,t & f,f\\ \hline cipher state & t,t & t,f & f,f & f,t & t,f & t,t & f,t & f,f\\ \hline $e_1$,$e_2$ & 0,0 & 0,2 & 1,1 & 1,1 & 1,1 & 1,1 & 2,0 & 2,2\\ \hline $e$ & 0 & 2 & 2 & 2 & 2 & 2 & 2 & 4\\ \hline \end{tabular} \caption{\label{tab:1} Encryption of a single bit on the state of two qubits $(s=2)$. Possible combinations of true (t) and false (f) outcomes that lead to a correct estimation of the message.} \end{center} \end{table} \section{Conclusions} We have analyzed the security of a quantum-public-key encryption (QPKE) scheme that relies on single-qubit rotations. For a given number of public keys, the symmetry underlying the protocol has been shown to restrict considerably the information that an eavesdropper might gain on the private key. This result suggests that new, more efficient QPKE schemes could rely on quantum one-way functions which exploit symmetries in the involved quantum states. It is also worth recalling here the pivotal role of symmetries in quantum-key-distribution protocols, as a result of which qudit-based protocols can tolerate higher error rates than qubit-based ones \cite{qudit}. The robustness of the protocol under consideration was mainly analyzed in the framework of an attack which takes into account all the public-key copies and is based on projective measurements on single qubits. As a main result, it has been shown that the performance of this attack is comparable to the performance of optimal collective measurements \cite{Der98} as well as to the forward-search attack of Ref. \cite{NikIoa09}, which involves rather complicated quantum operations. Variants of the attack are expected to be applicable to other types of QPKE schemes as well. \section*{Acknowledgements} This work is supported by CASED. We are grateful to Joe Renes for useful suggestions and discussions. 
\begin{appendix} \section{Properties of the density operator (\ref{rho_tau}).} \label{app1} As for the matrix elements of the density operator of Eq. \eqref{rho_tau}, we can distinguish two different cases: \\ {\em Case 1:} If $l+l^\prime$ is an even number, the function $f_{\tau,l}(k_j\theta_{n})f_{\tau,l^\prime}^\star(k_j \theta_{n})$ has even parity and does not change sign as we sum over all possible values of $k_j\in\mathbb{Z}_{2^n}$. Hence, we expect a non-zero contribution of $C_{l,l^\prime}$ in this case. {\em Case 2:} If $l+l^\prime$ is an odd number, the element $C_{l,l^\prime}$ vanishes since the parity of the overall trigonometric function in the sum is odd. Another important property of the density operator (\ref{rho_tau}) is that for fixed value of $\tau$ there seems to exist a critical value of $n$, let us say $n_{\rm c}$, for which it is $n$-independent for all $n\geq n_{\rm c}$. Furthermore, we have studied the rank of the density operator as well as the form of its eigenvalues for various values of $n$ and $\tau$. Our simulations show that for fixed $\tau$, $\rm{rank}[\rho_{j,\rm prior}^{(\tau)}]<\tau+1$ for all $n<n_{\rm c}$ and thus the density operator is singular, whereas for $n\geq n_{\rm c}$, $\rm{rank}[\rho_{j,\rm prior}^{(\tau)}]=\tau+1$. The von Neumann entropy of a quantum state is bounded from above by $\log_2(D)$ with $D$ denoting the dimension of the support of the relevant density operator. In view of the hermiticity of $\rho_{j,\rm prior}^{(\tau)}$ we have $D=\textrm{rank}[\rho_{j,\rm prior}^{(\tau)}]$ and thus for a given pair of $(\tau,n)$ the entropy of the density operator is bounded from above by the corresponding entropy for $(\tau,n_{\rm c})$. Hence, we arrive again at the upper bound for the entropy provided in (\ref{entropy_pri_1}). In order to obtain a tighter bound we can investigate eigenvalues of the density operator for $(\tau,n_{\rm c})$. 
Our simulations suggest that in this case the eigenvalues of (\ref{rho_tau}) are given by \begin{equation} \lambda_i=\frac{1}{2^\tau}\binom{\tau}{i}. \end{equation} So, $S[\rho_{j,\rm prior}^{(\tau)}]$ can be calculated as the entropy of the binomial distribution with mean $\tau/2$ and variance $\tau/4$. This entropy is bounded from above by the entropy of the normal (Gaussian) distribution with the same mean and variance \cite{book2}. Thus, we obtain the result \begin{equation} \label{entropy_pri} S[\rho_{j,\rm prior}^{(\tau)}]\leq\frac{1}{2}\log_2(\tau)+\frac{1}{2}\log_2(\pi e/2) \end{equation} and this bound is below the one of (\ref{entropy_pri_1}). Accordingly, the information gain is upper bounded by \begin{equation} I_{\rm av}\leq \frac{1}{2}\log_2(\tau)+\frac{1}{2}\log_2(\pi e/2). \end{equation} \section{Proof of the upper bound (\ref{Ubound}).} \label{app2} The quantity we want to bound from above, i.e., $\bar{P}_s( \mathrm{suc})$, is a monotonically increasing function of $\bar{P}( \mathrm{suc}|w_j)$ for $\bar{P}( \mathrm{suc}|w_j)>1/2$. Thus, in view of (\ref{U_of_T}) we have \begin{eqnarray} \bar{P}_s( \mathrm{suc}) = \sum_{\substack{\alpha=0\\ {\rm even}}}^s \binom{s}{\alpha} [1-\bar{P}( \mathrm{suc}| w_j)]^{\alpha} [\bar{P}( \mathrm{suc} | w_j)]^{s-\alpha}\phantom{aa}\label{ap:eq1}\\ \leq \sum_{\substack{\alpha=0\\ {\rm even}}}^s \binom{s}{\alpha} [1-U(T)]^{\alpha} [U(T)]^{s-\alpha}.\phantom{aa} \label{ap:eq2} \end{eqnarray} Let us denote the r.h.s. of inequality (\ref{ap:eq2}) by $Q^{(s)}(T)$. It can be shown by induction that $Q^{(s)}$ is equal to (\ref{Ubound}). To this end we note that $Q^{(s)}$ can be written alternatively in the form of an iteration, i.e., \begin{eqnarray} Q^{(s)}=Q^{(1)}Q^{(s-1)}+\left [1-Q^{(1)} \right ] \left [1-Q^{(s-1)}\right ]. \label{Qi} \end{eqnarray} For $s=1$ the equality we want to show holds, i.e., we have \begin{eqnarray} Q^{(1)} = U(T)=\frac{1}2+\frac{1}{2}\left ( 1-\frac{1}{3T}\right ) := \frac{1}2+\frac{\lambda}2. 
\end{eqnarray} Assuming that it holds for $s$, i.e. \begin{eqnarray} Q^{(s)}=\frac{1}2+\frac{\lambda^s}2, \label{ap:eq3} \end{eqnarray} we can also prove that it holds for $s+1$, because \begin{eqnarray} Q^{(s+1)} &=& \left(\frac{1}{2} + \frac{\lambda}{2}\right) \left(\frac{1}{2} + \frac{\lambda^s}{2}\right) + \nonumber\\ & &+\left(\frac{1}{2} - \frac{\lambda}{2}\right) \left(\frac{1}{2} - \frac{\lambda^s}{2}\right)\\ &=& \frac{1}{2} + \frac{\lambda^{s+1}}{2}. \end{eqnarray} \end{appendix}
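The induction above can also be cross-checked numerically: iterating Eq. (\ref{Qi}) from $Q^{(1)}=\frac12+\frac{\lambda}{2}$ must reproduce the closed form $\frac12+\frac{\lambda^s}{2}$ for every $s$. A small illustrative script (with $\lambda=1-\frac{1}{3T}$; the choice $T=5$ below is just an example):

```python
def q_closed(s, lam):
    # Closed form established by the induction: Q^(s) = 1/2 + lam^s / 2.
    return 0.5 + lam ** s / 2

def q_iterated(s, lam):
    # Direct iteration of Eq. (Qi):
    # Q^(s) = Q^(1) Q^(s-1) + (1 - Q^(1)) (1 - Q^(s-1)),
    # starting from Q^(1) = 1/2 + lam / 2.
    q1 = 0.5 + lam / 2
    q = q1
    for _ in range(s - 1):
        q = q1 * q + (1 - q1) * (1 - q)
    return q

# Example: T = 5 public-key copies, i.e. lam = 1 - 1/15.
```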
What we learned from our top 3 tweets Q1 2018

Each business quarter, we look at our most popular tweets over the time period. It's a good opportunity to see what kind of content catches attention and to reflect on how we can best meet the needs of our followers and, of course, our customers and patients. Please bear in mind these "top" tweets are based on engagement numbers, as opposed to a straight-up number of eyeballs on them. In essence, these are tweets people interacted with by liking, replying, or retweeting.

The first quarter of 2018 gave us compelling feedback as to how business news concerning our company attracts significant attention and engagement. The top tweet from Q1 2018 linked to a news release indicating that Express Scripts Canada had landed a five-year contract from Health Canada to process benefit claims for Canada's Indigenous people. Even without a hashtag, the news definitely got a good amount of attention and sharing among users.

Read: Indigenous Health Issues in Canada

The Health Information and Claims Processing contract involves millions of pharmacy, dental, vision, mental health and medical supplies/equipment claims. The awarding of the contract reflects Express Scripts Canada's competitive bid, which was comprehensive, compliant and innovative. Given that this contract is one of the largest of its kind in the country, it's no surprise that users shared the news. If we're looking for a trend as to why certain content captures engagement, this tweet shows us that practical, everyday health issues are a sure winner.

Our "Ask the Pharmacist" series is building engagement momentum with every new episode, so it's great to see this offering from late February making some engagement noise. Its popularity speaks to the subject matter and the format: a quick, digestible video wherein an Express Scripts Canada pharmacist fields a common clinical question from a patient.
For those who haven't exercised in a long time, it's critical to discuss any new plans with a health care provider before getting started. Pharmacists are well-placed to discuss how a new exercise regimen can affect one's medication.

Read: Your new year's resolution to start working out could do more than just trim your waistline

The three hashtags broaden the tweet's amplification, which partly explains its engagement success. Even still, there's usually an engagement story behind tweets that home in on specific health-related questions that are answered succinctly in video format by an expert.

Finally, our third-highest engagement metric from Q1 2018 came first thing in the new year with an article in Calgary's Child magazine written by Express Scripts Canada pharmacist Edwin Ho. Edwin offered tips and strategies on how children can ensure they are using prescription products safely and correctly while away from home. Covering topics such as the importance of open communication, proper storage, and safety, these practical matters speak directly to parents whose children use medication to manage a health condition. The takeaway is clear: practical, instructional content from a reliable source is usually quite share-worthy.

See you again in Q2 2018!
Bullard: Fed is Awaiting More Data Before Taking Action

Published September 01, 2012, FOXBusiness

James Bullard: The Job Situation is Not Good. Federal Reserve Bank of St. Louis President James Bullard in Jackson Hole, Wyo., on the outlook for job growth and inflation.

(Updated with comments from Dennis Lockhart, president of the Federal Reserve Bank of Atlanta.)

A top Federal Reserve official said Friday that while the central bank is leaning toward more stimulus for the economy, its policy is in "limbo" until members get more concrete data.

"I think we had this long string of bad news, sort of, through the first or second quarter of the year. But I think that's turned around a little bit" since the last meeting of the Fed's policy-setting body, the Federal Open Market Committee, on July 31 and Aug. 1, said James Bullard, president of the Federal Reserve Bank of St. Louis. "So that's leaving the committee in a little bit of a limbo."

"We do have an easing bias," he said in an interview with FOX Business at the annual central bankers' conference in Jackson Hole, Wyo., sponsored by the Federal Reserve Bank of Kansas City. "But whether to take action--or if we do, then exactly what action to take--I think those are all unclear."

Since the last FOMC meeting, the government has reported stronger-than-expected job growth, 163,000 new positions in July; a slight upward revision to second-quarter economic growth, to 1.7%; and a pickup in personal income and spending, among other indicators. Economists consider the jobs report for August, to be released this Friday, the most important economic indicator for the next FOMC meeting on Sept. 12-13.

The Fed has already announced at least one possible additional stimulus move for its agenda at that meeting: whether to extend its low-rate pledge for a longer period.
Frustrated by the slow pace of the economic recovery and continuing high unemployment, the FOMC announced in January that it intended to keep short-term rates "exceptionally low" through late 2014. Previously, it had approved keeping them low through 2013. Supporters of this communications strategy, including Fed Chairman Ben Bernanke, hope it will help stimulate stronger economic growth. "To the extent we can communicate that interest rates will be lower for longer, that will ease financial conditions and be a way we can affect the state of the economy," Bernanke said at a press conference in January. The Fed wants "accommodative financial conditions," he said, "so that it's attractive to firms to invest and hire, attractive for those who are eligible to buy homes, and so on." In a note to clients, RBC Capital Markets predicted the FOMC will approve extending the so-called "forward guidance" next month: "In the end, changing the language is a low cost policy response so why wouldn't you like it?" The firm added that "come the September meeting, 2014 is likely to shift to 2015." Bullard suggested he would support a change. "I haven't liked the calendar date," he said. "But if we're going to have the calendar date, I think you do have to move it when the economic situation changes. And so, you know, that's where we are." Dennis Lockhart, president of the Federal Reserve Bank of Atlanta, was also open to a shift. "I do think the outlook is weak enough that certainly there could be a case for extending that," he said. "It will be based on the (economic) outlook and...so there is a certain degree of science involved...You have to base this on something and so we've put together, as scientifically as we can, a forecast. And if it indicates that the conditions for beginning to raise (interest) rates are not going to prevail as early as, say, 2014, then you sort of fall into that decision."
4,622
Add mechanisms to limit the subscribe per consumer to avoid high network bandwidth usage. By default, the limit mechanisms is close. Can enable by broker.conf and namespaces policy. Add mechanisms to limit consumer subscribe times in a period. times is over more than max subscribe times in a period.
{ "redpajama_set_name": "RedPajamaC4" }
8,942
{"url":"https:\/\/fivethirtyeight.com\/features\/how-fast-can-you-type-a-million-letters\/","text":"ABC News\nHow Fast Can You Type A Million Letters?\n\nWelcome to The Riddler. Every week, I offer up problems related to the things we hold dear around here: math, logic and probability. There are two types: Riddler Express for those of you who want something bite-size and Riddler Classic for those of you in the slow-puzzle movement. Submit a correct answer for either,1 and you may get a shoutout in next week\u2019s column. If you need a hint or have a favorite puzzle collecting dust in your attic, find me on Twitter.\n\n## Riddler Express\n\nFrom Roman Lee and George Yan, a problem in which nature calls:\n\nSome number, N, of people need to pee, and there is some number, M, of urinals in a row in a men\u2019s room. The people always follow a rule for which urinal they select: The first person goes to one on either far end of the row, and the rest try to maximize the number of urinals between them and any other person. So the second person will go on the other far end, the third person in the middle, and so on. They continue to occupy the urinals until one person would have to go directly next to another person, at which point that person decides not to go at all.\n\nWhat\u2019s the minimum number, M, of urinals required to accommodate all the N people at the same time?\n\n## Riddler Classic\n\nFrom Brendan Hill, an open-ended, experimental and dare-I-say creative optimization puzzle, on a topic about which I myself have often pondered in my idler moments:\n\nWhat is the fastest way to fill up a text editor with a string of 1 million of the same character? (Let\u2019s go with the letter \u201ci\u201d.)\n\nThere are a lot of variables here. You can type \u201ci\u2019s\u201d at a certain rate, maybe around five per second, by simply pressing its key repeatedly. 
You can also hold down the key, initially getting a single \u201ci,\u201d and then after a \u201crepeat delay\u201d of about half a second, getting a quickly repeating stream of \u201ci\u2019s\u201d at a \u201crepeat rate\u201d of about 30 per second. You can also use copy and paste. If you release the \u201ci\u201d key, you can hit Ctrl+A then Ctrl+C, then hit the right arrow key, and finally Ctrl+V, selecting all your text, copying it and pasting it to what you had already. (Replace Ctrl with Command if on a Mac, of course.) This process costs you about a second from \u201ci\u201d key release to initial depress of Ctrl+V. If you hold down Ctrl+V, there is the same repeat delay and repeat rate that then generates a bunch of copies of your clipboard very quickly.\n\nSo the questions are: How big should you make the original edition of your clipboard before you transition to the more efficient copy\/paste? Then, how long should you stick with that clipboard before going back to the Ctrl+A and growing your clipboard again?\n\n(There may be no single right answer here, but Bonus Riddler Points will be awarded for mathematical and empirical insight. Extra Special Bonus Riddler Points \u2014 and who knows, maybe a copy of a book \u2014 will be awarded for video evidence of the quickest 1 million \u201ci\u2019s\u201d to populate a solver\u2019s text editor.)\n\n## Solution to last week\u2019s Riddler Express\n\nCongratulations to \ud83d\udc4f Shawn Cooke \ud83d\udc4f of Glen Allen, Virginia, winner of last week\u2019s Riddler Express!\n\nSuppose you have N circles, all of which are joined so that their centers lie on a larger circle. What is the ratio of the diameter of the larger circle to the diameter of the smaller circles?\n\nThink of the centers of the circles as forming an n-gon. 
We can then draw the following figure, where $$r_1$$ is the radius of one of the smaller circles, $$r_2$$ is the radius of the larger circle, and $$\\theta$$ is the angle shown:2\n\nThe ratio is $$1\/\\sin(\\pi\/n)$$.\n\nWe can then calculate that $$\\theta = (2\\pi)\/(2n)$$ \u2014 there are $$2\\pi$$ radians in a circle and $$2n$$ angles like $$\\theta$$ that make up this bigger circle. We also know that $$\\sin \\theta = r_1\/r_2$$ \u2014 this is just the definition of sine. Equivalently, to get an answer in terms of the circles\u2019 diameters, as the puzzle asked for, $$\\sin \\theta = d_1\/d_2$$, where d is diameter. Finally, combining these mathematical facts, $$d_2\/d_1 = 1\/\\sin(\\pi\/n)$$, our answer!\n\n## Solution to last week\u2019s Riddler Classic\n\nCongratulations to \ud83d\udc4f Jared Nielsen \ud83d\udc4f of Provo, Utah, winner of last week\u2019s Riddler Classic!\n\nLast week, we found ourselves in the situation depicted below, on a gridded system of streets, standing at point A and wanting to meet a mutual friend at point B as quickly as possible. We couldn\u2019t jaywalk, could cross only one street at each intersection and didn\u2019t know anything about the timing of the lights at the other intersections, save for the fact that their cycles were the same length \u2014 T. What should we have done?\n\nDespite our northbound \u201cWalk\u201d signal, we probably should\u2019ve waited a second and walked east! Specifically, as long as the length of the signals\u2019 cycles is at least 4 seconds, we do better going east. Pedestrian discretion is the better part of pedestrian valor.\n\nWhy? The solution has to do with the trade-off between a second saved and crossing options earned. The details of the rest of the solution are adapted from this puzzle\u2019s submitter, Ben Wiener.\n\nLet\u2019s say, for the sake of comparison, that the two of us split up. You go north immediately while I wait for a second and then go east. 
Here\u2019s an example of what might happen: At your next intersection, you can only go east, so you have no choice but to wait 3 seconds for the light to change. The intersection after that, you need to go east again. You wait an excruciating five seconds. I, on the other hand, lost one second waiting for the light to change \u2014 but I\u2019m not worried. As I arrive at the next intersection, I smile to myself and immediately walk whichever way the light allows \u2014 east in this case. At the next intersection, I wait a modest four seconds for the light to allow me to go north. One block later, I arrive first at the intersection, three precious seconds before you do. I high-five our mutual friend. You miss out.\n\nWhat happened here? The only signal we know about is the one we\u2019re standing at. We know that we can go north immediately, but we have to wait one second to go east, so call that $$\\tau_{north}=0$$ seconds and $$\\tau_{east}=1$$ second. We don\u2019t know anything about the other lights, but we can try to figure out some average behavior. First, let\u2019s label the intersections as shown below:\n\nTo find the average wait time, let\u2019s plot out the system. Say each walk signal cycle lasts a time T. The x-axis on the plot below is the time you arrive at the light, marked with the moments when the signals change. The y-axis is the resulting wait time to go in each direction. If you know nothing about the state of an intersection and you want to go east, you could get lucky and arrive when the light allows you to go east. But you could also arrive and have to wait T seconds. There is a 1-in-2 chance of catching a walk signal. There is an average wait time of T\/2 if you catch a Don\u2019t Walk signal.3 Overall, the average time you expect to wait is $$\\langle t \\rangle=T\/4$$.\n\nWe can apply this to the intersections in question. At the intersection (1,0), we must go east. 
We could get lucky and be able to go immediately, but we expect to wait an average of T\/4. We\u2019ll denote this $$\\langle t_{(1,0)} \\rangle=T\/4$$. The situation at (0,1) is the same: $$\\langle t_{(0,1)} \\rangle=T\/4$$. At (2,0), you have to wait the same average time as at (1,0) twice, for a total wait time of $$\\langle t_{(2,0)} \\rangle=2T\/4$$ from (2,0) to the finish line.\n\nThings are different at (1,1). Here, you get to make a decision. You can immediately go to either (0,1) or (1,0) \u2014 whichever the light allows. Because you get to choose, the intersection at (1,1) is free. That means a total wait time of only $$\\langle t_{(1,1)} \\rangle=T\/4$$ from here. That\u2019s the value of having multiple options!\n\nOur problem is almost solved. At (2,1), we know our options now. We can immediately go north to (2,0), where we expect to wait T\/2, or we can wait one second and go to (1,1), where we expect to wait T\/4. In other words, $$\\langle t_{north} \\rangle=T\/2$$ and $$\\langle t_{east} \\rangle=(T\/4) + 1$$. As long as T is longer than 4 seconds, you should go east. In general, you should go east unless your initial wait time is big relative to the lights\u2019 cycle, that is $$\\tau_{east}>T\/4$$.\n\n## Want to submit a riddle?\n\nEmail me at oliver.roeder@fivethirtyeight.com.\n\n## Footnotes\n\n1. Important small print: For you to be eligible, I need to receive your correct answer before 11:59 p.m. Eastern time on Sunday. Have a great weekend!\n\n2. A 12-sided n-gon is pictured, but this could be extended to any number of sides.\n\n3. The average is T\/2 because the time you have to wait, if you have to wait, is uniformly distributed between 0 and T.\n\nOliver Roeder was a senior writer for FiveThirtyEight. He holds a Ph.D. 
in economics from the University of Texas at Austin, where he studied game theory and political competition.","date":"2022-10-06 01:46:39","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 0, \"mathjax_display_tex\": 1, \"mathjax_asciimath\": 0, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.4716533422470093, \"perplexity\": 846.4933977360208}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.3, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2022-40\/segments\/1664030337680.35\/warc\/CC-MAIN-20221005234659-20221006024659-00580.warc.gz\"}"}
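The Riddler Express urinal puzzle above invites a brute-force check. The sketch below simulates the greedy rule directly; it assumes the first person takes the leftmost end and that ties in the maximize-the-gap rule are broken toward the left, neither of which the puzzle pins down.

```python
def everyone_fits(n_people, m_urinals):
    """True if all n_people can go under the greedy urinal rule."""
    occupied = []
    for _ in range(n_people):
        if not occupied:
            occupied.append(0)  # first person picks the (left) far end
            continue
        # Later arrivals maximize the distance to the nearest occupied
        # urinal, breaking ties by taking the leftmost candidate.
        best_spot, best_gap = None, 0
        for spot in range(m_urinals):
            if spot in occupied:
                continue
            gap = min(abs(spot - taken) for taken in occupied)
            if gap > best_gap:
                best_spot, best_gap = spot, gap
        if best_gap < 2:
            return False  # would stand directly next to someone
        occupied.append(best_spot)
    return True

def min_urinals(n_people):
    """Smallest M accommodating all N people at the same time."""
    m = n_people
    while not everyone_fits(n_people, m):
        m += 1
    return m
```

Small cases worked by hand agree with the simulation: two people need three urinals and three people need five.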
Q: Customizing input validation in swagger ui

We are exposing swagger-ui from our webserver using the Swashbuckle package working on top of ASP.NET Core. We are hitting an input validation issue for our Guid input fields. The GUIDs we are pasting in, which are read from other parts of the system, are formatted as deb83f8a3edc4b78a2ece3321f81b58b; note the missing dashes. The input validation rejects this, as it expects dashes in the format (so it accepts deb83f8a-3edc-4b78-a2ec-e3321f81b58b).

The swagger document that we serve has the parameter as type: string and format: uuid. It calls some internal validationGuid routine that has a regex that forces the dashes. From the browser console, it seems like it is looking for a component called JsonSchema_string_uuid but is not finding it.

So my question is: how can I extend swagger-ui to override the validation of specific parameter types/formats?

UPDATE: I was made aware of the RFC that specifies a UUID as containing dashes, and of a workaround. However, I'm still interested in learning about ways to customize the validation of both custom formats and specifically uuid.

A: While the OpenAPI Specification and JSON Schema do not currently define format: uuid, RFC 4122 defines UUID as containing dashes, and some comments in the OpenAPI repository suggest that format: uuid is supposed to follow RFC 4122. This means your example without dashes is most likely not format: uuid.

Consider replacing format: uuid with pattern: '^[A-Fa-f0-9]{32}$'.
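A quick way to double-check that suggested pattern is to run the dashed and dashless examples from the question against it (Python is used here purely for convenience; the pattern itself is what goes into the OpenAPI document):

```python
import re

# Pattern suggested above: exactly 32 hex digits, no dashes.
dashless_guid = re.compile(r'^[A-Fa-f0-9]{32}$')

# The value pasted from the rest of the system matches...
assert dashless_guid.match('deb83f8a3edc4b78a2ece3321f81b58b')
# ...while the canonical dashed RFC 4122 form does not.
assert not dashless_guid.match('deb83f8a-3edc-4b78-a2ec-e3321f81b58b')
```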
Q: How do I boot to console with Refind?

I am using Refind to boot into Ubuntu 13.04 on a Macbook Pro (8,2). However, I seem to have messed up my install to the point where Ubuntu no longer boots (I get a flashing underscore but no text entry abilities after a brief flash of the Ubuntu graphic splash screen). Anyway, I'd like to boot into the console or recovery mode or something in order to diagnose/fix the issue. But with Refind, the only options I am given do not seem to work. Low Graphics mode almost works, except I don't have a mouse and no keyboard controls seem to work, so I can't click through options. The other options never actually drop me at a prompt. (I tagged the question with Refit, but I am actually using Refind).

A: If you're booting kernels directly with rEFInd, you can hit F2 or Insert to see a list of boot options laid out in the /boot/refind_linux.conf file. If none of them work, you can highlight one and hit F2 or Insert again to open a line editor that enables you to edit the options passed to the kernel. If you're booting from rEFInd into another boot loader, such as GRUB, you'll need to use that boot loader to pass options to the kernel. In GRUB, you'd do this by selecting the kernel from the menu and hitting the "e" key to edit the options.

Either way, I can't say precisely what options you should pass, because it's not clear what's wrong with your system or what options might help. As a starting point, single will boot into single-user mode, which should disable the GUI and enable you to fix things if you're sufficiently skilled with text-mode commands. (This is one of the options that rEFInd will probably provide by default, although this depends on the /boot/refind_linux.conf file.) The nomodeset option will get past some common video problems. If you need more help, I recommend you post more details, such as precisely what is and is not working.
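For reference, /boot/refind_linux.conf is just a plain-text list of boot entries, one per line, each a quoted label followed by the quoted kernel options. A sketch covering the options mentioned above (the root device is illustrative; substitute your own partition):

```
"Boot with standard options"  "ro root=/dev/sda2"
"Boot to single-user mode"    "ro root=/dev/sda2 single"
"Boot with minimal graphics"  "ro root=/dev/sda2 nomodeset"
```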
West Ham are desperate to add attacking signings to their squad next season, but have so far been frustrated in their attempts to lure the likes of Michy Batshuayi and Alexandre Lacazette to the club. The latest reports indicate they may have shifted targets, and they may be more successful this time around.

With Batshuayi joining Chelsea and Lacazette rejecting a switch to the Hammers, Mario Gomez may be a more realistic candidate to improve the spearhead of West Ham's attack. Gomez is still officially a Fiorentina player, although he spent last season on loan at Besiktas. Besiktas are desperate to sign Gomez permanently, though, after he scored 27 goals in 33 games during his loan spell. It is this form which earned his place in Germany's squad for Euro 2016. Gomez is a seasoned international, scoring 29 goals from 68 caps.

Despite playing in the Turkish league, a division that can be considered easier to score in, Gomez's goalscoring rate was phenomenal last season, at 0.93 goals per 90 minutes. On top of the 27 goals, he notched up 4 assists and made 23 key passes that season, statistics that highlight his ability to create as well as score goals. Gomez is certainly not the most mobile striker, though, and much as they do when Andy Carroll plays, West Ham would need to play to his strengths in order to get the best out of the German. Whether Gomez would adapt to England is a gamble.

Gomez would also be an awful lot cheaper than the other striking options West Ham have pursued so far this summer, with a valuation of around £5 million mooted. This could allow more budget for Slaven Bilic to spend elsewhere.

West Ham are also favourites to sign Yannick Bolasie, although Manchester United and Liverpool are also interested, according to The Star.
Bolasie could be available this summer, with the arrival of Andros Townsend at Crystal Palace already providing the club with a replacement. Bolasie, 27, has previously been a target for Tottenham, and it is thought that the player would like to move to a club with a more realistic chance of finishing in the top four next season. Despite Pardew's ambition, it is likely Bolasie will push for a move this summer, and West Ham could be the ideal destination, with the player thought to be keen to stay in London.

Bolasie's strengths are clear. He managed an impressive 3.25 successful take-ons per 90 minutes last season, and that pace and directness will certainly be helpful for the Hammers, who sometimes lacked attacking quality last season. Bolasie scores on average one goal per five games, which makes him a decent goalscoring option, but not the most prolific. He would play out wide for West Ham if he signed and would probably start for the Hammers, something that Liverpool or Manchester United are unlikely to offer the 27-year-old.