Five Ways to Prevent Drowsy Driving Accidents
Davis, Saperstein & Salomon, P.C. | August 28, 2015 | Car Accidents

Drowsy, or fatigued, driving is sometimes compared to driving while impaired by alcohol. This is because sleepiness slows a person's reaction time. It decreases awareness, impairs judgment and increases the probability of the drowsy driver getting into a serious car accident.

In New Jersey, a drowsy driver who gets into an accident may be considered by law to be driving recklessly in the same manner as an intoxicated driver. In fact, our state's vehicular homicide law, N.J. Rev. Stat. § 2C:11-5 (2013), addresses drowsy driving first, before drunk, drugged or distracted driving. Under the law, "Proof that the defendant fell asleep while driving or was driving after having been without sleep for a period in excess of 24 consecutive hours may give rise to an inference that the defendant was driving recklessly."

In 2003, then-New Jersey Governor James E. McGreevey signed the law that allows prosecutors to charge a sleep-deprived driver with vehicular homicide. You may remember that it was called "Maggie's Law," for 20-year-old Maggie McDonnell, a college student who died in 1997 after a vehicle driven by Michael Coleman swerved across three lanes and hit her car. Coleman told authorities he had not slept for 30 hours. As The Associated Press explained at the time, Coleman was ultimately cited for reckless driving and fined $200, the maximum penalty then available. Vehicular homicide, by contrast, is punishable by up to 10 years in prison and a $100,000 fine.

Who Is Most Likely to Cause a Drowsy Driving Crash?

According to the National Highway Traffic Safety Administration (NHTSA), drowsy driving caused about 72,000 crashes, 800 fatalities and 44,000 injuries in one recent year. However, it is likely that many more drowsy driving crashes occurred; the NHTSA says that fatigued driving is underreported as a cause of crashes.

As you might expect, the National Sleep Foundation (NSF) has an intense interest in drowsy driving. The comparison of drowsiness to alcohol impairment mentioned above comes from the NSF's DrowsyDriving.org website. The NSF states that the people most at risk of driving while fatigued and crashing are:

- Young men up to age 26
- Shift workers (especially those who work night shifts or long shifts)
- Commercial drivers (particularly truck and bus drivers)
- People with sleep disorders or with short-term or chronic sleep deprivation
- Drivers traveling long distances without proper rest breaks
- Drivers traveling alone or on long, rural, dark or boring roads

Six or fewer hours of sleep in a 24-hour period triples your risk of a crash, according to the NSF.

Ways to Avoid Fatigue and a Potential Drowsy Driving Accident

You don't have to make the potentially fatal mistake of driving while you are too fatigued or drowsy to be behind the wheel. Instead, you can follow these five rules:

1. Get adequate sleep.

2. Stop and rest. If you are making a trip, take a break about every 100 miles or every two hours. Any time you feel tired, find a rest stop or somewhere else safe to pull over and rest. A nap of 15 to 20 minutes can provide a short-term boost in alertness and performance. Caffeine can also provide a short-term stimulus. A study at the University of Washington found that crash risk was lower for drivers who used a highway rest stop, drank coffee within the previous two hours or played a radio while driving. But napping and coffee are not substitutes for a full night's sleep.

3. Don't ride alone. A companion can help to keep you awake by engaging you in conversation or sharing driving duties. Talking to a companion breaks the monotony of a long drive or one at night. A passenger can also take over, or urge you to stop for a rest, upon recognizing warning signs of fatigue such as:
- Yawning repeatedly
- Rubbing your eyes
- Difficulty holding your head up
- Drifting from your lane

4. Avoid alcohol and sedating medications. Don't actively increase your drowsiness. Check with your doctor, or read the information provided with your medication, about any side effects that could cause drowsiness. Don't forget about over-the-counter (OTC) medications and their potential effect on driving. Among the common types that can cause drowsiness are:
- Analgesics / nonsteroidal anti-inflammatory drugs (NSAIDs)
- Antimotility agents (used to alleviate the symptoms of diarrhea)

5. Don't drive. It is really that simple. If you can avoid a trip or delay it until you get rest, do so. If it is a business meeting, maybe it can be conducted by Skype, a Google Hangout or another video chat or conference program. Driving is not always the answer, especially when you have not had proper sleep.
{ "redpajama_set_name": "RedPajamaCommonCrawl" }
Clovis Charles Victor, Prince of Hohenlohe-Schillingsfürst (German: Chlodwig Carl Viktor Fürst zu Hohenlohe-Schillingsfürst, Prinz von Ratibor und von Corvey), born in March 1819 at Rotenburg an der Fulda and died in July 1901 at Ragaz, was a German statesman. He served, among other offices, as Minister-President of Bavaria in Munich, Vice-President of the Reichstag in Berlin, German ambassador in Paris, Statthalter of Alsace-Lorraine in Strasbourg, and Imperial Chancellor.

Biography

A despoiled family

Clovis von Hohenlohe-Schillingsfürst was a member of the House of Hohenlohe, which had reigned over the principality of Hohenlohe-Schillingsfürst, one of the roughly four hundred petty sovereign states of the Holy Roman Empire. The conquest of Germany by Napoleon, Emperor of the French, brought about the fall of the almost thousand-year-old empire and the reorganization of its territory. Napoleon I established the Confederation of the Rhine and proclaimed himself its protector. A great number of principalities were "mediatized". Such was the fate of the principality of Hohenlohe-Schillingsfürst, which was forcibly annexed to Bavaria, itself then elevated to a kingdom (the eldest daughter of the King of Bavaria marrying the adopted son of the Emperor of the French). By way of compensation, the Hohenlohes, like their companions in misfortune, received a hereditary seat in the upper chamber of the newly created Kingdom of Bavaria and retained the rank and prerogatives of sovereign princes. Under such conditions, the Hohenlohe-Schillingsfürsts felt very little Bavarian. Victor (1818-1893), Clovis's elder brother, preferred to enter the service of Prussia, where, thanks to his uncle, Landgrave Victor Amadeus of Hesse-Rheinfels-Rotenburg, he received the Prussian titles of Duke of Ratibor and Prince of Corvey on condition that he renounce his Bavarian titles. He gave them up without difficulty, and Clovis was invested with them and took up the title.

A façade of Catholicism

The Catholicism of Clovis and his brothers was likewise rather lukewarm, their mother being Protestant. The third brother, Gustav Adolf (1823-1896), whose very first name is telling, entered holy orders and even became a cardinal, yet his opposition to the Jesuits and his hostility to the dogma of papal infallibility caused him to fall into disgrace with Pius IX, and Leo XIII never restored him to favor. "My father, although he was a believer in his own way..." wrote Alexander von Hohenlohe-Schillingsfürst of his father's religious convictions. After studying law, Clovis too began a Prussian career; he then returned to Munich, where he worked for the King of Prussia, which in his case proved very profitable.

Family life

Clovis married Marie (1819-1897), daughter of [...], who brought him as her inheritance her Russian estates and Mir Castle, and who also owned the château de Kerléon at Le Relecq-Kerhuon in Finistère (France). They had at least one daughter and three sons: Elisabeth, born in 1847; the eldest son, born in 1853; Alexander (1862-1924); and a third son.

Minister-President of Bavaria

After the defeat at Sadowa (1866), Ludwig II of Bavaria was compelled to call Hohenlohe to the Bavarian ministry. He became Minister-President and worked for German unity under Prussian leadership. Alexander von Hohenlohe claimed that his father never neglected to defend the interests of "his little homeland, Bavaria," but this is hard to believe. Hohenlohe's opposition to the dogma of papal infallibility (1870) provoked a clerical reaction among the voters. Outvoted in the lower chamber, he had to resign. But that very year the Franco-Prussian War broke out, and its consequence was German unity; as the price of his services, Hohenlohe received from the new German Emperor Wilhelm first a vice-presidency of the new imperial Reichstag, and then, in 1874, the post of ambassador in Paris.

Ambassador in Paris

Such a post, one of the most coveted, was a consecration for this accomplished man of the world. The mission was certainly difficult in the aftermath of the War of 1870, but he handled it skillfully. He would gladly have remained in that position, but Bismarck, knowing how fully he could trust him, called him to another delicate post: the office of Statthalter (governor-general) of Alsace-Lorraine. He worked there conscientiously and methodically but failed to win the confidence of those he governed, who missed his predecessor, Manteuffel. In 1894, at the age of seventy-five, he was called to the chancellorship by Wilhelm II after the fall of Caprivi.

Imperial Chancellor

The sovereign's choice seemed to offer many advantages: Hohenlohe was absolutely loyal to the Hohenzollerns; he was a personal friend of Bismarck, so the Bismarckian press, which had raged against Caprivi, might not dare to be as violent toward his successor; and the Prussian Junkers, who had to be reckoned with, would show more respect toward the representative of an illustrious family. But it was a very heavy burden for a man already old and tired, whose policy was, moreover, hampered by the impetuosity of the young emperor. He managed to hold on for six years, during which he tried to temper the emperor's initiatives, but in the end he too fell into disgrace. He stepped down and died less than a year later.

A "prince civil servant"

Others have seen in him a model civil servant. He was indeed capable of carrying out somewhat routine work conscientiously. During his embassy in Paris he was entirely in his element, knowing how to give grand dinners which Adolphe Thiers, once in opposition, attended. Bismarck charged him with preventing a monarchist restoration in France, toward which his predecessor Arnim had worked in the opposite direction, following in that the emperor's wishes; but he did not have to intrigue much on this point, since the "Count of Chambord" and the deputies could not reach agreement on the question of the national flag. When Bismarck called him to Berlin in 1880 to run the Foreign Office on an interim basis, Hohenlohe could not bear so heavy a workload, fell ill, and had to return to Paris.

Decorations

Order of the Black Eagle
{ "redpajama_set_name": "RedPajamaWikipedia" }
DA: Driver in Cape crash that killed new father was let go after prior arrest
Byron Barnett

BARNSTABLE, MASS. (WHDH) - A man involved in a crash on Cape Cod over the weekend that left a new father and Marine veteran dead should not have been released following an earlier drunken driving arrest, Cape and Islands District Attorney Michael O'Keefe said.

Mickey Rivera, 22, was being pursued by police for driving erratically early Saturday morning on Route 28 in Cotuit when, authorities said, his vehicle crashed head-on into an SUV driven by 32-year-old Kevin Quinn, of Mashpee, who was returning home from a hospital after visiting with his wife and newborn daughter.

Rivera and Quinn were both killed in the crash. Jocelyn Goyette, 24, was riding in Rivera's car. She was taken to an area hospital with serious injuries, where she died Monday.

"He was a very loving, fun and awesome husband," Kevin's mother, Janet Quinn, said. "He was so proud to be a dad."

O'Keefe admitted that Rivera's drunken driving case was handled improperly by an inexperienced prosecutor who had only been on the job for a month. The prosecutor, whose name was not released, allowed Rivera to go free on personal recognizance last year. At the time, Rivera had been out on bail on charges related to a fatal shooting in 2015. A more experienced prosecutor would have asked the judge to order Rivera held behind bars, according to O'Keefe.

Quinn's distraught family has promised to keep his legacy alive.

"I just can't tell you how overwhelmed we are with the amount of support from the community," Janet Quinn said.

A GoFundMe has been set up for Quinn's family. The fundraiser raised more than $200,000 in two days. In addition to monetary donations, the family is collecting diapers and other newborn essentials.
{ "redpajama_set_name": "RedPajamaCommonCrawl" }
use net_stream::NetStream; use mio::{ EventLoop, EventLoopSender, EventLoopConfig, Handler, NonBlock, IoWriter, IoReader, IoAcceptor, Buf, MutBuf, Timeout, MioResult}; pub use mio::Token; use mio::net::{SockAddr, Socket}; use mio::net::tcp::{TcpAcceptor, TcpSocket}; use mio::util::Slab; use mio::event; use iobuf::{Iobuf, RWIobuf, AROIobuf, Allocator, AppendBuf}; use std::old_io::net::addrinfo::get_host_addresses; use std::result::Result; use std::sync::Arc; use std::sync::mpsc::{Receiver,SyncSender, sync_channel}; use std::time::Duration; use collections::dlist::DList; use reactive::Subscriber; use protocol::Protocol; /// The basic sendable buffer which also contains /// its own addressing. When the buffer is received, /// Token reflects the socket whence it came, /// when it is sent, Token is the socket out of which /// the buffer should be sent #[derive(Show)] pub struct StreamBuf (pub AROIobuf, pub Token); #[derive(Show)] pub struct ProtoMsg<T> (pub T, pub Token); unsafe impl Send for StreamBuf {} impl Clone for StreamBuf { fn clone(&self) -> StreamBuf { StreamBuf (self.0.clone(), self.1) } } struct ReadBuf (AppendBuf<'static>); pub type TimerCB<'a> = FnMut(&mut Reactor)->bool + 'a; pub type Reactor = EventLoop<Token, StreamBuf>; pub type Sender = EventLoopSender<StreamBuf>; impl Buf for StreamBuf { fn remaining(&self) -> usize { self.0.len() as usize } fn bytes<'a>(&'a self) -> &'a [u8] { unsafe { self.0.as_window_slice() } } fn advance(&mut self, cnt: usize) { self.0.advance(cnt as u32).unwrap(); } } impl Buf for ReadBuf { fn remaining(&self) -> usize { self.0.len() as usize } fn bytes<'b>(&'b self) -> &'b [u8] { self.0.as_window_slice() } fn advance(&mut self, cnt: usize) { self.0.advance(cnt as u32).unwrap(); } } impl MutBuf for ReadBuf { fn mut_bytes<'b>(&'b mut self) -> &'b mut [u8] { self.0.as_mut_window_slice() } } struct Connection<T> where T : Protocol, <T as Protocol>::Output : Send { sock: TcpSocket, outbuf: DList<StreamBuf>, interest: event::Interest, conn_tx: SyncSender<ProtoMsg<<T as Protocol>::Output>>, marker: u32, proto: T, buf: ReadBuf } impl<T> Connection<T> where T : Protocol, <T as Protocol>::Output : Send { pub fn new(s: TcpSocket, tx: SyncSender<ProtoMsg<<T as Protocol>::Output>>, rbuf: ReadBuf) -> Connection<T> { Connection { sock: s, outbuf: DList::new(), interest: event::HUP, conn_tx: tx, marker: 0, proto: <T as Protocol>::new(), buf: rbuf } } fn drain_write_queue_to_socket(&mut self) -> usize { let mut writable = true; while writable && self.outbuf.len() > 0 { let (result, sz) = { let buf = self.outbuf.front_mut().unwrap(); //shouldn't panic because of len() check let sz = buf.0.len(); (self.sock.write(buf), sz as usize) }; match result { Ok(NonBlock::Ready(n)) => { debug!("Wrote {:?} out of {:?} bytes to socket", n, sz); if n == sz { self.outbuf.pop_front(); // we have written the contents of this buffer so lets get rid of it } }, Ok(NonBlock::WouldBlock) => { // this is also very unlikely, we got a writable message, but failed // to write anything at all. 
debug!("Got Writable event for socket, but failed to write any bytes"); writable = false; }, Err(e) => { error!("error writing to socket: {:?}", e); writable = false } } } self.outbuf.len() } fn read(&mut self) -> MioResult<NonBlock<usize>> { self.sock.read(&mut self.buf) } } /// Configuration for the Net Engine /// queue_size: All queues, both inbound and outbound /// read_buf_sz: The size of the read buffer allocatod pub struct NetEngineConfig { queue_size: usize, read_buf_sz: usize, min_read_buf_sz: usize, max_connections: usize, poll_timeout_ms: usize, allocator: Option<Arc<Box<Allocator>>> } pub struct NetEngine<'a, T> where T : Protocol, <T as Protocol>::Output : Send { inner: EngineInner<'a, T>, event_loop: Reactor } impl<'a, T> NetEngine<'a, T> where T : Protocol, <T as Protocol>::Output : Send { /// Construct a new NetEngine with (hopefully) intelligent defaults /// pub fn new() -> NetEngine<'a, T> { let config = NetEngineConfig { queue_size: 524288, read_buf_sz: 1536, min_read_buf_sz: 64, allocator: None, max_connections: 10240, poll_timeout_ms: 100 }; NetEngine::configured(config) } /// Construct a new engine with defaults specified by the user pub fn configured(cfg: NetEngineConfig) -> NetEngine<'a, T> { NetEngine { event_loop: EventLoop::configured(NetEngine::<'a, T>::event_loop_config(cfg.queue_size, cfg.poll_timeout_ms)).unwrap(), inner: EngineInner::new(cfg) } } fn event_loop_config(queue_sz : usize, timeout: usize) -> EventLoopConfig { EventLoopConfig { io_poll_timeout_ms: timeout, notify_capacity: queue_sz, messages_per_tick: 512, timer_tick_ms: 100, timer_wheel_size: 1_024, timer_capacity: 65_536, } } /// connect to the supplied hostname and port /// any data that arrives on the connection will be put into a Buf /// and sent down the supplied Sender channel along with the Token of the connection pub fn connect<'b>(&mut self, hostname: &str, port: usize) -> Result<NetStream<'b, <T as Protocol>::Output>, String> { self.inner.connect(hostname, port, &mut self.event_loop) } /// listen on the supplied ip address and port /// any new connections will be accepted and polled for read events /// all datagrams that arrive will be put into StreamBufs with their /// corresponding token, and added to the default outbound data queue /// this can be called multiple times for different ips/ports pub fn listen<'b>(&mut self, addr: &'b str, port: usize) -> Result<Receiver<ProtoMsg<<T as Protocol>::Output>>, String> { self.inner.listen(addr, port, &mut self.event_loop) } /// fetch the event_loop channel for notifying the event_loop of new outbound data pub fn channel(&self) -> EventLoopSender<StreamBuf> { self.event_loop.channel() } /// Set a timeout to be executed by the event loop after duration /// Minimum expected resolution is the tick duration of the event loop /// poller, but it could be shorted depending on how many events are /// occurring pub fn timeout(&mut self, timeout: Duration, callback: Box<TimerCB<'a>>) { let tok = self.inner.timeouts.insert((callback, None)).map_err(|_|()).unwrap(); let handle = self.event_loop.timeout(tok, timeout).unwrap(); self.inner.timeouts.get_mut(tok).unwrap().1 = Some(handle); } /// process all incoming and outgoing events in a loop pub fn run(mut self) { self.event_loop.run(self.inner).map_err(|_| ()).unwrap(); } /// process all incoming and outgoing events in a loop pub fn run_once(mut self) { self.event_loop.run_once(self.inner).map_err(|_| ()).unwrap(); } /// calculates the 11th digit of pi pub fn shutdown(mut self) { 
self.event_loop.shutdown(); } } struct EngineInner<'a, T> where T : Protocol, <T as Protocol>::Output : Send { listeners: Slab<(TcpAcceptor, SyncSender<ProtoMsg<<T as Protocol>::Output>>)>, timeouts: Slab<(Box<TimerCB<'a>>, Option<Timeout>)>, conns: Slab<Connection<T>>, config: NetEngineConfig, } impl<'a, T> EngineInner<'a, T> where T : Protocol, <T as Protocol>::Output : Send { pub fn new(cfg: NetEngineConfig) -> EngineInner<'a, T> { EngineInner { listeners: Slab::new_starting_at(Token(0), 128), timeouts: Slab::new_starting_at(Token(129), 255), conns: Slab::new_starting_at(Token(256), cfg.max_connections + 256), config: cfg } } pub fn connect<'b>(&mut self, hostname: &str, port: usize, event_loop: &mut Reactor) -> Result<NetStream<'b, <T as Protocol>::Output>, String> { let ip = get_host_addresses(hostname).unwrap()[0]; //TODO manage receiving multiple IPs per hostname, random sample or something match TcpSocket::v4() { Ok(s) => { //s.set_tcp_nodelay(true); TODO: re-add to mio let (tx, rx) = sync_channel(self.config.queue_size); let buf = new_buf(self.config.read_buf_sz, self.config.allocator.clone()); match self.conns.insert(Connection::new(s, tx, buf)) { Ok(tok) => match event_loop.register_opt(&self.conns.get(tok).unwrap().sock, tok, event::READABLE, event::PollOpt::edge()) { Ok(..) => match self.conns.get(tok).unwrap().sock.connect(&SockAddr::InetAddr(ip, port as u16)) { Ok(..) => { debug!("Connected to server for token {:?}", tok); Ok(NetStream::new(tok, rx, event_loop.channel().clone())) } Err(e) => Err(format!("Failed to connect to {:?}:{:?}, error: {:?}", hostname, port, e)) }, Err(e) => Err(format!("Failed to register with the event loop, error: {:?}", e)) }, _ => Err(format!("Failed to insert into connection slab")) } }, Err(e) => Err(format!("Failed to create new socket, error:{:?}", e)) } } pub fn listen<'b>(&mut self, addr: &'b str, port: usize, event_loop: &mut Reactor) -> Result<Receiver< ProtoMsg< <T as Protocol>::Output >>, String> { let ip = get_host_addresses(addr).unwrap()[0]; match TcpSocket::v4() { Ok(s) => { //s.set_tcp_nodelay(true); TODO: re-add to mio match s.bind(&SockAddr::InetAddr(ip, port as u16)) { Ok(l) => match l.listen(255) { Ok(a) => { let (tx, rx) = sync_channel(self.config.queue_size); match self.listeners.insert((a, tx)) { Ok(token) => { event_loop.register_opt(&self.listeners.get_mut(token).unwrap().0, token, event::READABLE, event::PollOpt::edge()). map_err(|e| format!("event registration failed: {:?}", e)). 
map(move |_| rx) }, Err(_) => Err(format!("failed to insert into listener slab")) } }, Err(e) => {Err(format!("Failed to listen to socket {:?}:{:?}, error:{:?}", addr, port, e)) } }, Err(e) => Err(format!("Failed to bind to {:?}:{:?}, error:{:?}", addr, port, e)) }}, Err(e) => Err(format!("Failed to create TCP socket, error:{:?}", e)) } } } impl<'a, T> Handler<Token, StreamBuf> for EngineInner<'a, T> where T : Protocol, <T as Protocol>::Output : Send { fn readable(&mut self, event_loop: &mut Reactor, token: Token, hint: event::ReadHint) { debug!("mio_processor::readable top, token: {:?}", token); let mut close = false; if self.listeners.contains(token) { let calloc = &self.config.allocator; let buf_sz = self.config.read_buf_sz; let (ref mut list, ref tx) = *self.listeners.get_mut(token).unwrap(); match list.accept() { Ok(NonBlock::Ready(sock)) => { let buf = new_buf(buf_sz, calloc.clone()); match self.conns.insert(Connection::new(sock, tx.clone(), buf)) { Ok(tok) => { event_loop.register_opt(&self.conns.get(tok).unwrap().sock, tok, event::READABLE | event::HUP, event::PollOpt::edge()).unwrap(); debug!("readable accepted socket for token {:?}", tok); } Err(..) => error!("Failed to insert into Slab") }; }, e => { error!("Failed to accept socket: {:?}", e);} } event_loop.reregister(list, token, event::READABLE, event::PollOpt::edge()).unwrap(); return; } else { match self.conns.get_mut(token) { None => error!("Got a readable event for token {:?}, but it is not present in MioHandler connections", token), Some(c) => { match c.read() { Ok(NonBlock::Ready(n)) => { debug!("read {:?} bytes", n); let mut abuf = c.buf.0.atomic_slice_pos_from_begin(c.marker, n as i64).unwrap(); loop { match c.proto.append(&abuf) { None => {break}, Some((item, remaining, consumed)) => { c.conn_tx.send( ProtoMsg(item, token)); abuf = remaining; c.marker += consumed; } } } if c.buf.0.len() < self.config.min_read_buf_sz as u32 { let mut newbuf = new_buf(self.config.read_buf_sz, self.config.allocator.clone()); if abuf.len() > 0 { // we didn't eat all of the bytes we just read // so we must move them to the new buffer unsafe { newbuf.0.fill(abuf.as_window_slice()) }; } c.buf = newbuf; c.marker = 0; } } Ok(NonBlock::WouldBlock) => { debug!("Got Readable event for socket, but failed to write any bytes"); }, Err(e) => error!("error reading from socket: {:?}", e) }; if hint.contains(event::HUPHINT) { close = true; } else { c.interest.insert(event::READABLE); event_loop.reregister(&c.sock, token, c.interest, event::PollOpt::edge()).unwrap(); } } } if close { self.conns.remove(token); } } } fn writable(&mut self, event_loop: &mut Reactor, token: Token) { debug!("mio_processor::writable, token: {:?}", token); if let Some(c) = self.conns.get_mut(token) { if c.drain_write_queue_to_socket() > 0 { c.interest.insert(event::WRITABLE); event_loop.reregister(&c.sock, token, c.interest, event::PollOpt::edge()).unwrap(); } } } fn notify(&mut self, event_loop: &mut Reactor, msg: StreamBuf) { let tok = msg.1; match self.conns.get_mut(tok) { Some(c) => { c.outbuf.push_back(msg); if c.drain_write_queue_to_socket() > 0 { c.interest.insert(event::WRITABLE); event_loop.reregister(&c.sock, tok, c.interest, event::PollOpt::edge()).unwrap(); } }, None => {} } } fn timeout(&mut self, event_loop: &mut Reactor, tok: Token) { let (ref mut cb, ref handle) = *self.timeouts.get_mut(tok).unwrap(); if !(*cb).call_mut((event_loop,)) { event_loop.clear_timeout(handle.unwrap()); } } } fn new_buf(sz: usize, calloc: Option<Arc<Box<Allocator>>>) -> ReadBuf { if let 
Some(alloc) = calloc { ReadBuf(AppendBuf::new_with_allocator(sz, alloc)) } else { ReadBuf(AppendBuf::new(sz)) } }
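// ----------------------------------------------------------------------------
// Minimal usage sketch (added for illustration; not part of the original
// module). It only uses items defined above (`NetEngine::new`, `listen`,
// `channel`, `run`) and assumes a caller-supplied protocol type `T`
// implementing this crate's `Protocol` trait with a `Send`able output.
// Error handling is reduced to `unwrap` for brevity, and the sketch targets
// the same pre-1.0 mio-era toolchain as the rest of the file.
fn serve_sketch<T>(addr: &str, port: usize)
    where T : Protocol, <T as Protocol>::Output : Send
{
    // Build an engine with the default configuration.
    let mut engine: NetEngine<'static, T> = NetEngine::new();

    // Start accepting connections on addr:port; decoded messages arrive on
    // the returned Receiver as ProtoMsg(output, token) values.
    // (In real use this Receiver would be handed to a worker thread before `run`.)
    let _incoming = engine.listen(addr, port).unwrap();

    // A Sender handle for queueing outbound StreamBufs to any connection token.
    let _outgoing = engine.channel();

    // Drive all I/O; this call blocks and consumes the engine.
    engine.run();
}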
{ "redpajama_set_name": "RedPajamaGithub" }
Q: Fantasy book involving a post-modern civilization and a young man on a quest with two large greyhound-type dogs

People live in barricaded villages because wild dogs are a problem. The government 'taxes' men to dig in ruins, trying to find a particular giant computer. After this young man starts his quest, two large greyhound-type dogs join him. He names them Followree & Followro, male & female. At some point they stand up on their hind legs and prophesy. He tells a myth 'explaining' the civil collapse: "That little man Atom was broken in two, his head separated from his body", and there were consequences from that. He came across a black Mandelbrot Buddha-type figure that he understood was a powerful Female force or symbol. I do not remember the ending, but I was interested in the author, who did not write a lot but had another book with (?) Gurdjieff-type cosmic levels of consciousness (I think). It was probably published in the 1980s or earlier.

A: Sounds like you're thinking of Riddley Walker (1980) by Russell Hoban. From Goodreads:

In the far distant future, the country laid waste by nuclear holocaust, twelve-year-old Riddley Walker tells his story in a language as fractured as the world in which he lives. As Riddley steps outside the confines of his small world, he finds himself caught up in intrigue and a frantic quest for power, desperately trying to make sense of things.

As detailed in the synopsis above, the story is set in a post-apocalyptic future and features a twelve-year-old male protagonist, named Riddley, on a quest to make sense of things. Riddley speaks in broken English, and according to this quote provided within a Goodreads user review, he meets a pair of dogs, named Folleree & Folleroo, that stand up on their hind legs and talk like men.

"7. Thay dogs stud up on thear hyn legs & taukin lyk men. Folleree sed, Lukin for the 1 you wil aul ways fyn thay 2. Folleroo sed, They 2 is twice as bad as the 1."
{ "redpajama_set_name": "RedPajamaStackExchange" }
{"url":"https:\/\/physics.stackexchange.com\/questions\/203248\/tensors-indices-and-matrix-notation-is-there-a-common-convention","text":"# Tensors, indices and matrix notation - is there a common convention?\n\nFor a tensor named T with two indices, there are four possibilities: $T_{ij}$ , $T_i^{\\ j}$, $T^i{\\ _j}$ and $T^{ij}$. Is there a common convention as to how these tensors would be represented as matrices, i.e. where the entries would go? Is it the left-right order of the indices that determines which matrix entry is meant, or some other convention? What if the the order of the indices in a mixed tensor is not indicated at all (as in $T_i^j$)? Is it true that, for instance, the component with i=2 and j=3 would go on the second row and the third column in all of the above cases? The books will just say \"$F_{\u03bc\u03bd}$ = [some matrix]\", and you don't know which is which.\n\nBelow is an example that is in itself contradictory. To convey the idea that F is antisymmetric, they use two different conventions in the very same line - here it is the order of the Greek subscripts that determines the order.\n\n$$F_{\\mu \\nu} = \\left( \\begin{array}{cccc} 0 & -E_1 & -E_2 & -E_3 \\\\ E_1 & 0 & B_3 & -B_2 \\\\ E_2 & -B_3 & 0 & B_1 \\\\ E_3 & B_2 & -B_1 & 0 \\end{array} \\right) = -F_{\\nu \\mu}$$\n\n\u2022 This might be better for math.stackexchange.com Aug 29, 2015 at 17:23\n\u2022 I thought that indices on top vs indices on bottom had something to do with co- or contravariance, but for what I do (fluid and solid mechanics), we only ever dealt with indices on the bottom so I don't know for sure. Aug 29, 2015 at 17:48\n\u2022 @AcidJazz This problem only ever comes up in physics, mathematicians tend to be more pedantic about clarity, thankfully. Aug 29, 2015 at 17:52\n\u2022 Well, in that case, and hopefully the author's original setup is maintained throughout the book then, on a practical level, I take the first example he\/she gives, identify the element order and trust that the author follows through. But between typos, editors (or lack of them) and the author's inconsistencies, you can be left stranded many times:) Been there.....\n\u2013\u00a0user81619\nAug 29, 2015 at 18:01\n\u2022 This is a good question that too many authors neglect to clarify. For my own part I always intend for the first index to index rows, and I never stack indices vertically except maybe on $\\delta$, but I can't say for sure this is universal.\n\u2013\u00a0user10851\nAug 29, 2015 at 20:51\n\nIn my experience, reading the indices left to right and top to bottom, the first index is the row and the second is the column.\n\nYour screenshot from Carroll doesn't have to be contradictory (although it's definitely confusing\/doesn't make rigorous sense). You can just imagine he omits a little \"$_{\\mu \\nu}$\" on the matrix:\n\n$$F_{\\mu \\nu}=\\Bigg( \\cdots \\Bigg)_{\\mu \\nu}=-F_{\\nu \\mu}$$\n\nnow it's a true real number equation.\n\nYour example is an outlier, in my experience (personally, I would have written $(F_{\\mu\\nu})^T$ instead of $F_{\\nu\\mu}$). Almost always, it's the order of the indices that determines row vs. column. If someone writes $T^i_j$, then while technically there's no way to tell, I would say that it would be far less confusing to make the upper index label the rows and the lower index label the columns. 
This is because a mixed tensor can be regarded as a linear transformation between vectors:\n\n$$v^i = T^i_j v^j$$\n\nIf we want to express this linear transformation as a multiplication of a matrix by a vector, then $j$ should label the columns, since that's the index that is being contracted with the vector $v^j$.\n\nThe bottom line, however, is that 99% of the time there will be a left\/row index and a right\/column index. Representing a tensor as a matrix with any other convention is confusing and should not be done, unless the author has a very strong reason to do so.\n\n\u2022 Good points, thanks. I would still prefer $(F^T)_{\u03bc\u03bd}$ to your $(F_{\u03bc\u03bd})^T$, because the transpose of a matrix makes sense, whereas putting the T for transposing outside the brackets makes you think it's all the entries being \"transposed\" independently. Aug 29, 2015 at 21:44","date":"2022-06-25 20:06:37","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 1, \"mathjax_display_tex\": 1, \"mathjax_asciimath\": 0, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.7618920207023621, \"perplexity\": 366.9215823104102}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2022-27\/segments\/1656103036099.6\/warc\/CC-MAIN-20220625190306-20220625220306-00269.warc.gz\"}"}
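As a concrete illustration of the row/column convention recommended above (an editorial addition, not part of the original exchange), take the upper index to label rows:

$$T^i{}_j = \begin{pmatrix} 1 & 2 \\ 3 & 4 \end{pmatrix}, \qquad v^j = \begin{pmatrix} 5 \\ 6 \end{pmatrix}, \qquad T^i{}_j v^j = \begin{pmatrix} 1 \cdot 5 + 2 \cdot 6 \\ 3 \cdot 5 + 4 \cdot 6 \end{pmatrix} = \begin{pmatrix} 17 \\ 39 \end{pmatrix}.$$

Here the component $T^1{}_2 = 2$ sits in row 1, column 2, and the contracted index $j$ runs across columns, exactly as ordinary matrix-vector multiplication requires.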
\section{Introduction} Several algorithms for factoring integers $n$ (including Dixon's random squares algorithm \cite{dixon}, the quadratic sieve \cite{pom0}, the multiple polynomial quadratic sieve \cite{silverman}, and the number field sieve \cite{buhler} -- see \cite{pom3} for a nice expository article on factoring algorithms) work by generating a pseudorandom sequence of integers $a_1,a_2,...$, with each $$ a_i\ \equiv\ b_i^2 \pmod{n}, $$ until some subsequence of the $a_i$'s has product equal to a square. Say we have such a subsequence $$ a_{i_1}, ..., a_{i_k},\ {\rm where}\ Y^2\ =\ a_{i_1}\cdots a_{i_k}, $$ and set $$ X^2\ =\ (b_{i_1}\cdots b_{i_k})^2. $$ Then $$ n\ |\ Y^2 - X^2\ =\ (Y-X)(Y+X), $$ and there is a fair chance that ${\rm gcd}(n, Y - X)$ is a non-trivial factor of $n$. If so, we have factored $n$. In his lecture at the 1994 International Congress of Mathematicians, Pomerance \cite{pom1,pom2} observed that in the (heuristic) analysis of such factoring algorithms one assumes that the pseudo-random sequence $a_1,a_2,...$ is close enough to random that we can make predictions based on this assumption. Hence it makes sense to formulate this question in its own right, in particular to determine whether this part of the factoring algorithm can be significantly sped up. \bigskip \noindent {\bf Pomerance's Problem.} Select positive integers $a_1,a_2,\dots \leq x$ independently at random (that is, $a_j=m$ with probability $1/x$ for each integer $m,\ 1\leq m\leq x$), until some subsequence of the $a_i$'s has product equal to a square. When this occurs, we say that the sequence has a {\it square dependence}. What is the expected stopping time of this process ? \medskip To discuss the history of this problem, and our own work, we need to introduce some notation:\ Let $\pi(y)$ denote the number of primes up to $y$. Call $n$ a $y$-{\sl smooth integer} if all of its prime factors are $\leq y$, and let $\Psi(x,y)$ denote the number of $y$-smooth integers up to $x$. Let $y_0=y_0(x)$ be a value of $y$ which maximizes $\Psi(x,y)/{y}$, and let \begin{eqnarray} \label{J0_def} J_0(x)\ :=\ \frac{ \pi(y_0)}{\Psi(x,y_0)} \cdot x. \end{eqnarray} In Pomerance's problem, let $T$ be the smallest integer $t$ for which $a_1,...,a_t$ has a square dependence (note that $T$ is itself a random variable). In 1985, Schroeppel gave a simple argument to justify that for any $\epsilon>0$ we have $$ {\rm Prob}( T\ <\ (1+\epsilon)J_0(x))\ =\ 1-o(1) $$ as $x\to \infty$, and in 1994 Pomerance showed that $$ {\rm Prob}(T\ >\ J_0(x)^{1-\epsilon})\ =\ 1-o(1). $$ as $x\to \infty$. Therefore there is a transition from ``unlikely to have a square product'' to ``almost certain to have a square product'' at $T=J_0(x)^{1+o(1)}$. Pomerance asked in [3] whether there is a sharper transition, and we conjecture that $T$ has a {\it sharp threshold}: This would mean that there exists a function $f(x)$ such that for every $\epsilon > 0$, \begin{equation} \label{Tf} {\rm Prob}(T \in [ (1-\epsilon)f(x),\ (1+\epsilon)f(x)])\ =\ 1 - o(1) \end{equation} as $x\to \infty$. In fact we believe that this threshold is $f(x)=e^{-\gamma} J_0(x)$: \bigskip \begin{conjecture}\label{conjecture} For every $\epsilon > 0$ we have \begin{equation} \label{Tg} {\rm Prob}(T \in [ (e^{-\gamma}-\epsilon) J_0(x),\ (e^{-\gamma}+\epsilon) J_0(x)])\ =\ 1 - o(1), \end{equation} as $x\to \infty$, where $\gamma = 0.577...$ is the Euler-Mascheroni constant. \end{conjecture} The constant $e^{-\gamma}$ in this conjecture is well-known to number theorists. 
It appears as the ratio of the proportion of integers free of prime divisors smaller than $y$, to the proportion of integers up to $y$ that are prime. However this is not how it appears in our discussion, and we have failed to find a more direct route to this prediction. \bigskip The bulk of this article will be devoted to establishing the upper bound in the above conjecture. We will prove something a little weaker than the conjectured lower bound: \begin{theorem}\label{main_theorem} We have $$ {\rm Prob}(T\ \in\ [ (\pi/4)(e^{-\gamma} - \epsilon) J_0(x),\ (e^{-\gamma} + \epsilon) J_0(x)])\ =\ 1 - o(1), $$ for any $\epsilon > 0$ as $x\to\infty$. \end{theorem} To obtain the lower bound in our theorem, we obtain a good upper bound on the expected number of sub-products of the large prime factors of the $a_i$'s that equal a square, which allows us to bound the probability that such a sub-product exists, for $T<(\pi/4)(e^{-\gamma} - o(1)) J_0(x)$. This is the ``first moment method''. Schroeppel established his upper bound, $T\leq (1+o(1))J_0(x)$, by showing that by then one expects more than $\pi(y_0)$ $y_0$-smooth integers amongst $a_1,a_2,\dots ,a_T$, which guarantees that the sequence has a square dependence. (To see this, create a matrix over $\mathbb F_2$ whose columns are indexed by the primes up to $y_0$, whose rows are indexed by the numbers $i$ such that $a_i$ is $y_0$-smooth, and whose $(i,p)$th entry is given by the exponent on $p$ in the factorization of $a_i$, for each $y_0$-smooth $a_i$. Then a square dependence amongst the $a_i$ is equivalent to a dependence amongst the corresponding rows of our matrix, so that we are guaranteed a square dependence once the matrix has more than $\pi(y_0)$ rows.) If we replace the complicated random model which creates this matrix by one in which any given row appears as a row of this matrix with equal probability then one expects a linear dependence only once the matrix has more than $\pi(y_0)-O(1)$ rows (see section 3.1 of \cite{CGPT} for details; also see \cite{Calkin} for a lower bound in a related model of choosing binary vectors of fixed weight randomly, until finding a $GF(2)$-dependent set). Schroeppel's approach is not only good for theoretical analysis, in practice one searches among the $a_i$ for $y_0$-smooth integers and hunts amongst these for a square dependence, using linear algebra in $\mathbb F_2$ on the primes' exponents. Computing specialists have also found that it is easy and profitable to keep track of $a_i$ of the form $s_i q_i$, where $s_i$ is $y_0$-smooth and $q_i$ is a prime exceeding $y_0$; if both $a_i$ and $a_j$ have exactly the same large prime factor $q_i=q_j$ then their product is a $y_0$-smooth integer times a square, and so can be used in our matrix as an extra smooth number. This is called the {\sl large prime variation}, and the upper bound in Theorem 1 of \cite{CGPT} is obtained by computing the limit of this method (to obtain a constant, in place of $e^{-\gamma}$ which is a tiny bit smaller than $3/4$). One can also consider the {\sl double large prime variation} in which one allows two largish prime factors so that, for example, the product of three $a_i$s of the form $pqs_1, prs_2, qrs_3$ can be used as an extra smooth number. 
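To make the double large prime variation concrete, here is a toy numerical instance (ours, purely illustrative): with smooth parts $s_1=6$, $s_2=10$, $s_3=15$ and large primes $p=101$, $q=103$, $r=107$, the three numbers $pqs_1$, $prs_2$, $qrs_3$ have product
$$
(pqs_1)(prs_2)(qrs_3)\ =\ (pqr)^2\, s_1s_2s_3\ =\ (pqr)^2\cdot 900\ =\ (101\cdot 103\cdot 107\cdot 30)^2 ,
$$
a perfect square; equivalently, the exponent vectors of these three numbers on the primes $2,3,5,p,q,r$ sum to zero in $\mathbb F_2$.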
Experience has shown that each of these variations has allowed a small speed up of various factoring algorithms (though at the cost of some non-trivial extra programming), and a long open question has been to formulate all of the possibilities for multi-large prime variations and to analyze how they affect the running time. Sorting out this combinatorial maze has been the most difficult part of our work. When our process terminates (at time $T$) we have some subset $I$ of $a_1,...,a_T$, including $a_T$, whose product equals a square.\footnote{Note that $I$ is unique, else if we have two such subsets $I$ and $J$ then $(I\cup J)\setminus (I\cap J)$ is also a set whose product equals a square, but does not contain $a_T$, and so the process would have stopped earlier than at time $T$.} It is not hard to show that this square product is $T^2$-smooth (see Section~3.2 of~\cite{CGPT}); here we give a more precise idea of what $I$ looks like: \begin{theorem} \label{Theorem_2} \ a)\ In the special case that for $\epsilon > 0$, conditional on the event $\{T < (\pi/4)(e^{-\gamma} - \epsilon) J_0(x)\}$, we find that $I$ consists of a single number $a_i$ (which is therefore a square) with probability $1-o(1)$. \medskip b) \ In general, with probability $1-o(1)$, we have that \begin{equation} \label{1.1} y_0 \exp( -(c_3+\epsilon) \sqrt{\log y_0})\ \leq\ |I|\ \leq\ y_0 \exp( (c_3+\epsilon) \sqrt{\log y_0})], \end{equation} where $c_3=\sqrt{2-\log 2}$. In other words, when the algorithm terminates the square product $I$ is, almost certainly, composed of $y_0^{1+o(1)}=J_0(x)^{1/2+o(1)}$ numbers $a_i$. \medskip c) \ Also, with probability $1-o(1)$ all the elements of $I$ are $$ y_0^2 \exp( (2+\epsilon) \sqrt{\log y_0\log\log y_0}) {\rm -smooth}. $$ \end{theorem} The last part of this result confirms the long held suspicion that the earliest occurring square products are almost always composed only of smooth numbers with a suitable smoothness parameter, though the smoothness bound that we give may be significantly larger than is possible, for all we know. We expect that one can give more precise descriptions of $I$, specifying more precisely how large $I$ is, and improving the smoothness bound on the elements of $I$, perhaps even to $y_0\phi(x)$ for any function $\phi$ for which $\phi(x)\to\infty$ as $x\to\infty$. \bigskip There are now several theorems along the lines of Conjecture 1 in the literature, including some quite general approaches. Friedgut's theorem \cite{friedgut}, characterizing a {\em coarse threshold} for monotone or symmetric\footnote{That is, invariant under permutations of the elements involved.} graph properties, has been instrumental in proving the existence of a sharp threshold for several graph properties. However it does not seem to be applicable in the present context, since the square dependence problem is not symmetric. Bourgain's strengthening of sorts of Friedgut's theorem (see the appendix to \cite{friedgut}) is in principle applicable in the present context, though various researchers have not yet succeeded in doing so. \bigskip Pomerance's main goal in enunciating the random squares problem was to provide a model that would prove useful in analyzing the running time of factoring algorithms, such as the quadratic sieve. In \cite{CGPT} we analyzed the running time of Pomerance's random squares problem to show that the running time will be inevitably dominated by finding the actual square product once we have enough integers. 
Indeed this carries over to an analysis of the quadratic sieve factoring algorithm (and presumably the other factoring algorithms as well); a consequence is that to optimize the running time of the quadratic sieve we look for a square dependence among the $y$-smooth integers with $y$ significantly smaller than $y_0$, so that Pomerance's problem is not quite so germane to the question as it had at first appeared. Anyway, see \cite{CGPT} for further discussion of these issues. \bigskip The paper is organized as follows. In section 2, we derive the necessary technical lemmas involving smooth numbers. In section 3, we derive the lower bound for $T$ given in Theorem~\ref{main_theorem}, and develop these ideas to prove Theorem~\ref{Theorem_2}. Finally, in section 4, we develop our analysis of multiprime variations. \section{Smooth numbers} In previous analyses of these questions, authors have typically used estimates for $\Psi(x,y)$ for $y$ a fixed power of $y_0$. In this range one can determine an asymptotic for $\Psi(x,y)$ in terms of a saddle point, an implicit quantity. It has proved to be difficult to deduce an asymptotic for $\Psi(x,y)$, or even something close, in terms of simple explicit functions. One of the key innovations in this article is to by-pass this issue by comparing values of $\Psi(x,y)$ for different, but closely related, values of $x$ and $y$: Since the saddle points are not too different one can obtain sharp explicit estimates for the ratio of two such $\Psi$-values. In this technical section we deduce several such results, primarily from the deep work of Hildebrand and Tenenbaum \cite{hild}, which will come in useful later. \subsection{Classical smooth number estimates} From \cite{hild} we have that the estimate \begin{equation} \label{2.1} \Psi(x,y) = x\rho(u) \left\{ 1 + O\left( \frac{\log (u+1)}{\log y} \right) \right\} \quad {\rm\ as\ }\quad x \rightarrow \infty \quad {\rm where}\quad x=y^u, \end{equation} holds in the range \begin{equation} \label{2.2} \exp \left ( (\log\log x)^2 \right )\ \leq\ y\ \leq\ x, \end{equation} where $\rho(u)=1$ for $0 \le u \le 1$, and where $$ \rho(u) = \frac{1}{u} \int_{u-1}^{u} \rho(t)\, dt \ \ {\rm for\ all}\ u>1. $$ This function $\rho(u)$ satisfies $$ \rho(u)\ =\ \exp ( -(u + o(u)) \log u ); $$ and so \begin{equation} \label{psi_estimate} \Psi(x,y)\ =\ x \exp(-(u+o(u))\log u). \end{equation} Now let $$ L:=L(x) = \exp \left( \sqrt{ \frac 12 \log x \log\log x} \right). $$ Then, using (\ref{psi_estimate}) we deduce that for $\beta > 0$, \begin{equation} \label{psi_estimate_2} \Psi(x, L(x)^{\beta+o(1)})\ =\ x L(x)^{-1/\beta + o(1)}. \end{equation} From this one can easily deduce that \begin{equation} \label{J0y0} y_0(x)=L(x)^{1+o(1)},\ {\rm and\ } J_0(x)=y_0^{2-\{1+o(1)\}/\log\log y_0}=L(x)^{2+o(1)}, \end{equation} where $y_0$ and $J_0$ are as in the introduction (see (\ref{J0_def})). From this we can deduce the following basic estimate, which we will use in later proofs: \begin{lemma} \label{first_lemma} Fix constant $\beta > 0$. If $y = y_0^{\beta + o(1)}$ then $$\\ {\Psi(x,y)/y \over \Psi(x,y_0)/y_0}\ =\ y_0^{2 - \beta - \beta^{-1} + o(1)}. $$ \end{lemma} \subsection{Hildebrand-Tenenbaum saddle point method estimates} \noindent For any $\alpha>0$, one has \begin{equation} \label{2.3} \Psi(x,y)\leq \sum_{n\leq x\atop P(n)\leq y} (x/n)^\alpha \leq x^{\alpha}\xi(\alpha,y), \end{equation} where $$ \xi(s,y)\ =\ \prod_{p\le y}\Bigl(1-\frac{1}{p^s}\Bigr)^{-1}. 
$$ Define $\alpha=\alpha(x,y)$ to be the solution to \begin{equation}\label{alpha_def} \log x = \sum_{p\leq y} \frac{\log p}{p^\alpha-1} . \end{equation} By \cite[Theorem 1 and (7.19)]{hild} we obtain in the range (\ref{2.2}) with $u\to \infty$, \begin{equation} \label{2.4} \Psi(x,y)\sim \frac{x^{\alpha}\xi(\alpha,y)}{\alpha\sqrt{2\pi\log x\log y}}. \end{equation} Let $\xi=\xi(u)$ be the solution to $e^\xi=u\xi+1$ so that \begin{equation} \label{2.4b} \xi(u)\ =\ \log(u\log u) + {(1+o(1))\log\log u \over \log u},\ {\rm as\ }u \to \infty. \end{equation} Note also that $\xi'(u)\sim 1/u$. In the range (\ref{2.2}) it turns out that \begin{equation} \label{2.5} (1-\alpha(x,y)) \log y = \xi(u) +O(1/u) \end{equation} which implies that \begin{equation} \label{2.6} y^{1-\alpha} = e^{\xi(u)} (1+O(1/u)) = u\xi(u)(1+O(1/u)). \end{equation} So, for $$ y\ =\ L(x)^{\beta+o(1)}\ =\ y_0^{\beta + o(1)} $$ we have \begin{equation} \label{y_alpha} y^{1-\alpha}\ \sim\ \beta^{-2} \log y\ \sim\ \beta^{-1} \log y_0. \end{equation} \noindent By \cite[Theorem 3]{hild} and (\ref{2.5}) above, we have \begin{equation} \label{2.7} \Psi\left( \frac xd,y\right) = \frac 1{d^{\alpha(x,y)}} \Psi(x,y) \left\{1+O\left(\frac{1}{u} +\frac{\log y}y\right)\right\},\ {\rm when\ } 1\ \leq\ d\ \leq\ y\ \leq\ {x \over d}. \end{equation} \bigskip \begin{proposition}\label{Proposition_2.1} Throughout the range (\ref{2.2}), for any $1 \leq d \leq x$, we have $$ \Psi\left( \frac xd,y\right) \leq \frac 1{d^{\alpha(x,y)}} \Psi(x,y) \{ 1+o(1)\} , $$ where $\alpha$ is the solution to (\ref{alpha_def}). In fact, $$ \Psi \left ( {x \over d},y\right )\ <\ {\Psi(x,y) \over d^{\alpha(x.y)}}, $$ provided that\\ $$ \frac{\log d}{\log u\log y + \sqrt{ u \log u\log y}} \to \infty \, . $$ \end{proposition} \bigskip \noindent {\bf Proof}. By (\ref{2.1}), for $d=y^r$ with $0 \leq r \leq u/2$, we have $$ {\Psi\left( \frac xd,y\right)d^{\alpha} \over \Psi(x,y)}\ =\ {d^{-(1-\alpha)} \rho(u-r) \over \rho(u)} \ \left(1+O\left({\log(u+1)\over \log y}\right)\right). $$ The logarithm of the main term on the right side is $$ -(1-\alpha)r \log y + \log (\rho(u-r)/\rho(u)). $$ Using the fact that $u = (\log x)/(\log y)$, this can be rewritten as $$ r(\xi(u) - (1-\alpha)\log y) + \left( -\int_{u-r}^u \frac{\rho'(v)}{\rho(v)}dv -r\xi(u)\right) . $$ The first term is $O(r/u)$ by (\ref{2.5}). Corollary 8.3 of \cite{tenen} gives that \begin{equation} \label{2.8} -\rho'(v)/\rho(v) = \xi(v) (1+O(1/v))\,, \end{equation} so that the second term equals $$ -\int_0^r (\xi(u)-\xi(u-t)) dt +O(r\log u/u). $$ Now, differentiating $e^\xi=u\xi+1$ we obtain $$ \xi+u\xi'\ =\ \xi' e^\xi\ =\ \xi' (u\xi+1), $$ so that $$ \xi'\ =\ {1 \over u - (u-1)\xi^{-1}}\ =\ {1 \over u(1 + O(1/\log u))} =\ {1 \over u} \left(1+O\left(\frac 1{\log u}\right)\right) . $$ Therefore \begin{eqnarray} \label{xi_integral} \int_0^r (\xi(u)-\xi(u-t)) dt &=& \int_0^r (r-v)\xi'(u-v) dv = \left(1+O\left(\frac 1{\log u}\right)\right)\int_0^r \frac{(r-v)}{(u-v)} dv \nonumber \\ &=& \left(1+O\left(\frac 1{\log u}\right)\right) (r-(r-u)\log (1-r/u))\,. 
\end{eqnarray} \noindent Combining this with the above yields that \begin{eqnarray} \log\left( {\Psi\left( \frac xd,y\right)d^{\alpha} \over \Psi(x,y)} \right)\ &=&\ - \left ( 1 + O \left ( {1 \over \log u} \right ) \right ) (r - (r-u) \log (1 - r/u))\nonumber \\ &&\hskip1.5in +\ O \left ( {r \log u \over u} +\frac{\log (u+1)}{\log y} \right) \nonumber \\ &=&\ -\frac{r^2}{2u} \left\{ 1 + O\left( {r \over u} + \frac 1{\log u} +\frac {\log u}r \right) \right\} + O\left( \frac{\log (u+1)}{\log y} \right). \nonumber \end{eqnarray} From (\ref{xi_integral}) and the first equation here we find that this is negative provided $r \leq u/2$ and $(\log u + \sqrt{u \log u / \log y}) / r \to 0$, and is $o(1)$ in the complementary range. If $d>\sqrt{x}$ we simply iterate the above result: The proposition follows by noting that $\alpha(x,y)$ is a decreasing function in $x$ for fixed $y$, by definition. \hfill $\Box$ \bigskip We will require the following lemma, which is in one sense stronger, and in another sense weaker, than Lemma \ref{first_lemma}. \begin{lemma} \label{Lemma_2.2} We have $$ {\Psi(x,y) \over y}\ = o\left( {\Psi(x,y_0) \over y_0(\log y_0)^{1+\epsilon/4} } \right) $$ for all $y$ outside of the range \begin{equation} y_0 \exp( -(1+\epsilon) \sqrt{\log y_0\log\log y_0})\ \leq\ y\ \leq\ y_0 \exp( (1+\epsilon) \sqrt{\log y_0\log\log y_0}); \end{equation} and $$ {\Psi(x,y) \over y}\ \leq\ {(2/e^2-\epsilon) \Psi(x,y_0) \over y_0} $$ for all $y$ outside of the range \begin{equation} y_0 \exp( -(c_3+\epsilon) \sqrt{\log y_0})\ \leq\ y\ \leq\ y_0 \exp( (c_3+\epsilon) \sqrt{\log y_0}). \end{equation} \end{lemma} \noindent {\bf Proof}. Let $x=y_0^{u_0}$. Define $g(u)=g_x(u)=\log \rho(u) - u^{-1} \log x$. By (\ref{2.1}) we have $\log (\Psi(x,y)/xy) = g(u) +O(1/u)$, provided $\log y\asymp \log L$. Select $u_1$ to maximize $g(u)$. Therefore $g(u_1)\geq g(u_0)$ by definition of $u_1$; and $g(u_0)\geq g(u_1)+O(1/u_0)$ by the definition of $u_0$ and the above estimate; therefore $g(u_0)=g(u_1)+O(1/u_0)$. \ \noindent By (\ref{2.8}), we have $g'(v) = \rho'(v)/\rho(v) +v^{-2}\log x=-\xi(v)+v^{-2}\log x +O(\log v/v)$; so that, for $t=O(u_1/\log u_1)$, \begin{eqnarray} g'(u_1+t)&=& g'(u_1+t)-g'(u_1)\nonumber \\ &=& \xi(u_1)-\xi(u_1+t)+\left(\frac1{(u_1+t)^2}-\frac1{u_1^2}\right)\log x +O\left( \frac{\log u_1}{u_1}\right)\nonumber \\ &=& O\left( \frac{t+\log u_1}{u_1}\right) -2tu_1^{-3}\log x (1+O(t/u_1)) \nonumber \\ &=& -2t\frac{\xi(u_1)}{u_1} + O\left( \frac{t+\log u_1}{u_1}\right), \nonumber \end{eqnarray} since $0=g'(u_1)= -\xi(u_1) +u_1^{-2}\log x+O(\log u_1/u_1)$. Therefore \begin{equation} \label{eq:gu1} g(u_1)-g(u_1+T) = - \int_0^T g'(u_1+t) dt = \frac{T^2}{u_1} (\xi(u_1)+O(1)) +O\left( \frac{T\log u_1}{u_1}\right) , \end{equation} for $T=O(u_1/\log u_1)$. We deduce that $u_0=u_1+O(1)$, as well as both $$ g(u)<g(u_0)-(1+\epsilon/3) \log u_0 \mbox{ for \ } |u-u_0|>(1+\epsilon/2) \sqrt{u_0}\,, $$ and $$ g(u)<g(u_0)-\log(e^2/2+\epsilon) \mbox{ for \ } |u-u_0|>(c_3+\epsilon) \sqrt{u_0/\log u_0}\, , $$ which are the desired results. \hfill $\Box$ \bigskip Next we obtain a more accurate estimate for $y_0$ than (\ref{J0y0}): \begin{lemma} \label{Lemma_2.3} We have \begin{eqnarray*} \log y_0 & = & \log L(x) \left( 1 + \frac{\log_3x-\log 2}{2\log_2x} + O\left( \left( \frac{\log_3x}{\log_2x}\right)^2 \right)\right) \;\; \mbox{ and } \\[1ex] \frac{u_0 \xi(u_0)}{\log y_0} & = & 1+ O\left( \frac 1{u_0}\right) \, . \end{eqnarray*} \end{lemma} \noindent {\bf Proof}. 
In the notation of the Lemma \ref{Lemma_2.2} we see by (\ref{eq:gu1}) that $|g(u_1+T)| = o(1/u_1)$ as $T \to \infty$, so that $u_0=u_1+O(1)$. We saw that $u_1^2 \xi(u_1) (1+O(1/u_1)) = \log x$, so the same equation is satisfied by $u_0$ (in place of $u_1$), and the estimate for $\log y_0=(1/u_0)\log x$ follows from (\ref{2.4b}). Moreover $u_0 \xi(u_0)= \log y_0 (1+O(1/u_0))$ \hfill $\Box$ \bigskip \begin{corollary}\label{Corollary_2.4} If $d=p_1p_2\ldots p_k$, where each $p_j$ is a prime in $(y,My]$ we have \begin{equation} \label{eq:unif} \frac{\psi (x / (p_1 \cdots p_k) \, , \, y_0)}{\psi (x , y_0)} \sim \frac{(\log y_0)^k}{p_1 \cdots p_k} \end{equation} uniformly in $k \geq 1$ and $\log M=o((\log x/\log\log x)^{1/4})$, as $x \to \infty$. Also \begin{equation} \label{eq:unif unif} \frac{\psi (x / (p_1 \cdots p_k) \, , \, y_0)}{\psi (x , y_0)} \leq 2^k \frac{(\log y_0)^k}{p_1 \cdots p_k} \end{equation} uniformly for $k \geq 1$ and $\log M = o(\log x/\log\log x)^{1/2}$, as $x \to \infty$. \end{corollary} \noindent {\bf Proof}. We use (\ref{2.7}) at most $2k$ times to obtain $$ \frac{\psi (x / (p_1 \cdots p_k) \, , \, y_0)}{\psi (x , y_0)} = \frac 1{(p_1 \cdots p_k)^{\alpha}} \left\{1+O\left(\frac{k}{u_0} +\frac{k\log y_0}{y_0}\right)\right\} \sim \frac {(p_1 \cdots p_k)^{1-\beta}} {p_1 \cdots p_k} $$ where $\alpha(x,y_0)\geq \beta\geq \alpha(x/(p_1 \cdots p_k),y_0)$. If $u'=\log(x/(p_1 \cdots p_k)/\log(y_0)$ then $u'=u+O(k)$ and so $y_0^{1-\beta}=u_0\xi(u_0)\{ 1+O(k/u_0)\}=\log y_0\{ 1+O(k/u_0)\}$, by (\ref{2.6}) and then Lemma \ref{Lemma_2.3}. Hence we obtain (\ref{eq:unif}) as $k^2=o(u_0)$ and, in our range, $$ M^{k(1-\alpha)}=\exp( O(k \log M (\log\log y_0)/( \log y_0)))=1+o(1). $$ To obtain (\ref{eq:unif unif}) we can use the same estimates but now we simply need $k / u_0 \to 0$ so that $y_0^{1-\beta}\leq (4/3) \log y_0$, and $\log M / u_0$ so that $M^{1-\beta}\leq (4/3)$. \hfill $\Box$ \subsection{Straightforward analytic estimates} We complete this section by collecting together various straightforward analytic estimates that will be needed later. Fix $0 < a < b$. By the prime number theorem, we have \begin{equation} \label{eq:PNT 1} \sum_{ay < q \leq by } \frac{\log y}{q} \sim \log \left ( \frac{b}{a} \right ) \, . \end{equation} where the sum is over primes $q$, and also that \begin{equation} \label{eq:PNT 2} \sum_{ay < q \leq by} \frac{\log y}{q} \leq 2 \log \left ( \frac{b}{a} \right ) \, , \end{equation} for all $1 \leq a \leq b/2$, once $y$ is sufficiently large. To see this note that, since \\ $\sum_{q\leq Q} (\log q)/q=\log Q+C+o(1)$, for some constant $C$, the sum is $$ \leq \sum_{ay < q \leq by} \frac{\log q}{q} =\log \left ( \frac{b}{a} \right ) +o_{y\to \infty}(1), $$ and the result follows. \medskip \begin{lemma} \label{Lemma_integral1} Let \begin{equation} \label{eq:def g} g(\beta,C)\ :=\ \beta^{-2} \int_0^{C/\beta^2} \log\left( \frac{e^{z} + e^{-z}}2 \right) {dz \over z^2} + 1- \log(C) . \end{equation} The function $g(1,C)$ is decreasing for $C>0$, with $$ \lim_{ C\to \infty} g(1,C) = \gamma+\log(4/\pi)\,. $$ \end{lemma} \noindent {\bf Proof}. Since $$ \frac{dg(1,C)}{dC} = {\log( \frac 12 (e^C + e^{-C})) \over C^2}\ -\ {1 \over C}\ < 0, $$ for all $C>0$, we minimize by letting $C\to \infty$. Integrating by parts, we have that $$ \lim_{ C\to \infty} g(1,C) = \int_0^1 \frac{e^{z}-e^{-z}}{e^{z}+e^{-z}} \frac{dz}{z } -2\int_1^\infty \frac{e^{-z}}{e^{z}+e^{-z}} \frac{dz}{z} . 
$$ Now 6.1.50 of \cite{AS} states that $$ \log \Gamma(s)= \int_0^\infty \left( (s-1) e^{-t} - \frac{e^{-t}-e^{-st}} {1-e^{-t}}\right) \frac {dt}t; $$ and the third line of 6.3.22 of \cite{AS} readily implies that \begin{equation} \label{gamma} \gamma = \int_0^1 (1-e^{-t}) \frac{dt}t -\int_1^\infty e^{-t} \frac{dt}t . \end{equation} Taking $s=1/2$ and $t=4z$, and using $\Gamma(1/2)=\pi^{1/2}$, our result follows. \hfill $\Box$ \section{The lower bound for $T$ in Theorem \ref{main_theorem}, and Theorem \ref{Theorem_2} } \label{section5} \subsection{Proof strategy} To establish that $$ {\rm Prob}\Bigl(T\ >\ (\pi/4)(e^{-\gamma} - \epsilon) J_0(x)\Bigr)\ =\ 1 - o(1), $$ we show that the expected number of non-trivial subsets $S$ of $\{ 1,...,J\}$ for which $\prod_{i\in S} a_i$ is a square is $o(1)$, for $J(x) = (\pi/4)(e^{-\gamma}-o(1)) J_0(x)$. \subsection{Structure of a square product} We begin with the following proposition. \begin{proposition} \label{Proposition_5.1} Select integers $a_1,\dots , a_J$ at random from $[1,x]$. The probability that there exists a subsequence $I$ of the $a_i$ with $$ 2\ \leq\ |I|\ \leq\ {\log x \over 2\log\log x} \ \textit{ for\ which\ } \prod_{a \in I} a\ \textit{is a square} $$ is $O(J^2\log x/x)$ provided $J< x^{o(1)}$. \end{proposition} \noindent {\bf Proof}. Suppose that $b_1,\dots ,b_k$ were chosen at random from $[1,x]$. The probability that $b_1b_2\dots b_k$ is a square equals $$ x^{-k} |\{ b_1,\dots ,b_k\ \leq\ x\ :\ b_1b_2\dots b_k \in \mathbb Z^2\}|. $$ Now write each $b_i$ uniquely as $$ b_i\ =\ c_iu_i^2,\ {\rm where\ }c_i\ {\rm is\ squarefree}. $$ Assuming that $b_1 \cdots b_k$ is a square, which implies $c_1 \cdots c_k$ is a square, define the doubly indexed sequence $c_{i,j}$, where $i,j=1,...,k$ and $i \neq j$, to be any sequence satisfying the relations \begin{equation} \label{ci_properties} c_{i,j}\ =\ c_{j,i}, \ \ {\rm with\ }\ c_i\ =\ \prod_{j \neq i} c_{i,j} \ \ {\rm for\ each \ } i. \end{equation} The fact that such $c_{i,j}$ exist can be seen as follows: For each prime $p$ dividing $c_1 \cdots c_k$, we will need to decide which $c_{i,j}$ the prime $p$ divides; and, to do this, suppose that $p$ divides $c_{i_1},...,c_{i_{2t}}$ (the reason it is $2t$ is that all the $c_i$ are square-free and have product a square). Then, the following $c_{i,j}$ are to be divisible by $p$, and no others: $$ c_{i_1,i_2},\ c_{i_2,i_1},\ c_{i_3,i_4},\ c_{i_4,i_3},\ ...,\ c_{i_{2t-1}, i_{2t}},\ c_{i_{2t}, i_{2t-1}}. $$ Each $c_{i,j}$ is then the product of the primes dividing $c_1 \cdots c_k$ which divide it; and if this process leaves some $c_{i,j}$ not divisible by any prime $p | c_1 \cdots c_k$, then we set $c_{i,j} = 1$. \bigskip Given $c_1,...,c_k$, the number of sequences $b_1,...,b_k$ satisfying $b_i = c_i u_i^2$ is the number of possibilities for the numbers $u_i$, of which there are $\leq (x/c_i)^{1/2}$ for each $i$; and so, the probability that $b_1\cdots b_k$ is a square is \begin{eqnarray} &\leq& \frac 1{x^k} \sum_{c_{i,j}\leq x \atop {\rm for} \ 1\leq i<j\leq k} \prod_{i=1}^k \left( \frac{x} {\prod_{j\ne i} c_{i,j}} \right)^{1/2} \nonumber \\ &\leq& \frac 1{x^{k/2}} \prod_{1\leq i<j\leq k} \left( \sum_{c_{i,j}\leq x} \frac{1} {c_{i,j}} \right) \leq \frac 1{x^{k/2}}\ (1 +\log x)^{k(k-1)/2} \end{eqnarray} since each $c_{i,j}$ appears twice in the above product.
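\medskip \noindent For concreteness, here is a small computational sketch of the pairing just described, together with a check of the relations (\ref{ci_properties}). It is purely illustrative (the code, including the name \texttt{build\_cij} and the use of \texttt{sympy} for factoring, is our own and is not part of the argument).
\begin{verbatim}
# Hypothetical sketch (not from the paper): given square-free parts c_1,...,c_k
# whose product is a square, build symmetric c_{i,j} with c_i = prod_{j!=i} c_{i,j}.
from sympy import factorint   # any integer factorisation routine would do
from math import prod

def build_cij(c):
    k = len(c)
    cij = [[1] * k for _ in range(k)]        # default value c_{i,j} = 1
    primes = set().union(*(factorint(ci) for ci in c))
    for p in primes:
        idx = [i for i in range(k) if c[i] % p == 0]
        assert len(idx) % 2 == 0             # 2t indices, as explained above
        for a in range(0, len(idx), 2):      # pair them off: (i_1,i_2), (i_3,i_4), ...
            i, j = idx[a], idx[a + 1]
            cij[i][j] *= p
            cij[j][i] *= p                   # keep c_{i,j} = c_{j,i}
    # verify the defining relations
    for i in range(k):
        assert prod(cij[i][j] for j in range(k) if j != i) == c[i]
    return cij

# Example: c = [6, 10, 15] (product 900 = 30^2) yields
# c_{1,2} = 2, c_{1,3} = 3, c_{2,3} = 5.
\end{verbatim}
\noindent The assertion in the middle of the sketch is exactly the parity observation made above: since the $c_i$ are square-free and their product is a square, each prime divides an even number of them. \medskip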
Therefore the probability that there exists $I\subset \{ 1,2,\dots ,J\}$ for which $\prod_{i\in I} a_i \in \mathbb Z^2$ with $|I|=k$ is $$ \leq {J \choose k} \frac 1{x^{k/2}}\ (1 +\log x)^{k(k-1)/2} \leq \left( \frac{ J^2 (1 +\log x)^{k-1}}{x} \right)^{k/2} $$ which gives $O(J^2\log x/x)$ for $k=2$, and is $\leq 1/x$ for $3\leq k\leq \log x/2\log\log x$. \hfill $\Box$ \subsection{The main argument} In this subsection, we prove that $$ {\rm Prob}\Bigl(T\ >\ (\pi/4)(e^{-\gamma} - \epsilon) J_0(x)\Bigr)\ =\ 1-o(1). $$ As a consequence of the upper bound proved in \cite{CGPT}, we may assume that $T < (3/4)J_0(x)$ holds with probability $1- o(1)$. Furthermore, following Proposition \ref{Proposition_5.1} we need only focus on subsequences $I$ of $a_1,...,a_J$ (where $J = T < J_0(x)$) of length exceeding $\log x/2 \log\log x$ that have product equal to a square. \bigskip Throughout we shall write $a_i=b_id_i$ where $P(b_i)\leq y$ and where either $d_i = 1$ or $p(d_i) > y$, for $1\leq i\leq k$. Recall here that $p(n)$ denotes the smallest and $P(n)$ the largest prime divisor of $n$. If $a_1,\dots ,a_k$ are chosen at random from $[1,x]$ then \begin{eqnarray} \label{5.1} {\rm Prob}(a_1\dots a_k \in \mathbb Z^2)\ &\leq&\ {\rm Prob}(d_1\dots d_k \in \mathbb Z^2)\ \nonumber \\ &=&\ \sum_{d_1,\dots, d_k\geq 1 \atop {d_1\dots d_k \in \mathbb Z^2 \atop d_i=1\ {\rm or\ }p(d_i)>y}} \prod_{i=1}^k \frac{\Psi\left( x/d_i,y\right)}x \nonumber \\ &\leq& \left( \{ 1 +o(1)\} \frac{\Psi(x,y)}x \right)^k \sum_{n = 1\ {\rm or\ }p(n)>y} \frac{\tau_k(n^2)}{n^{2\alpha}}\,, \end{eqnarray} by Proposition \ref{Proposition_2.1}, where $\tau_k(m)$ denotes the number of different ways of writing $m$ as the product of $k$ positive integers. Out of $J=\eta J_0$ integers, the number of $k$-tuples is ${J \choose k}\leq (eJ/k)^k$; and so the expected number of $k$-tuples whose product is a square is \begin{equation}\label{5.2} \leq \left( (e+o(1)) \frac{ \eta y}{k\log y_0} \frac{\Psi(x,y)/y}{\Psi(x,y_0)/y_0} \right)^k \prod_{p>y} \left( 1 + \frac{\tau_k(p^2)}{p^{2\alpha}} + \frac{\tau_k(p^4)}{p^{4\alpha}} + \dots \right) \,. \end{equation} \bigskip We now consider $k$ in two different ranges, and in both ranges we will select different values for $y$, so as to give good upper bounds for (\ref{5.2}): \bigskip $\bullet$ First, if $$ {\log x \over 2\log\log x}\ <\ k\ \leq\ y_0^{1/4}, $$ then let $y=y_0^{1/3}$ so that $k=o(y_0^\alpha)$. Therefore the Euler product in (\ref{5.2}) is $$ \leq \exp \left( O\left( \sum_{p>y}\frac{k^2}{p^{2\alpha}} \right) \right) \leq \exp \left( O\left( \frac{k^2y^{2(1-\alpha)}}{y \log y} \right) \right) =e^{o(k)}. $$ Now $\Psi(x,y_0^\gamma) = x/y_0^{1/\gamma+o(1)}$ by (\ref{psi_estimate_2}) and therefore the quantity in (\ref{5.2}) is \begin{equation}\label{goodbound} \leq \left( \frac{1/y_0^{3+o(1)}} {k/y_0^{2+o(1)}}\right)^k \leq y_0^{-k+o(k)} , \end{equation} which is $<1/x^2$ in this first range for $k$. \bigskip $\bullet$ Next, we consider the range $$ y_0^{1/4}\ \leq\ k\ =\ y_0^\beta\ \leq\ J\ \leq\ J_0. $$ In this case we will choose $y$ so that $[k/C] = \pi(y)$, and we will optimize the choice of $C$ later. For this choice of $y$ a simple calculation reveals that \begin{eqnarray} 1 + {\tau_k(p^2) \over p^{2\alpha}} + {\tau_k(p^4) \over p^{4\alpha}} + \cdots \ &\sim&\ 1 + {(k/p^\alpha)^2 \over 2!} + {(k/p^\alpha)^4 \over 4!} + \cdots \nonumber \\ &=&\ {e^{k/p^\alpha} + e^{-k/p^\alpha} \over 2}. \nonumber \end{eqnarray} In order to evaluate (\ref{5.2}) we need to take the product of this expression over the primes $p > y$.
The logarithm of this product equals $$ \sum_{p > y \atop p\ {\rm prime}} \log\left( \frac{e^{k/p^\alpha} + e^{-k/p^\alpha}}2 \right) \ \sim\ \int_y^\infty {1 \over \log t} \log\left( \frac{e^{k/t^\alpha} + e^{-k/t^\alpha}}2 \right) dt, $$ by the prime number theorem. Letting $z = k/t^\alpha$, from (\ref{y_alpha}) this last integral is $$ \sim\ \int_0^{C/\beta^2} {(k/z)^{1/\alpha} \over z\log(k/z)} \ \log\left( \frac{e^{z} + e^{-z}}2 \right) dz. $$ Now, $k^{1/\alpha} \sim \beta^{-2}\log y$ by (\ref{y_alpha}) so that $$ \frac{(k/z)^{1/\alpha}}{\log (k/z)}\ \sim\ (k/z) \beta^{-2} $$ as $z = o(1)$. It follows that the quantity in (\ref{5.2}) is bounded from above by \begin{equation} \label{gbC} \left ( (1 + o(1)) e^{ g(\beta,C)} \beta \eta {\Psi(x,y)/y \over \Psi(x,y_0)/y_0} \right )^k, \end{equation} where $g(\beta,C)$ is defined in (\ref{eq:def g}). \bigskip Now, for any fixed $C$ we have, as a consequence of Lemma~\ref{first_lemma}, that (\ref{gbC}) is $o(1/x^{2})$ unless $\beta = 1 + o(1)$; and so, we really only need to consider $k = y_0^{1 + o(1)}$, as the total expected number of $k$-tuples for other values of $k$ adds only $o(1/x^{2+o(1)})$. If $C=C(\epsilon)$ is sufficiently large then $ e^{g(1,C)} <4e^\gamma/\pi +\epsilon$ by Lemma~\ref{Lemma_integral1} and, since $\Psi(x,y)/y$ is maximized at $y=y_0$, we deduce that (\ref{5.2}) is at most $$ \leq ( (1+\epsilon) 4\eta e^\gamma/\pi)^k. $$ Therefore, if $\eta< (1-\epsilon) e^{-\gamma} \pi/4$, then this is less than $1/x^2$. \hfill $\Box$ \subsection{Proof of Theorem~\ref{Theorem_2}, part (a)} This last proof yields further useful information: If either $J<(\pi/4)(e^{-\gamma} - \epsilon) J_0(x)$, or if $k<y_0^{1 -o(1)}$ or $k>y_0^{1 +o(1)}$, then the expected number of square products with $k>1$ is $O(J_0(x)^2\log x/x)$, whereas the expected number of squares in our sequence is $\sim J/\sqrt{x}$. This justifies Theorem~\ref{Theorem_2}(a). \subsection{Proof of Theorem \ref{Theorem_2}, part (b)} The proof in Section~3.3 yielded that if we have a square product then, with probability $1-o(1)$, we have $|I|=k=y_0^{1+o(1)}$. We now assume that $k=y_0^{1+o(1)}$ with \begin{equation} \label{k_range} k\ \not\in\ [ y_0 \exp(-(c_3 + \epsilon)\sqrt{\log y_0}),\ y_0 \exp( (c_3 + \epsilon) \sqrt{\log y_0})]. \end{equation} From the discussion following (\ref{gbC}) above, we know, by taking $C$ large, that the expected number of such $k$-tuples is at most $$ \left ( (4e^\gamma/\pi+\epsilon) {\eta \Psi(x,y)/y \over \Psi(x,y_0)/y_0} \right )^k. $$ By Lemma \ref{Lemma_2.2}, this is at most $$ \left ( (4e^\gamma/\pi + \epsilon)(2/e^2 + o(1)) \eta \right )^k\ <\ 1/2^k, $$ for sufficiently small $\epsilon > 0$, using the fact that $\eta < 3/4$. Therefore the expected number of $k$-tuples with product a square is $o(1)$ for all $k$ satisfying (\ref{k_range}), so that Theorem~\ref{Theorem_2}(b) follows. \hfill $\Box$ \subsection{Proof of Theorem \ref{Theorem_2}, part (c)} In the previous subsection we proved that $$ |I|\ \leq\ y_1\ :=\ y_0 \exp( (1+\epsilon) \sqrt{\log y_0\log\log y_0}), $$ with probability $1-o(1)$. In this subsection we prove, among other results, part (c) of Theorem~\ref{Theorem_2}. \begin{proposition} \label{Proposition_5.2} Write each $a_i=b_id_i$ where $P(b_i)\leq y=y_1<p(d_i)$, and suppose that $d_{i_1}\dots d_{i_l}$ is a subproduct which equals a square $n^2$, but such that no proper subproduct of this is a square.
Then, with probability $1-o(1)$, we have $l=o(\log y_0)$ and $n$ is a squarefree integer composed of precisely $l-1$ prime factors, each $< y^2$, where $n\leq y^{2l}$. \end{proposition} \noindent {\bf Proof}. For ease of notation we will relabel, replacing $d_{i_1}\dots d_{i_l}$ by $d_1\dots d_l$. Note that with the choice of $y=y_1$, we have $y/(l\log y)\to \infty$ and $y=y_0^{1+o(1)}$, so we know that $y^{\alpha}\sim y/\log y$ by (\ref{y_alpha}). We now show that $n$ has at least $l-1$ (not necessarily distinct) prime factors, so that $n^2=d_{1}\dots d_{l} > y^{2(l-1)}$: Create a graph $G$ on the $l$ vertices $v_1,\ldots, v_l$ where, for each prime power $p^q$ which exactly divides $n$, draw a total of $q$ edges, placing an edge between pairs of vertices $v_j$ for which $p$ divides $d_j$. Now $G$ is connected, since our square product is minimal, and so must have $\geq l-1$ edges. We now modify the argument from the start of section~3.3 (with $k$ replaced by $l$) to restrict our attention to cases in which $d_1\dots d_l\geq y^{2l}\phi(x)^2$, where $\phi(x)=y^{O(1)}$. \noindent To obtain an upper bound we may multiply through the summand, in (\ref{5.1}), by $(n/(y^{l}\phi(x)))^{2\theta}$, where we have chosen $\theta>0$ so that $y^{2\theta}=(2y\log l)/(l(\log y)^2)$. Then we must multiply the right side of (\ref{5.2}) through by $1/((y^{2\theta})^l \phi(x)^{2\theta})$ and change the terms in the Euler product to $(1 + \tau_l(p^2)/p^{2(\alpha-\theta)} + \tau_l(p^4)/p^{4(\alpha-\theta)} + \dots )$. First we bound the Euler product using the prime number theorem: Recall that the function $\tau_\ell(m)$ counts the number of sequences of positive integers $d_1,...,d_\ell$ such that $ d_1 \cdots d_\ell\ =\ m. $ In the case $m = p^{2k}$, this amounts to computing the number of ordered partitions of $2k$ into $\ell$ parts that are $\geq 0$; so, $$ \tau_\ell(p^{2k})\ =\ {2k + \ell - 1 \choose 2k}\ \leq\ \left \{ \begin{array}{rl} \ell(\ell+1)/2,\ &{\rm if\ } k=1; \\ \frac{(2k+\ell-1)^{2k}}{(2k)!},\ &{\rm if\ } k \geq 2.\end{array}\right. $$ For $ p\ =\ y_0^{1+o(1)}\ =\ L(x)^{1+o(1)}$, using (\ref{y_alpha}) with $\beta=1$, we have that $$ {1 \over p^{2\alpha}}\ \sim\ {\log^2 p \over p^2} $$ so that the sum of the terms involving $p$ in the Euler product becomes $$ \{ 1+o(1)\} \frac{\ell (\ell+1)}2 \cdot \frac{\log^2p}{p^2} \cdot p^{2\theta}\,. $$ Via the prime number theorem the logarithm of the Euler product is therefore $$ \sim \frac{\ell (\ell+1)}2 \sum_{y<p<y^4} \frac{\log^2p}{p^{2-2\theta}} \sim \frac{\ell (\ell+1)}2 \int_{y}^{y^4} \frac{\log t}{t^{2-2\theta}} dt. $$ (That only the primes $p$ with $y < p < y^{4+o(1)}$ are relevant here follows from comments made above the statement of Theorem~\ref{Theorem_2}.) Now $\theta\leq 1/2$ by definition, so the above calculation becomes $$ \sim \frac{\ell (\ell+1)}2 \cdot \frac{\log y}{(1-2\theta) y^{1-2\theta}} = \frac{\ell (\ell+1)}2 \cdot \frac{\log^2 y}{y^{1-2\theta} (1-2\theta)\log y }\,. $$ Now $y^{1-2\theta}=\ell \log^2 y/(2\log \ell)$, so the above is $$ = \frac{ (\ell+1) \log \ell } { \log \left( \ell \log^2 y/(2\log \ell)\right) } = \ell \left( 1 + O\left( \frac{\log\log \ell}{\log \ell}\right) \right) = \{ 1+o_{\ell\to\infty}(1)\} \ell\,.
$$ So putting (\ref{5.2}) to use as explained above, the expected number of such $l$-tuples is \begin{equation} \label{5.39} \leq \frac 1{\phi(x)^{2\theta}} \left( (e+o(1)) \ \frac{ \eta y}{\ell y^{2\theta}\log y_0}\ \frac{\Psi(x,y)/y}{\Psi(x,y_0)/y_0} \right)^l \ e^{(1+o(1)) \ell} \end{equation} \begin{equation} \label{5.399} = \frac 1{\phi(x)^{2\theta}} \left( (e+o(1)) \ \frac{\eta (\log^2 y)}{(2\log \ell)(\log y_0)}\ \frac{\Psi(x,y)/y}{\Psi(x,y_0)/y_0} \right)^l \ e^{(1+o(1)) \ell} \end{equation} \begin{equation} \label{5.4} \ll \frac {1}{\phi(x)^{2\theta}(\log y_0)^{\epsilon l/5}} \,, \end{equation} as $\eta \le 1$, and by Lemma~{\ref{Lemma_2.2}} for $y=y_1$. Now we are ready to establish the conclusions of the proposition. Take $\phi(x)=1/y$ in the above, and as $2\theta<1$ by definition, (\ref{5.4}) becomes $\ll y /(\log y)^{\epsilon \ell/5}$\,. This is $o(1)$ provided $\ell \ge 6 \log y/(\epsilon \log\log y) $, hence we expect $o(1)$ products with $l\gg \log y_0$, yielding $l=o(\log y_0)$ with probability $1-o(1)$. In this case $2\theta\sim 1$. Regarding the structure of the factorization of $n$: Taking $\phi(x)=1$, we expect $o(1)$ products with $d_1\dots d_l\geq y^{2l}$; hence $d_1\dots d_l=n^2<y^{2l}$ with probability $1-o(1)$. Since each prime divisor is $>y$, evidently $n$ has $<l$ prime factors, and so exactly $l-1$. Also, if $p$ is the largest then $y^{l-2}p<y^{l}$, that is $p<y^2$. Finally, we are left with showing that $n$ is squarefree. To obtain an upper bound on the expected number of square products $n^2$ for which $n$ is divisible by the square of a prime $>y$, we proceed much as above with $\phi(x)=1/y$, but now the Euler product has an additional factor $$ \sum_{p>y} \left( \frac{\tau_l(p^4)}{p^{4\alpha}} + \frac{\tau_l(p^6)}{p^{6\alpha}} + \dots \right) \ll \frac{l^4}{(y/\log y)^3} \ll \frac {(\log y)^7}{y^3} . $$ From (\ref{5.4}) we thus deduce that we expect $o(1)$ such square products. $\hfill \Box$ \section{Hypergraphs} \label{sect:Hyper} The main result of this section is to prove the upper bound in Theorem~\ref{main_theorem}. A roadmap for the proof is as follows. Recall that the numbers $a_1 , a_2 , \ldots$, chosen uniformly at random from $\{1,2, \ldots, x\}$, are encoded as row vectors over ${\cal F}_2$. Subsets whose product is a square are determined by combinatorial relations among these row vectors. Schroeppel's method and its variants ignore columns corresponding to primes less than $y_0$. This makes the relations easier to satisfy but we pay for it by requiring $\pi (y_0)$ many relations. To make the search more tractable, we restrict our attention to the more obvious ways of finding linear relations. Schroeppel's original method considers only the most obvious: after removing columns less than $y_0$ we must be left with all zeros. The {\sl one large prime} variation considers also the next most obvious: when we have two identical rows containing a single~1. The upper bound in Theorem~\ref{main_theorem} is proved via the {\sl $k$ large primes variation}. We consider only rows in which at most $k$ ones remain. Tractability of the analysis rests on the fact that the combinatorial structure converges as $x \to \infty$ to a random object built from a Poisson point process. In order for the convergence to be uniform, in addition to restricting $k$, we must restrict the columns: specifically, fixing $M > 0$, we must not use any $a_i$ with a prime factor greater than $M y_0$. 
We must also restrict the combinatorial complexity of the search for linear relations as follows: calling two rows ``neighbors'' if they share a nonzero column (whose index is now forced to be between $y_0$ and $M y_0$), any linear relation must take place within a ball of some fixed radius $m$ in the neighbor graph on rows. We may then prove that the combinatorial structure converges in an appropriate sense to a tree-like random hypergraph defined on a Poisson point process. The number of samples needed to accumulate $\pi (y_0)$ linear relations in the limiting model is computable explicitly in terms of some functions $\gamma_{m,M,k}$. For fixed $m,M,k$, these are ugly, but as $m,M,k \to \infty$, this number decreases to $e^{-\gamma} J_0$. An outline of this section is as follows. Section~\ref{ss:4.1} defines some functions that include the family $\{ \gamma_{m,M,k} \}$. A result (Theorem~\ref{th:quant}) is then formulated in terms of these functions which implies the upper bound in Theorem~\ref{main_theorem}. The subsection ends with the definition of some combinatorial structures such as tree-like hypergraphs that will be used in the search for linear relations. Section~\ref{ss:4.2} formally defines the probability model and the random objects (hypergraphs with distinguished vertices) that will witness linear relations. The number of rows neighboring any given row is shown to have finite first and second moments (Proposition~\ref{pr:counting}), which is then parlayed into an upper bound on the mean of size of the $m$-ball in the neighbor graph on rows. Section~\ref{ss:4.3} constructs the limit object, an informal description of which appears at the beginning of that subsection. Section~\ref{ss:4.4} proves convergence of the random hypergraphs in Section~\ref{ss:4.2} to the limit object of Section~\ref{ss:4.3}. Although it takes several pages, it consists merely of repeated applications of Proposition~\ref{pr:counting}. Section~\ref{ss:4.5} evaluates the probability $\theta_{m,k}^{M,\eta} (\rho)$, which is the probability in the limit model that if a row containing a single~1 in column $\rho y_0$ arises at time $\eta J_0$, it will form a new linear relation. The key result here (Lemma~\ref{lem:theta converge}) is that this is~1 when $m,M,k$ are sufficiently large and $\eta > e^{-\gamma}$. Finally, Section~\ref{ss:4.6} finishes the proof of the main theorems. \subsection{Preliminary results} \label{ss:4.1} To begin in earnest, we define the following functions, which will arise in the branching processes with finite values of $m, k$ and $M$. \begin{eqnarray*} \exp_k (z) & := & \sum_{j=0}^{k-1} \frac{z^j}{j!} \, ; \\ A_M (z) & := & \int_{1/M}^1 \frac{1 - e^{-zt}}{t} \, dt \, . \end{eqnarray*} Clearly, as $k , M \to \infty$, we have the limits \begin{eqnarray*} \exp_k (z) & \uparrow & \exp (z) \, ; \\ A_M (z) & \uparrow & A (z) := \int_0^1 \frac{1 - e^{-zt}}{t} \, dt \, . \end{eqnarray*} Recursively, define functions $\gamma_{m , M , k}$ for $m = 0, 1, 2, \ldots$ by \begin{eqnarray} \gamma_{0,M,k} (u) & := & u \, ; \nonumber \\[1ex] \gamma_{m+1,M,k} (u) & := & u \, \exp_k \left [ A_M (\gamma_{m,M,k} (u)) \right ] \, . \label{eq:gamma} \end{eqnarray} Note that $\gamma_{m,M,k} (u)$ is increasing in all four arguments. From this it follows that $\gamma_{m,M,k} (u)$ increases to $\gamma_{M,k} (u)$ as $m \to \infty$, a fixed point of the map $z \mapsto u \exp_k (A_M (z))$, so that \begin{equation}\label{gamma_u} \gamma_{M,k} (u) := u \, \exp_k \left [ A_M (\gamma_{M,k} (u)) \right ] . 
\end{equation} We now establish that $\gamma_{M,k} (u)<\infty$ except perhaps when $M=k=\infty$: we have $0\leq A_M (z)\leq \log M$ for all $z$, so that $u<\gamma_{M,k} (u)\leq Mu$ for all $u$; in particular $\gamma_{M,k} (u)<\infty$ if $M<\infty$. Also, $A(z)=\log z+O(1)$ which, along with~(\ref{eq:gamma}), implies that $\gamma_{\infty,k}(u) \sim u(\log u)^{k-1}/(k-1)!$; in particular $\gamma_{\infty,k} (u) < \infty$. As $M,k \to \infty$, the fixed point $\gamma_{M , k} (u)$ increases to the fixed point $\gamma (u)$ of the map $z \mapsto u e^{A(z)}$, or to $\infty$ if there is no such fixed point, in which case we write $\gamma(u)=\infty$. In Lemma \ref{lem:theta converge} we show that this map has a fixed point if and only if $u \leq e^{-\gamma}$. Otherwise $\gamma (u) = \infty$ for $u > e^{-\gamma}$ so that \begin{equation} \label{eq:eta star} \int_0^\eta \frac{\gamma (u)}{u} \, du = \infty > 1 \end{equation} for any $\eta>e^{-\gamma}$. \noindent Our main result in this section is the following: \begin{theorem} \label{th:quant} If $\eta , m , M , k$ are such that $$\int_0^\eta \frac{\gamma_{m+1,M,k} (u)}{u} \, du > 1, $$ then with probability approaching~1, as $x \to \infty$, among \, $\eta J_0$ uniform random samples from $\{ 1 , \ldots , x \}$, the $y$-smooth numbers up to $M y$ with at most $k$ large primes will contain a square subproduct. Furthermore, this will be witnessed in diameter at most $m$, in a sense to be made precise in Definitions~\ref{def:chi-marked} and \ref{def:chi} below. \end{theorem} Together with~(\ref{eq:eta star}), this establishes the upper bound in Theorem~\ref{main_theorem}. Our conjecture that the upper bound is sharp is supported by the fact that $\lim_{t \uparrow \eta_*} \int_0^t \frac{\gamma (u)}{u} \, du = 1$. \subsubsection*{Hypergraphs} A \Em{hypergraph} on a vertex set $V$ is simply a collection ${\cal H}$ of finite subsets of $V$ of cardinality at least~2. Each $S \in {\cal H}$ is called a \Em{hyperedge} of ${\cal H}$; the \Em{cardinality} of a hyperedge $S$ is its cardinality as a set. Define the \Em{support} of a hypergraph ${\cal H}$, denoted by ${\rm supp}\, ({\cal H}) := \bigcup_{S \in {\cal H}} S$, to be the union of all of its hyperedges. By a hypergraph ${\cal H}$ with vertex set $V$, we mean that ${\rm supp}\, ({\cal H}) \subseteq V$ (note: in the literature, often this language would imply ${\rm supp}\, ({\cal H}) = V$). We will typically use script letters for hypergraphs: ${\cal G}, {\cal H}$, and so forth. A rooted hypergraph is simply a hypergraph together with a choice of a distinguished element in its support. Thus, the hypergraphs on $V$ rooted at $p$ are in one to one correspondence with hypergraphs on $V$ containing $p$ in their support. \begin{defn}[tree-like hypergraphs] A finite hypergraph ${\cal G}$ rooted at $p$ is \Em{tree-like} if ${\rm supp}\, ({\cal G})$ may be given the structure of a tree $T$, rooted at $p$, in such a way that the following decomposition holds. Let $I$ denote the set of vertices that are not leaves of $T$. We require that for each $q \in I$, the set of children of $q$ may be partitioned into sets $V_{q,1} , \ldots , V_{q , n(q)}$ so that each hyperedge of ${\cal G}$ is equal to $V_{q , j} \cup \{ q \}$ for a unique pair $(q , j)$ with $q \in I$ and $j \leq n(q)$. \end{defn} A moment's thought shows that if ${\cal G}$ is a tree-like hypergraph rooted at $p$ then the tree structure on ${\rm supp}\, ({\cal G})$ satisfying the definition is unique (when $p$ is specified as the root). 
Denote this tree by ${\bf T}_p ({\cal G})$. Sometimes it will be desirable to allow singleton hyperedges (hyperedges consisting of a single vertex, $p$). Rather than change the definitions, we introduce the notion of a \Em{marked hypergraph}. This is just a pair $({\cal G} , U)$, where ${\cal G}$ is a finite hypergraph and $U$ is any subset of ${\rm supp}\, ({\cal G})$. We think of $U$ as telling us (by marking) which singleton edges $\{ p \}$ have been added to ${\cal G}$. Hypergraphs ${\cal G}$ and ${\cal G}'$ are defined to be isomorphic if there is a bijection $\phi : {\rm supp}\, ({\cal G} ) \to {\rm supp}\, ({\cal G}')$ inducing a bijection at the level of hyperedges. Marked hypergraphs $({\cal G} , U)$ and $({\cal G}' , U')$ are isomorphic if $\phi$ can be chosen so that also $\phi (U) = U'$. In what follows, we will require a notion of weak convergence of probability measures on hypergraphs and marked hypergraphs, which in turn requires a metric on the space of marked hypergraphs on the vertex set ${\mathbb R}$ rooted at $p$ (and we will re-normalize, replacing prime $p$ by the real number $\rho=\rho_p:=p/y$, which will thus lie in the fixed interval $(1,M]$). It will turn out that all but a vanishing fraction of our hypergraphs are tree-like, so we need only to define the metric on tree-like hypergraphs (e.g., by convention we take the distance between hypergraphs to be $+\infty$ if either one is not tree-like). If ${\cal G}$ and ${\cal H}$ are two tree-like hypergraphs, define the distance to be $+\infty$ if the two hypergraphs are not isomorphic, and otherwise define the distance to be the least $\epsilon > 0$ such that there is a bijection $\phi : {\rm supp}\, ({\cal G}) \to {\rm supp}\, ({\cal H})$ inducing an isomorphism on the hypergraphs, and satisfying $|\phi (\rho) -\rho| \leq \epsilon$ for all $\rho \in {\rm supp}\, ({\cal G})$. (Here we are dealing with re-normalized values of $p$, that is $\rho_p=p/y$, which are bounded.) In other words, the topology is discrete on the graph structure along with the product topology on the names of the vertices. Formally, $$d({\cal G},{\cal H}) := \min_\phi \left \{ \max_{\rho \in {\rm supp}\, ({\cal G})} |\phi (\rho) - \rho| \; : \; \phi \mbox{ is an isomorphism from } {\rm supp}\, ({\cal G}) \mbox{ to } {\rm supp}\, ({\cal H}) \right \} \, .$$ Define the distance between marked hypergraphs similarly, with $\phi$ now restricted to isomorphisms of the marked hypergraphs. Let $\mu$ and $\mu'$ be two probability measures on the space of hypergraphs on the vertex set ${\mathbb R}$. Say that a random pair $({\cal G} , {\cal G}')$ of hypergraphs is a coupling of $\mu$ and $\mu'$ when ${\cal G}$ has law $\mu$ and ${\cal G}'$ has law $\mu'$. Define the distance $d (\mu , \mu')$ between the probability measures $\mu$ and $\mu'$ to be the infimum of values $\epsilon > 0$ such that there is a coupling $({\cal G} , {\cal G}')$ of $\mu$ and $\mu'$ for which the probability of $d({\cal G} , {\cal G}') > \epsilon$ is at most $\epsilon$. This is a standard metrization of the weak topology, that is, $d (\mu_n , \mu) \to 0$ if and only if $\int f \, d\mu_n \to \int f \, d\mu$ for all bounded and weakly continuous functions $f$. \subsection{The random hypergraph ${\cal G}$ of $(M y)$-smooth numbers} \label{ss:4.2} Before we get started, here are a few words on notation. As before, we are selecting random positive integers $\leq x$, with $y (x)$ and $J_0 (x)$ as in Section~1. 
Also, as before, we will choose an integer $J := \lfloor \eta J_0 \rfloor$ for some $\eta > 0$. We will choose a real $M > 1$ and keep track of large prime factors in the interval $(y , M y)$. By the term \Em{large prime}, we will mean a prime in the interval $(y , M y)$. We will also choose an integer $k \geq 1$ and keep track only of numbers with at most $k$ large prime factors (factors in the interval $(y , M y)$); we may even choose $k = \infty$ in the range implied by the limitations given to the uniformity of (\ref{eq:unif}). We will also specify an integer $m \geq 1$ which is interpreted as the maximum chain length our algorithm will exploit when counting pseudo-smooths, where a chain is a sequence $a_1 , a_2 , \ldots , a_r$, $r \leq m$, such that each consecutive pair $a_i , a_{i+1}$ shares a large prime factor $p_i \in (y , M y)$. The first mission of this subsection is to define a random hypergraph which will depend on $M, J, x, m, k$ and a large prime $p \in (y , M y)$. The full notation for this will be ${\cal G}_{m,k,p}^{M,J,x}$. However, in most of the results and constructions that follow, $k, M$ and $J$ are fixed and $x$ is a size parameter fixed during each construction, while $m$ and $p$ are dynamic (the constructions are recursive in $m$ and $p$ and the proofs inductive). Because of this, we often reduce clutter in the notation by writing simply ${\cal G}_{m,p}$ with the other four parameters understood. The phrase ``$f = o(1)$ as $x \to \infty$, uniformly as $M$ and $\eta$ vary over bounded intervals and $y < p < M y$'' arises in many of our lemmas. To be precise about this once and for all, it means that there is a function $g$, going to zero as $x$ goes to infinity, such that $f (M , J , x, m , k , p) < g (M_0 , \eta , x , m , k)$ for all $M \leq M_0 , J \leq \eta J_0$ and $y < p < M y$ as $x \to \infty$. This holds for any fixed $m, k, M_0 , \eta$. Several times in Section~\ref{ss:conv} below we prove weak convergence results. Note: the need for such convergence results to be uniform, in the manner just described, is the reason we metrized the weak topology. Now we move on to the constructions. Fix an integer $x > 0$ and let $(\Omega_x , {\cal F}_x , {\mathbb P}_x)$ be a probability space on which is defined a sequence $\{ X_1 , X_2 , \ldots \}$ of IID random variables whose common distribution is uniform on the set $\{ 1 , 2 , \ldots , x \}$. Let $y = y_0 (x)$ and $J_0 (x) = x \pi (y) / \psi (x,y)$ be as in Section~1. For each real $M > 1$ and each integer $J > 0$, we will define a random hypergraph on the space $(\Omega_x , {\cal F}_x , {\mathbb P}_x)$, which we will denote by ${\cal G}^{M,J,x}$. Given a real number $M > 1$, we keep track of prime factors up to $M y$ as follows. For any integer $X$ that is $(M y)$-smooth, define the class $[X]$ to be the set of primes $p$ for which $y < p < M y$ and $X$ is divisible by $p$ to an odd power, that is, $p \in [X]$ if and only if $y < p < M y$ and $p^i \, | \, X$ but $p^{i+1} \not| \; X$ for some odd integer $i$. If $X$ is $y$-smooth, we define $[X]$ to be the empty set. If $X$ is not $(M y)$-smooth, we pick a symbol (for probabilists, the traditional symbol is $\Delta$) and set $[X] = \Delta$. Now we define a random hypergraph with vertices in ${\mathbb R}^+$ by $${\cal G} := {\cal G}^{M,J,x} := \left \{ [X_j] \: : \; [X_j] \neq \Delta \mbox{ and } \# [X_j] \geq 2 \right \}_{1 \leq j \leq J} \, .$$ We remark that for a fixed $x$, the random hypergraphs ${\cal G}^{M , J , x}$ are defined simultaneously for all $M$ and $J$.
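\medskip \noindent As a purely illustrative aside (the routine below, including the helper name \texttt{smooth\_class} and the use of \texttt{sympy} for factoring, is our own hypothetical sketch and is not used anywhere in the proofs), the class $[X]$ can be computed by factoring $X$ and recording the large primes that appear to an odd power:
\begin{verbatim}
# Hypothetical illustration (ours, not the paper's): compute the class [X] by
# factoring X and recording primes in (y, My) that divide X to an odd power.
from sympy import factorint   # assumed available; any factoring routine works

DELTA = "Delta"               # sentinel standing for the symbol assigned to
                              # numbers that are not (My)-smooth

def smooth_class(X, y, M):
    cls = set()
    for p, e in factorint(X).items():
        if p > M * y:
            return DELTA      # X is not (My)-smooth: this sample is discarded
        if p > y and e % 2 == 1:
            cls.add(p)        # large prime dividing X to an odd power
    return cls                # empty set if X is y-smooth

# Example with toy parameters (far smaller than y_0(x) in the theorems):
# smooth_class(2**3 * 101 * 103, y=100, M=1.1) == {101, 103}
\end{verbatim}
\noindent Only the parities of the exponents of the large primes matter here, which is why $[X]$ is the natural object to encode as a row vector over ${\mathbb F}_2$. \medskip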
In case it seems strange to take $V = {\mathbb R}^+$ instead of ${\mathbb Z}^+$, it is because we will be taking scaling limits. Some easy but useful estimates are as follows. \begin{proposition} \label{pr:counting} Fix $M > 1$ and $\eta > 0$. Let $J = \lfloor \eta J_0 \rfloor$ and let $[X_1] , [X_2] , \ldots$ and ${\cal G}$ denote the random variables on $(\Omega_x , {\cal F}_x , {\mathbb P}_x)$ constructed above. For any finite set $S$ of primes, let \[ N(S) = \{j : j \leq J; [X_j] = S\} \,.\] \begin{enumerate} \item For any finite set $S$ of primes in $(y , M y)$ with $|S| \geq 2$, the number $N (S)$ has asymptotic mean \begin{equation} \label{eq:means} {\mathbb E}_x N(S) \sim \eta \, \frac{y (\log y)^{|S|-1}}{\prod_{p \in S} p} \, . \end{equation} An upper bound, with an extra factor, is valid for all $S$: \begin{equation} \label{eq:extra factor} {\mathbb E}_x N(S) \leq 2^{|S|+1} \, \eta \, \frac{y (\log y)^{|S|-1}}{\prod_{p \in S} p} \, . \end{equation} \item For any set ${\cal W}$ of hyperedges $S$, let $N({\cal W}) := \sum_{S \in {\cal W}} N(S)$ denote the total number of hyperedges in ${\cal W}$. Then, for any ${\cal W}$, ${\mathbb P}_x (N({\cal W}) \geq 2) \leq ({\mathbb E}_x N({\cal W}))^2$. \item For any $p \in (y , M y)$, the probability that there will be a prime $q \neq p$ such that more than one hyperedge of ${\cal G}$ contains both $p$ and $q$ goes to zero uniformly in $M \leq M_0$, $\eta \leq \eta_0$ and $y < p , q \leq M y$. \end{enumerate} \end{proposition} \noindent{\bf Proof.} The means are computed by counting the number of $a \leq x$ with $[a] = S$. The number of integers of the form $s \prod_{p \in S} p$ up to $x$ where $s$ is $y$-smooth is $\psi (x / \prod_{p \in S} p , y)$. The number of integers of this form that are divisible by $q^2$ for some $q \in S$ is bounded above by $\displaystyle{\sum_{q \in S} \psi \left ( \frac{x}{q \prod_{p \in S} p} , y \right )}$. This is easily shown to be asymptotically negligible compared to $B_S := \displaystyle{\psi \left ( \frac{x}{\prod_{p \in S} p} , y \right )}$ by (\ref{2.7}), using the fact that $\alpha$ remains bounded away from zero, hence the number of $a \leq x$ with $[a] = S$ is asymptotically equal to $B_S$. By (\ref{eq:unif}), and using $\pi (y) \sim y / \log y$, we then have \begin{eqnarray*} {\mathbb E}_x N(S) & \sim & J \frac{\psi ( x / \prod_{p \in S} p , y)}{x} \\[1ex] & \sim & \eta \frac{y (\log y)^{|S|-1}}{\prod_{p \in S} p}\,, \end{eqnarray*} which is (\ref{eq:means}). Using (\ref{eq:unif unif}) instead of (\ref{eq:unif}), and $\pi (y) \leq 2 y / \log y$ instead of $\pi (y) \sim y / \log y$, gives (\ref{eq:extra factor}). The second statement follows because $N({\cal W})$ has a binomial distribution. For the third statement, let $H(p)$ denote the event that there is some $q$ for which more than one hyperedge arises containing $p$ and $q$. Fix any primes $p_1 \neq p_2$. Let ${\cal W}_k$ denote the set of sets $S = \{ p_1 , p_2 , \ldots , p_k \}$ of distinct primes between $y$ and $My$ and let ${\cal W} = \cup_{k \geq 2} {\cal W}_k$. By the second statement of this proposition, an upper bound for $H(p_1)$ may be obtained by summing any upper bound for $({\mathbb E}_x N({\cal W}))^2$ as $p_2$ ranges over primes between $y$ and $My$. We compute this by bounding ${\mathbb E}_x N({\cal W}_k)$, then summing over $k$, squaring, and summing over $p_2$. 
Thus we begin by using (\ref{eq:extra factor}) with $S = \{ p_1 , p_2 , \ldots , p_k \}$ to obtain $${\mathbb E} N(S) \leq 2^{k+1} \eta y (\log y)^{k-1} \prod_{p \in S} \frac{1}{p} \, .$$ Summing this over all choices of $p_3 , \ldots , p_k$ and using (\ref{eq:PNT 2}) for the last inequality then gives \begin{eqnarray*} {\mathbb E} N({\cal W}_k) & \leq & \frac{2^{k+1} \eta \, y \log y}{p_1 p_2} \sum_{p_3 < \cdots < p_k} \prod_{j=3}^k \frac{\log y}{p_j} \\[1ex] & \leq & \frac{2^{k+1} \eta \, y \log y}{p_1 p_2} \frac{1}{(k-2)!} \sum_{p_3 , \cdots , p_k} \prod_{j=3}^k \frac{\log y}{p_j} \\[1ex] & \leq & \frac{2^{k+1} \eta \, y \log y}{p_1 p_2} \frac{1}{(k-2)!} \prod_{j=3}^k (2 \log M) \, . \end{eqnarray*} We sum this over all integers $k\geq 3$ so that $${\mathbb E} N({\cal W}) \leq \frac{8 M^4 \eta y \log y}{p_1 p_2} \leq \frac{8 M_0^4 \eta_0 \log y}{ p_2}\, ,$$ since $y/p_1<1$. Squaring, noting that $1/p_2<1/y$ and $\log y<\log p_2$, we obtain a quantity bounded above by a constant multiple of $$\frac{\log y}{y} \sum_{y < p_2 \leq My} \frac{\log y}{p_2}$$ By~(\ref{eq:PNT 1}) this is $\displaystyle{O(\frac{\log y}{y})}$; this completes the proof, as we only needed to show $o(1)$. $\hfill \Box$ We now define sub-hypergraphs ${\cal G}_{m,p}$ of the random hypergraph ${\cal G}$, culled so as to be tree-like and rooted at $p$. They are deterministic functions of the variables $X_1 , \ldots , X_{J}$, and they will bear witness to the creation of pseudo-smooth numbers. They depend on the parameters $M , J, x$ and $k$, which are fixed throughout the construction and suppressed in the notation. We remark that the definition makes sense for $k = \infty$. \begin{defn}[The sub-hypergraph ${\cal G}_{m,p}$ and marked set $U_{m,p}$] \label{def:G} We define hypergraphs ${\cal G}_{m,p} (j)$ recursively for $m \geq 1$ and $1 \leq j \leq J$ as follows. \begin{itemize} \item Let $T_0 (p) := \{ p \}$ and ${\cal G}_{0,p} := \emptyset$, taking ${\rm supp}\, ({\cal G}_{0,p}) = \{ p \}$ by convention. \item For each $m \geq 1$, define ${\cal G}_{m,p} (0) := {\cal G}_{m-1,p}$. For $j \geq 1$, define ${\cal G}_{m,p} (j) := {\cal G}_{m,p} (j-1) \cup \{ [X_j] \}$ if $[X_j]$ intersects ${\rm supp}\, ({\cal G}_{m,p} (j-1))$ in a single element of $T_{m-1} (p)$ and $2 \leq |[X_j]| \leq k$. Otherwise, let ${\cal G}_{m,p} (j) := {\cal G}_{m,p} (j-1)$. Define ${\cal G}_{m,p} := {\cal G}_{m,p} (J)$. Define $T_m (p) := {\rm supp}\, ({\cal G}_{m,p}) \setminus {\rm supp}\, ({\cal G}_{m-1,p})$. \end{itemize} Let $U$ denote the set of primes $q$ with $y < q < M y$ such that $[X_j] = q$ for some $j \leq J$. Let $U_{m,p} := U \cap {\rm supp}\, ({\cal G}_{m,p})$. Then $({\cal G}_{m,p} , U_{m,p})$ is a marked sub-hypergraph, which we will use later to witness the creation of pseudo-smooths. \end{defn} Informally, ${\cal G}_{1,p}$ takes all hyperedges of ${\cal G}$ that contain $p$ except for those creating a collision (that is, a cycle on hyperedges), using the order in which they were generated to settle collisions. Then, ${\cal G}_{2,p}$ starts over, taking all hyperedges containing each of the vertices added in the previous step, except for those that cause collisions. In the end, the list of hyperedges is swept through, in order, $m$ times. The informal interpretation of $T_m (p)$ is the set of primes that first appear at distance $m$ from $p$ in our tree-like hypergraph; the informal interpretation of $U_{m,p}$ is the set of primes within distance $m$ of $p$ that appear as hyperedges of cardinality one. 
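\medskip \noindent The recursion in Definition~\ref{def:G} is straightforward to carry out mechanically. The following short sketch (hypothetical Python code written only to mirror the definition; the names \texttt{build\_G} and \texttt{classes} are ours) performs the $m$ sweeps through the list of classes, growing the hypergraph rooted at $p$ and collecting the marked singletons:
\begin{verbatim}
# Hypothetical sketch (ours) of the recursive definition above: build G_{m,p}
# and U_{m,p} from the classes [X_1],...,[X_J]; each class is a frozenset of
# primes, or None standing for Delta.
def build_G(classes, p, m, k):
    support = {p}          # supp(G_{m,p}) constructed so far
    frontier = {p}         # T_{m-1}(p): vertices added in the previous sweep
    hyperedges = []        # accepted hyperedges, in order of acceptance
    for _ in range(m):     # sweep through the whole list m times, in order
        new_vertices = set()
        for S in classes:
            if S is None or not (2 <= len(S) <= k):
                continue   # Delta, singletons and over-long classes never attach
            meet = S & support
            # accept S when it meets the current support in exactly one vertex,
            # and that vertex entered the support during the previous sweep
            if len(meet) == 1 and next(iter(meet)) in frontier:
                hyperedges.append(S)
                new_vertices |= S - support
                support |= S
        frontier = new_vertices        # this is T_m(p) for the next sweep
        if not frontier:
            break
    marks = {q for q in support
             if any(S is not None and S == frozenset([q]) for S in classes)}
    return hyperedges, support, marks  # (G_{m,p}, supp(G_{m,p}), U_{m,p})
\end{verbatim}
\noindent Each accepted class attaches to the existing support at a single vertex from the previous sweep, all of its other primes entering as new vertices; the marked set $U_{m,p}$ simply records which vertices of the support also arise as singleton classes. \medskip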
\begin{lemma} \label{lem:moments} For any $\eta, M, x$ and $p$, $${\mathbb E}_x \left | {\cal G}_{1,p} \right | \leq (2M - 1) \eta \frac{y}{p} \, .$$ \end{lemma} \noindent{\bf Proof.} By construction, the hypergraph ${\cal G}_{1,p}$ is a subset of the restriction of ${\cal G}$ to hyperedges containing $p$. Therefore, $${\mathbb E}_x |{\cal G}_{1,p}| \leq \sum_S {\mathbb E}_x N(S)$$ where the sum is over such sets $S$. Break down the sum by the cardinality of $S$. The sum over $|S| = k$ is $1 / (k-1)!$ times the sum over ordered sets of primes $p = p_1 , p_2 , \ldots , p_k$ in the range $(y , M y)$. The sum over ordered such sets is bounded above by the sum over ordered $k$-tuples in which repetition is allowed. Thus $${\mathbb E}_x |{\cal G}_{1,p}| \leq \sum_{k \geq 2} \frac{1}{(k-1)!} \sum_{p_2 , \ldots , p_k} {\mathbb E}_x N(p , p_2 , \ldots , p_k)$$ where the summand is zero, by convention, if there is a repetition. When there is no repetition, we obtain an estimate from (\ref{eq:means}), which implies the upper bound $${\mathbb E}_x |{\cal G}_{1,p}| \leq \sum_{k \geq 2} \frac{1}{(k-1)!} \sum_{p_2 , \ldots , p_k} \eta \frac{y}{p} \prod_{i=2}^k \frac{\log y}{p_i} \, .$$ The inner sum factors as a power, yielding $${\mathbb E}_x |{\cal G}_{1,p}| \leq \eta \frac{y}{p} \, \sum_{k \geq 2} \frac{1}{(k-1)!} \left ( \sum_{y < q < M y} \frac{\log y}{q} \right )^{k-1} \, .$$ By the prime number theorem, $\sum_{y < q < M y} (\log y)/q \to \log M$, and is never more than $\log (2M)$, whence $${\mathbb E}_x |{\cal G}_{1,p}| \leq \eta \frac{y}{p} \, \sum_{k \geq 2} \frac{(\log (2M))^{k-1}}{(k-1)!} = (2M - 1) \eta \frac{y}{p} \, . $$ $\hfill \Box$ \begin{corollary} \label{cor:size} $${\mathbb E}_x |{\cal G}_{1,p}|^2 \leq {\mathbb E}_x |{\cal G}_{1,p}| + \left ( {\mathbb E}_x |{\cal G}_{1,p}| \right )^2$$ and $${\mathbb E}_x |{\cal G}_{m,p}| \leq \left (1 + 2 \eta M \right )^m \frac{y}{p} \, .$$ \end{corollary} \noindent{\bf Proof.} For the first statement, note that for $S \neq T$, the events $\{ S \in {\cal G}_{1,p} \}$ and $\{ T \in {\cal G}_{1,p} \}$ are negatively correlated. (Recall that two events are negatively correlated if the probability of their conjunction is {\em at most} the product of the probabilities of the events.) This is because the events $\{ [X_i] = S \}$ and $\{ [X_j] = T \}$ are independent, unless $i=j$, in which case they are negatively correlated. It follows that \begin{eqnarray*} {\mathbb E}_x |{\cal G}_{1,p}|^2 & \leq & \sum_{S,T} {\mathbb P}_x \left ( S , T \in {\cal G}_{1,p} \right ) \\ & \leq & \left ( {\mathbb E}_x |{\cal G}_{1,p}| \right )^2 + {\mathbb E}_x |{\cal G}_{1,p}| \, . \end{eqnarray*} \noindent For the second statement, induct on $m$. Conditional on ${\cal G}_{m-1,p}$, the random hypergraph ${\cal G}_{m,p}$ is stochastically dominated by the union of ${\cal G}_{m-1,p}$ with a collection of hyperedges whose conditional distribution given ${\cal G}_{m-1,p}$ is described as follows: for each $q \in T_{m-1} (p)$, and for each finite subset $S$ of primes in $(y , My)$ containing $q$, the hyperedge $S$ is added independently with probability $N(S)$. By induction, the mean number of such $q$ is at most $(1 + 2 \eta M)^{m-1} y / p$. Bounding the mean of each Poisson variable from above by $2 \eta M$, we complete the induction.
$\hfill \Box$ \ The number of pseudo-smooths generated by time $j$, by definition, is the difference between $j$ and the ${\mathbb F}_2$-rank of the collection $[X_1] , \ldots , [X_j]$, made into an ${\mathbb F}_2$-vector space by using the symmetric difference operation $[X_i] {\oplus} [X_j]$. To count this, we count the number of $j$ for which $[X_j]$ is in the ${\oplus}$-span of $[X_1] , \ldots , [X_{j-1}]$, which we denote by $\langle [X_1] , \ldots , [X_{j-1}] \rangle$. This includes the cases where $[X_j] = \emptyset$ ($y$-smooth numbers) and where $[X_j] = [X_i]$ for some $i < j$ (the one large prime case), as well as more complicated cases. It turns out that not much is lost if we include only one more class of cases. For each prime $p$ in the interval $(y , M y)$, and each positive integer $j$, we define an event $\chi_{m,k,p}^{M,j}$ whose informal interpretation is that $\{ p \}$ is in the span $\langle [X_1] , \ldots , [X_j] \rangle$ and that this fact is witnessed by classes $[X_i]$ of cardinality at most $k$, having indices $i \leq j$. A proposition immediately following the definition verifies the interpretation. The parameters $k,x,j$ and $M$ will now be fixed throughout the definition and suppressed from the notation. \begin{defn}[$\chi$ for general marked rooted trees]\label{def:chi-marked} ~\\ \vspace{-0.25in} \begin{enumerate} \item Let $(G , U)$ be any tree-like marked hypergraph rooted at a vertex $p$. For $q \in {\rm supp}\, (G)$, define the height $\ell (q)$ to be the length of the longest non-backtracking path from $q$ to the leaves of $G$, or more accurately, of the tree ${\bf T}_p (G)$. \item Define an event $\chi (q) = \chi (G,U,q)$ by recursion on $\ell (q)$. If $\ell (q) = 0$, define the event $\chi (q)$ to hold if and only if $q \in U$. If $\ell (q) > 0$, let $r$ denote the distance from $p$ to $q$ in ${\bf T}_p (G)$ and define $\chi (q)$ to hold if and only if there is some hyperedge $S \in G$ such that~$(i)$~$S \subseteq T_{r+1} (p) \cup \{ q \}$ (that is, $S$ is a hyperedge that appears first at distance $r+1$ from $p$, and is a ``child'' of $q$), and~$(ii)$ the event $\chi (q')$ occurs for each $q' \in S$ other than $q$. \item Finally, let $\chi (G,U)$ denote the event $\chi (G,U,p)$. \end{enumerate} \end{defn} \begin{unremarks} Note that the recursion is well founded because $\ell (q') \leq \ell (q) - 1$ for all such $q'$. Also note that in the recursive part of the definition, we allow $S$ to equal $\{ q \}$, in which case~$(ii)$ is vacuously satisfied. \end{unremarks} \begin{defn}[smooth primes witnessed in an $m$-neighborhood] \label{def:chi} \ \noindent If ${\cal G}_{m,p}$ is not tree-like, we define $\chi_{m,p}$ not to occur. If ${\cal G}_{m,p}$ is tree-like, we define $\chi_{m,p} (q) := \chi ({\cal G}_{m,p} , U_{m,p} , q)$, whence $$\chi_{m,p} := \chi ({\cal G}_{m,p} , U_{m,p}, p) \, .$$ \end{defn} Let ${\bf V}$ denote the vector space over ${\mathbb F}_2$ whose basis is the set of symbols $$\{ \delta_p : p \mbox{ is a prime and } y < p < My \}.$$ Identify each class $[X]$ with the element $\sum_{p \in [X]} \delta_p$ of ${\bf V}$. In the following proposition, $\langle [X_1] , \ldots , [X_j] \rangle$ denotes the span of $\{ [X_1] , \ldots , [X_j] \}$ in ${\bf V}$. \begin{proposition} \label{pr:branching} For any $m \geq 1$, the event $\chi_{m,p} (q)$ implies $\{ q \} \in \langle [X_1] , \ldots , [X_j] \rangle$.
In particular, $$\chi_{m,p} \; \Longrightarrow \; \{ p \} \in \langle [X_1] , \ldots , [X_j] \rangle \, .$$ \end{proposition} \noindent{\bf Proof.} By induction on $\ell (q)\geq 0$. If $\ell (q) = 0$ then $\chi_{m,p} (q)$ implies $[X_j] = \{ q \}$ for some $j \leq J$, which immediately implies $\{ q \} \in \langle [X_1] , \ldots , [X_j] \rangle$. Now suppose $\ell (q) \geq 1$. If $\chi_{m,p} (q)$ holds, let $j$ be an index for which the hyperedge $S = [X_j]$ satisfies $(i)$ and $(ii)$ of the definition. For each $q' \in [X_j]$ distinct from $q$, $\ell (q') \leq \ell (q) - 1$, whence by induction, $\{ q' \} \in \langle [X_1] , \ldots , [X_j] \rangle$ for all such $q'$. This, along with the trivial observation that $[X_j] \in \langle [X_1] , \ldots , [X_j] \rangle$, implies $\{ q \} \in \langle [X_1] , \ldots , [X_j] \rangle$, which completes the induction. $\hfill \Box$ \ It follows from this that for any $m$, the number of linear dependences among \\ $\{ [X_1] , \ldots , [X_J] \}$ is bounded from below by \begin{equation} \label{eq:count} \# \{ j \leq J : \mbox{ for all } p \in [X_j], \mbox{ the singleton } \{ p \} \mbox{ is in the span } \langle [X_1] , \ldots , [X_{j-1}] \rangle \} \, . \end{equation} \subsection{Construction of the limit object ${\cal H}_m$} \label{ss:4.3} An informal description of the limit object is as follows. The root, $\rho$, gets hyperedges $\{ \rho , \rho_1 , \ldots , \rho_k \}$ independently, with the probability of such a hyperedge arising in a small volume element $\{ \rho \} \times [\rho_1 , \rho_1 + d\rho_1] \times \cdots \times [\rho_k , \rho_k + d\rho_k]$ equal to $$\frac{d\rho_1 \cdots d\rho_k}{\rho \, \rho_1 \cdots \rho_k} \, .$$ Recursively, for $m$ iterations, each vertex newly added in the last iteration gets new hyperedges in the same way. Formally, the limit object is best described in terms of Poisson processes. We briefly summarize definitions and properties of these, referring the reader to~\cite{poisson} for further details. Given a measure space $({\cal S} , {\cal B})$ with a $\sigma$-finite measure $\mu$, a Poisson process with intensity $\mu$ is a collection of random variables $\{ N(S) = N(S)(\omega) : S \in {\cal B} \}$ on some probability space $(\Omega , {\cal F} , {\mathbb P})$ satisfying the following properties: (1) Countable additivity in $S$: if ${\cal A}$ is a countable collection of disjoint elements of ${\cal B}$ then $N \left( \bigcup_{S \in {\cal A}} S\right) = \sum_{S \in {\cal A}} N(S)$; (2) Counting measure: $N(S)$ takes values in the nonnegative integers; (3) Poisson distribution: for fixed $S$, the random variable $N(S)$ has a Poisson distribution with mean $\mu (S)$; (4) Independence: if $S,T$ are disjoint elements of ${\cal B}$ then $N(S)$ and $N(T)$ are independent. \noindent A number of constructions are available to prove the existence of such a process. If $\mu$ is nonatomic, then with probability~1, the random counting measure $N$ gives measure at most~1 to every point $s \in {\cal S}$. It follows that the random measure $N$ is the sum of point masses $\delta_s$, as $s$ ranges over some finite or countable subset of ${\cal S}$; we denote this set by ${\rm supp}\, (N)$ and refer to ${\rm supp}\, (N)$ as ``the points of the Poisson process''. The cardinality of ${\rm supp}\, (N)$ is a Poisson random variable with mean $\mu ({\cal S})$. Fix a real number $M > 1$. Fix also a real $\eta > 0$ and an integer $k \geq 2$.
We construct a random hypergraph ${\cal H}_{m,\rho} = {\cal H}_{m,k,\rho}^{M,\eta}$ on a new probability space $(\Omega , {\cal F} , {\mathbb P})$ whose vertex set is the real interval $[1,M]$. The collection $[1,M]_j$ of subsets of $[1,M]$ of cardinality $j$ may be identified with the sector $W_j \subseteq {\mathbb R}^j$ defined by $$W_j := \{ (\rho_1 , \ldots , \rho_j) \in {\mathbb R}^j : 1 \leq \rho_1 < \cdots < \rho_j \leq M \} \, .$$ Let $d{\bf p} / (\rho_1 \cdots \rho_j)$ denote the image under this identification of the measure whose density with respect to Lebesgue measure is $1 / (\rho_1 \cdots \rho_j)$. Observe that the total mass of the measure $d{\bf p} / (\rho_1 \cdots \rho_j)$ is given by $(\log M)^j / j!$. Now define a measure $\mu_k$ on the union $\bigcup_{j=1}^k [1,M]_j$ by $\mu_k = \sum_{j=1}^k d{\bf p} / (\rho_1 \cdots \rho_j)$. Let $\mu$ denote the increasing limit of $\mu_k$ as $k \to \infty$. We see that $\mu$ has finite total mass: $$||\mu|| \, = \, \sum_{j=1}^\infty \frac{(\log M)^j}{j!} \, = \, M - 1 \, .$$ Fix $\rho \in [1,M]$ and define an operation $\sigma_\rho$ by $\sigma_\rho (S) = S \cup \{ \rho \}$. Define the measure $\mu_k^{+\rho}$ by $\mu_k^{+\rho} = \mu_k \circ \sigma_\rho^{-1}$. In other words, $\mu_k^{+\rho}$ is the measure corresponding to ``choosing a set according to $\mu_k$'' and then adding the element $\rho$. (Here the quotes are to remind the reader that the finite measure $\mu_k$ is not a probability measure). Thus all the measures $\mu_k^{+\rho}$ as well as the increasing limit $\mu^{+\rho}$ are supported on finite sets of cardinality at least~2. Let $\tau \in [1,M]$ (here $\tau$ plays the role of $q/y$, just as $\rho$ plays the role of $p/y$). Let $\nu_\tau = \nu_{k,\tau}^{M ,\eta}$ (as usual, we suppress quantities that are, for the moment, fixed) be the law of the points of a Poisson process with intensity $\eta \, \mu^{+\tau} / \tau$. Observe that each point of the process is a finite subset $S$ of $[1,M]$ with $\tau \in S$. Because the intensity measure has finite mass, the law of the set of points is the law of a random finite set of hyperedges $S \subseteq [1,M]$. By non-atomicity of Lebesgue measure, we see that with probability~1, this is a tree-like hypergraph rooted at $\tau$, all of whose hyperedges contain $\tau$. \begin{defn}[The marked graph $({\cal H}_{m,\rho} , \tilde{U}_{m,\rho})$] \label{def:hyp} We now construct the random hypergraphs ${\cal H}_{m,\rho} = {\cal H}_{m,k,\rho}^{M , \eta}$, by recursion on $m$. For $m=1$, choose ${\cal H}_{1,\rho}$ from the law $\nu_\rho$. For $m \geq 1$, let $T_{m,\rho} = {\rm supp}\, ({\cal H}_{m,\rho}) \setminus {\rm supp}\, ({\cal H}_{m-1,\rho})$, taking ${\rm supp}\, ({\cal H}_{0,\rho}) = \{ \rho \}$ by convention. For the recursion step, choose random hypergraphs ${\cal H}_{1 , \tau}$ independently from the respective laws $\nu_\tau$, as $\tau$ varies over $T_{m,\rho}$, and let ${\cal H}_{m+1,\rho}$ be the union of ${\cal H}_{m,\rho}$ with all the sets ${\cal H}_{1 , \tau} \,$. It is again immediate that each ${\cal H}_{m,\rho}$ is tree-like. Finally, we define a set of marks $\tilde{U}_{m,\rho}$\,, by choosing each $\tau \in {\rm supp}\, ({\cal H}_{m,\rho})$ independently, with probability $1 - e^{- \eta / \tau}$. \end{defn} Now, using Definition~\ref{def:chi} once more, define events \begin{eqnarray*} \chi_{m,\rho}' (\tau) & := & \chi ({\cal H}_{m,\rho} , \tilde{U}_{m,\rho} , \tau) \, ; \\ \chi_{m,\rho}' & := & \chi ({\cal H}_{m,\rho} , \tilde{U}_{m,\rho}) \, .
\end{eqnarray*} These are events on the space $\Omega$ analogous to the events $\chi_{m,p} (q)$ and $\chi_{m,p}$ defined on the space $\Omega_x$. Denote $$\theta_m (\rho) := \theta_{m,k}^{M,\eta} (\rho) := {\mathbb P} (\chi_{m,\rho}') \, .$$ \subsection{Convergence of ${\cal G}$ to ${\cal H}$, and consequently, of ${\mathbb P}_x (\chi)$ to $\theta$} \label{ss:conv} \label{ss:4.4} In this subsection we prove convergence results which will be used to compute the rate of accumulation of pseudo-smooth numbers. \begin{theorem} \label{th:chi} Fix integers $m,k \geq 1$ and any real $M > 1$. Then \begin{equation} \label{eq:chi} {\mathbb P}_x (\chi_{m,k,p}^{M,j}) = (1 + o(1)) \theta_{m,k}^{M,j/J_0} (p/y) \end{equation} uniformly as $p$ varies over primes in the interval $(y , M y)$ and $j/J_0$ remains bounded. More generally, for any $r \geq 1$ and any $p_1 , \ldots , p_r$, \begin{equation} \label{eq:chi square} {\mathbb P}_x \left ( \bigcap_{i=1}^r \chi_{m,k,p_i}^{M,j} \right ) = (1 + o(1)) \prod_{i=1}^r \theta_{m,k}^{M,j / J_0} (p_i/y)\,, \end{equation} uniformly as $p_1 , \ldots , p_r$ vary over primes in the interval $(y , M y)$. \end{theorem} The proof of this theorem is essentially to show that the rescaled random graph $y^{-1} {\cal G}_{m,p}$ converges weakly to ${\cal H}_{m , p/y}$. We encapsulate what we need in the following lemmas. All of these are routine Poisson convergence lemmas. In each case, the lemmas hold for any fixed $k$, and with $k = \infty$ in the range of uniformity given for (\ref{eq:unif}). \begin{lemma} \label{lem:conv 1} As $x \to \infty$, the distance in the weak metric between the random hypergraph ${\cal G}_{1,p}^{M,j,x}$ and the random hypergraph ${\cal H}_{1,p/y}^{M,j/J_0}$ goes to zero, uniformly as $M$ and $j/J_0$ vary over bounded intervals and $y < p < M y$. \end{lemma} \noindent{\bf Proof.} As a preliminary computation, let ${\cal G}_{1,p}'$ denote the subset of ${\cal G}$ of all hyperedges containing $\{ p \}$. We claim that ${\mathbb P} ({\cal G}_{1,p} = {\cal G}_{1,p}') \to 1$. Indeed, the complementary event requires that a collision occur, entailing two hyperedges both to contain $\{ p \}$ and $\{ q \}$ for some $q$. By the last part of Proposition~\ref{pr:counting}, this probability goes to zero uniformly (and even for $k = \infty$ in the range allowed by using (\ref{eq:unif})). Next, let $\Xi = (\tau_1, \tau_1'] \times \cdots \times (\tau_n, \tau_n']$ be any rectangular subset of the sector $W_n$ and let $\Xi_x$ denote the set of sets, $S$, of $n$ primes, each between $y$ and $My$, such that $y^{-1} S \in \Xi$. As in Proposition~\ref{pr:counting}, let $N(\sigma_p (\Xi_x))$ denote the number of $j \leq J$ such that $[X_j] \in \sigma_p (\Xi_x)$. Using (\ref{eq:means}), we estimate \begin{eqnarray*} {\mathbb E}_x N(\sigma_p(\Xi_x)) & = & \sum_{S \in \sigma_p (\Xi_x)} {\mathbb E}_x N(S) \\[1ex] & \sim & \sum_{S \in \sigma_p (\Xi_x)} \eta \frac{y (\log y)^n} {p \, \prod_{q \in S} q} \, . \end{eqnarray*} Factoring the sum of products gives the equivalent expression $${\mathbb E}_x N(\sigma_p (\Xi_x)) \sim \eta \frac{y}{p} \prod_{i=1}^n \sum_{\tau_i y < q \leq \tau_i' y} \frac{\log y}{q} \, .$$ By the prime number theorem, this converges to $\nu_{p/y} (\Xi)$. Finally, let us see that $y^{-1} {\cal G}_{1,p}$ converges to a Poisson process with intensity $\nu_\rho$ where $\rho=p/y$; by construction, this is the distribution of ${\cal H}_{1,\rho}$, and therefore this will complete the proof of the lemma. 
We need to show that for any disjoint sets $\Xi^{(1)} , \ldots , \Xi^{(n)}$, the respective numbers $N^{(i)}$ of hyperedges in $y^{-1} {\cal G}_{1,p}$ in $\Xi^{(i)}$ converge in distribution to independent Poissons with means $\nu_\rho (\Xi^{(i)})$. It suffices to prove this for ${\cal G}_{1,p}'$ in place of ${\cal G}_{1,p}$ because we have seen these are equal with probability $1 - o(1)$. We have already verified that the means are $\nu_\rho (\Xi^{(i)})$ when $\Xi^{(i)}$ are rectangles, which implies the same result for all measurable $\Xi$. To obtain the joint Poisson distribution, it is easiest to Poissonize. Replace ${\cal G}_{1,p}'$ by ${\cal G}_{1,p}''$, defined identically to ${\cal G}_{1,p}'$ except with $J$ replaced by a Poisson variable $J'$ of mean $J$. For this random graph, the numbers $(N^{(i)})''$ of hyperedges of ${\cal G}_{1,p}''$ in the rescaled $\Xi^{(i)}$ are exactly independent Poissons with the given means. The key observation is that $${\mathbb P}_x ({\cal G}_{1,p}' \neq {\cal G}_{1,p}'') = O(J_0^{-1/2}) \, .$$ To see this, note that ${\mathbb E}_x |J' - J| = O(\sqrt{J_0})$. Therefore, \begin{equation} \label{eq:poissonize} {\mathbb P}_x ({\cal G}_{1,p}' \neq {\cal G}_{1,p}'') = O \left ( \sqrt{J_0} \; {\mathbb P}_x (p \in [X_1]) \right ) = O \left ( J_0^{-1/2} \; {\mathbb E}_x |{\cal G}_{1,p}'| \right ) = O \left ( J_0^{-1/2} \right )\,, \end{equation} by Corollary~\ref{cor:size}. $\hfill \Box$ \begin{lemma} \label{lem:conv 2} As $x \to \infty$, the distance in the weak metric between the $n$-tuple of random hypergraphs $$y^{-1} \left ( {\cal G}_{1,p_i}^{M,j,x} \right )_{1 \leq i \leq n}$$ and the product of the laws of the hypergraphs ${\cal H}_{1,p_i/y}^{M,j/J_0}$ goes to zero, uniformly as $M$ and $j/J_0$ vary over bounded intervals and $y < p_i < M y$. \end{lemma} \noindent{\bf Proof.} This is the same proof with only one difference, as follows. To check that ${\cal G}_{1,p_i} = {\cal G}_{1,p_i}'$ with probability tending to~1, one observes that~(3) of Proposition~\ref{pr:counting} holds simultaneously for $p_1 , \ldots , p_n$. All else is the same, once one observes that Poissonization gives (\ref{eq:poissonize}) simultaneously for all $p_1 , \ldots , p_n$. $\hfill \Box$ \begin{lemma} \label{lem:conv 3} As $x \to \infty$, the distance in the weak metric between the random hypergraph $y^{-1} {\cal G}_{m,p}^{M,j,x}$ and the random hypergraph ${\cal H}_{m,p/y}^{M,j/J_0}$ goes to zero, uniformly as $M$ and $j/J_0$ vary over bounded intervals and $y < p < M y$. Similarly, the distance between the law of the random $n$-tuple $\displaystyle{y^{-1} ({\cal G}_{m,p_i}^{M,j,x})_{1 \leq i \leq n}}$ and the product of the laws of ${\cal H}_{m,p_i/y}^{M,j/J_0}$ goes to zero with the same uniformity in $M,j/J_0$ and $\{ p_i \}$. \end{lemma} \noindent{\bf Proof.} We induct on $m$. For $m=1$ this was shown in Lemma~\ref{lem:conv 1}. Now let $m \geq 2$ and assume for induction that the result holds for $m-1$. If ${\cal G}_{m,p}$ is tree-like, let $r := |T_1 (p)|$ and let $G_1 , \ldots , G_r$ denote the subtrees of ${\bf T}_p ({\cal G}_{m,p})$ rooted at the vertices $q_1 , \ldots , q_r$ of $T_1 (p)$. Let ${\cal G} (1) , \ldots , {\cal G} (r)$ denote the corresponding hypergraphs, that is, ${\cal G} (i)$ is the hypergraph rooted at $q_i$ whose hyperedges are those of ${\cal G}_{m,p}$ whose support is a subset of the vertices of $G_i$.
We will show that the joint conditional distribution of $y^{-1} ({\cal G} (1) , \ldots , {\cal G} (r))$ given ${\cal G}_{1,p}$ converges to the product of the laws of ${\cal H}_{m-1 , q_i/y}$. By the recursive construction of ${\cal H}_{m,p/y}$ and the fact that ${\cal G}_{m,p}$ is tree-like with probability approaching~1, this will complete the proof of the lemma. Consider the hypergraph ${\cal G}_{m-1 , q_i}'$. If this is tree-like, let $H_i$ be the subtree obtained by removing the unique hyperedge containing $p$ and $q_i$, and restricting to the connected component rooted at $q_i$. If these are disjoint for $1 \leq i \leq r$, then ${\cal G} (i) = H_i$ for each $i$. The probability that all the hypergraphs ${\cal G}_{m-1,q_i}'$ are tree-like is asymptotically~1. The probability of a collision is bounded above by $$\sum_{y < q < M y} \sum_{i,j=1}^r {\mathbb P}_x \left ( q \in {\rm supp}\, ({\cal G}_{m-1,q_i}') \cap {\rm supp}\, ({\cal G}_{m-1,q_j}') \right ) \, .$$ The probability that $q \in {\rm supp}\, ({\cal G}_{m-1,q_j}')$, conditional on $|{\cal G}_{m-1,q_j}'|$, is $O(|{\cal G}_{m-1,q_j}'| / \pi (y))$. This is true as well for $q_i$, and the two events are independent. Therefore, the probability of a collision is $$O \left ( ({\mathbb E}_x r^2) \frac{({\mathbb E}_x |{\cal G}_{m-1,y}|)^2}{\pi (y)} \right ) \, .$$ By Corollary~\ref{cor:size}, we obtain the upper bound $O(1 / \pi (y))$. Next, we claim that the conditional distribution of $H_i$ given ${\cal G}_{1,p}'$ is asymptotically equal to the unconditional distribution of ${\cal G}_{m-1 , q_i}'$. Indeed, ${\cal G}_{1,p}'$ is measurable with respect to the $\sigma$-field generated by the events $\{ S \in {\cal G} : p \in S \}$. This is independent of the events $\{ S \in {\cal G} : p \notin S \}$, so conditional on ${\cal G}_{1,p}'$, $H_i$ has the distribution of ${\cal G}_{m-1,q_i}''$ where the double prime means that all hyperedges containing $p$ were excluded at every step of the construction. We already know that ${\cal G}_{m-1,q_i}''$ is asymptotically distributed as ${\cal G}_{m-1,q_i}'$, verifying the claim. Moreover, the same argument shows that the joint conditional law of $(H_1 , \ldots , H_r)$ given ${\cal G}_{1,p}'$ is asymptotically the product of the laws for each $i \leq r$. Finally, by the induction hypothesis, the unconditional distribution of ${\cal G}_{m-1 , q_i}'$ is asymptotically that of ${\cal H}_{m-1 , q_i/y}$. Therefore, since with probability approaching~1 all the graphs ${\cal G}_{m-1,q_i}'$ are tree-like and there are no collisions, we have shown what we need. $\hfill \Box$

\begin{lemma} \label{lem:conv 4} As $x \to \infty$, the distance in the weak metric between the random marked hypergraph $y^{-1} ({\cal G}_{m,p}^{M,j,x} , U_{m,p})$ and the random marked hypergraph $({\cal H}_{m,p/y}^{M,j/J_0} , \tilde{U}_{m,p/y})$ goes to zero, uniformly as $M$ and $j/J_0$ vary over bounded intervals and $y < p < M y$. More generally, the distance between an $n$-tuple of marked graphs $$y^{-1} \left ( {\cal G}_{m,p_i}^{M,j,x} , U_{m,p_i} \right )_{1 \leq i \leq n}$$ and the product of the laws of the random marked hypergraphs $({\cal H}_{m,p_i/y}^{M,j/J_0} , \tilde{U}_{m,p_i/y})$ goes to zero uniformly as $M$ and $j/J_0$ vary over bounded intervals and $y < p_1 , \ldots , p_n < M y$. \end{lemma}

\noindent{\bf Proof.} Observe that the conditional probabilities of $q \in U_{m,p}$ given ${\cal G}_{m,p}$ are independent and given by $1 - e^{-\eta y / q}$ as $q$ varies over ${\rm supp}\, ({\cal G}_{m,p})$.
This is true since, in the limit ($x,y \to \infty$ and $J = \eta x \pi(y)/\psi(x,y)$), the random variables $|\{j \leq J : [X_j] = \{q_i\}\}|$ for fixed $q_1, q_2, \ldots , q_r$ are independent Poisson random variables with mean $\sim \eta y/q_i$. And once it is known, in the limit, that the events $\{q\in U_{m,p}\}$ given ${\cal G}_{m,p}$, with $q$ running over ${\rm supp}\,({\cal G}_{m,p})$, are independent, each occurring with probability $1 - e^{-\eta y/q}$, then the first part of the lemma is proven; the second part is analogous. $\hfill \Box$

\noindent{\bf Proof of Theorem~\ref{th:chi}.} Begin with (\ref{eq:chi}). For any marked graph $(G,U)$, $\chi (G,U)$ depends only on the marked hypergraph structure of $(G,U)$ and not the names of the vertices. Because the topology on graph structure is discrete, $\chi$ is continuous. The weak topology on measures is characterized by convergence of integrals of bounded continuous functions, so (\ref{eq:chi}) follows from the first conclusion of Lemma~\ref{lem:conv 4}. For any fixed bounded continuous function, such as $\chi$, the difference in the integrals is bounded as a function of the distance between the measures, whence the uniform convergence in Lemma~\ref{lem:conv 4} transfers to the required uniform convergence in (\ref{eq:chi}). The proof of (\ref{eq:chi square}) is identical, using the $n$-tuple convergence in Lemma~\ref{lem:conv 4} in place of convergence of the single marked hypergraph. $\hfill \Box$

\subsection{Computation of $\theta$} \label{ss:4.5} We begin by computing $\theta_m (\rho)$. Recall the definition of the functions $\gamma_{m,M,k} (u)$ in (\ref{eq:gamma}).

\begin{lemma} \label{lem:theta} $$\theta_{m,k}^{M , \eta} (\rho) = 1 - e^{- \gamma_{m,M,k} (\eta) / \rho} \, .$$ \end{lemma}

\noindent{\bf Proof.} The quantities $M, \eta$ and $k$ will be fixed throughout the proof, so we write $\theta_m$ for $\theta_{m,k}^{M , \eta}$. The proof is by induction on $m$. By definition, $1 - \theta_0 (\rho)$ is the probability that $\rho \notin \tilde{U}_{0,\rho}$, which is $e^{-\eta/\rho}$ by construction. This establishes the result for $m=0$. Now suppose that the result holds for some $m \geq 0$. The set of hyperedges $S \in {\cal H}_{1,\rho}$ is, by construction, a Poisson process with intensity $\nu_{\rho}$. The complement of $\chi_{m+1,\rho}$ is the intersection of the event $\rho \notin \tilde{U}_{m,\rho}$ with the event that for every hyperedge $S \in {\cal H}_{1,\rho}$ of cardinality between~2 and~$k$, there is some $\tau \in S\setminus \{ \rho \}$ for which the corresponding event $\chi_{m,\tau}$ fails. We have, by induction, \begin{equation} \label{eq:pois decomp} 1 - \theta_{m+1} (\rho) = e^{-\eta/\rho}\ {\mathbb E} \left [ \prod_{S \in {\cal H}_{1,\rho}} \left ( 1 - \prod_{\tau \in S \setminus \{ \rho \}} \theta_{m} (\tau) \right ) \right ]\,, \end{equation} where the outer product is over hyperedges of cardinality up to~$k$ and the inner product over $\tau \in S \setminus \{ \rho \}$ is taken to be~1 if $S = \{ \rho \}$.
If $f : \Xi \to [0,1]$ is any function on a space $\Xi$ on which is defined a Poisson process with intensity $\nu$, then the expected product of $f$ at points of the Poisson process is given by $$\exp \left [ \int (f(\xi) - 1) \, d\nu (\xi) \right ] \, .$$ Applying this to (\ref{eq:pois decomp}) with $\nu = \nu_\rho$ and $f (S) = 1 - \prod_{\tau \in S \setminus \{ \rho \}} \theta_{m} (\tau)$ gives $$\log (1 - \theta_{m+1} (\rho)) = - \frac{\eta}{\rho} - \int \prod_{\tau \in S \setminus \{ \rho \}} \theta_{m} (\tau) \; d\nu_\rho (S) \, .$$ Break up the integral according to $|S|$. Recall that for $j \geq 2$, the law of $S \setminus \{ \rho \}$ on $\{ |S| = j \}$ is $\eta \mu_{j-1} / \rho$. We may incorporate $-\eta / \rho$ as the $j=1$ term if we define $\mu_0$ to be a point mass of~1 at the empty set and the empty product to be~1. These substitutions yield $$\log (1 - \theta_{m+1} (\rho)) = - \frac{\eta}{\rho} \, \sum_{j'=0}^{k-1} \int \prod_{\tau \in S'} \theta_{m} (\tau) \; d\mu_{j'} (S') \, .$$ Here the primes are introduced to clarify the changes of variable $j' = j-1, S' = S \setminus \{ \rho \}$. We now drop the primes and observe that $\mu_j$ is $1/(j!)$ times a product measure. Therefore the integral of the product factors, yielding \begin{eqnarray*} \log (1 - \theta_{m+1} (\rho)) & = & - \frac{\eta}{\rho} \, \sum_{j=0}^{k-1} \frac{1}{j!} \left ( \int_1^M \theta_{m} (\tau) \frac{d\tau}{\tau} \right )^j \\[1ex] & = & - \frac{\eta}{\rho} \exp_k \left ( \int_1^M \theta_{m} (\tau) \frac{d\tau}{\tau} \right ) \, . \end{eqnarray*} Using the induction hypothesis again we substitute $1 - e^{- \gamma_{m,M,k} (\eta) / \tau}$ for $\theta_{m} (\tau)$ to arrive at $$\log (1 - \theta_{m+1} (\rho)) = - \frac{\eta}{\rho} \exp_k \left ( \int_1^M \left ( 1 - e^{- \gamma_{m,M,k} (\eta) / \tau} \right ) \frac{d\tau}{\tau} \right ) \, .$$ Changing variables to $t = 1/\tau$, so that $dt / t = - d\tau / \tau$, yields \begin{eqnarray*} \log (1 - \theta_{m+1} (\rho)) & = & - \frac{\eta}{\rho} \exp \left ( \int_{1/M}^1 \frac{1 - e^{- t \gamma_{m,M,k} (\eta)}}{t} \, dt \right ) \\ & = & - \frac{\eta}{\rho} A_M (\gamma_{m,M,k} (\eta)) \, . \end{eqnarray*} The right-hand side is equal to $- (1 / \rho) \gamma_{m+1,M,k} (\eta)$, completing the induction. $\hfill \Box$

\begin{lemma} \label{lem:theta converge} Fix any $\eta > \eta_*$. Then $$\theta_{m,k}^{M,\eta} (\rho) \to 1\,,$$ uniformly over $\rho$ in any bounded interval $[1,L]$ as $m , M , k \to \infty$. \end{lemma}

\noindent{\bf Proof.} The function $z / \exp (A(z))$ is the real analytic function $$ \exp \left( \int_1^z \frac{ e^{-u}}{u} \, du - \int_0^1 \frac{1 - e^{-u}}{u} \, du\right) = \exp (- \gamma - \Gamma (0,z)) , $$ where $\Gamma (0,z):=\int_z^\infty e^{-t} \frac{dt}t$. By (\ref{gamma}), this evidently increases to $\eta_*$ as $z \uparrow \infty$. It follows that for $\eta > \eta_*$, if we choose any positive $\delta < (\eta / \eta_*) - 1$, then $$\frac{\eta}{1 + \delta} > \eta_* > \frac{z}{e^{A(z)}}\,,$$ which implies that $$\eta e^{A(z)} > (1 + \delta) z\,,$$ for all $z > 0$. Applying this to (\ref{eq:gamma}) with $z = \gamma_{m,\infty,\infty}(\eta)$ leads to $$\gamma_{m+1,\infty,\infty} (\eta) > (1 + \delta) \gamma_{m,\infty,\infty} (\eta)\,,$$ which, in turn, leads inductively to $$\gamma_{m,\infty,\infty} (\eta) > \eta_* (1 + \delta)^{m-1} \, .$$ Since $\gamma$ is increasing in all its arguments, this is true for all greater $\eta$ as well.
Now, given $L , \epsilon > 0$, choose $m$ sufficiently large so that $\gamma_{m,\infty,\infty} (\eta) > L \log(1/\epsilon)$. The function $\gamma$ is continuous in $M$ and $k$ at infinity, so we may choose $M$ and $k$ such that $\gamma_{m,M,k} (\eta) > L \log (1/\epsilon)$. It follows from Lemma~\ref{lem:theta} that $$\theta_{m,k}^{M,\eta} (\rho) = 1 - e^{- \gamma_{m,M,k} (\eta) / \rho} > 1 - e^{- \log (1/\epsilon)} = 1 - \epsilon\,,$$ for $1 \leq \rho \leq L$, proving the lemma. $\hfill \Box$

\subsection{Proof of main theorems} \label{ss:4.6}

\noindent{\bf Proof of Theorem~\ref{main_theorem}.} Fix $\epsilon > 0$. The first step is to use Lemma~\ref{lem:theta converge} to pick $m,M,k$ such that $$\theta_{m,k}^{M , \eta_* + \epsilon} (\rho) > \frac{3}{4} \;\;\; \mbox{ for all } \;\; 1 \leq \rho \leq L := \exp \left ( \frac{3}{\epsilon} \right ) \, . $$ Take $M$ to be larger if necessary so that we may assume $M \geq L$. We deduce from the last displayed estimate with $\rho=p/y$ and from Theorem~\ref{th:chi} that, for any prime $p$ in the interval $(y,My)$ and for $x$ sufficiently large, we have $${\mathbb P}_x \left ( \chi_{m,p}^{M , (\eta_* + \epsilon) J_0} \right ) > \frac{3}{4} \, .$$ Now let $Y$ be the number of $j$ in the interval $I := [(\eta_* + \epsilon) J_0 , (\eta_* + 2 \epsilon) J_0]$ such that $[X_j] = \{ p \}$ for some prime $p$ with $y < p < M y$ and $\chi_{m,p}^{M,j-1}$ holds. Write $Y = \sum_{j \in I} Y_j$ where $Y_j$ is~1 if $[X_j] = \{ p \}$ for some prime $y < p < M y$ for which $\chi_{m,p}^{M,j-1}$ holds, and zero otherwise. We compute a lower bound on ${\mathbb E}_x Y$ as follows. The event $\chi_{m,p}^{M,j-1}$ is independent of the event $[X_j] = \{ p \}$. By (\ref{eq:unif}) and the definition of $J_0$ we have $\psi (x/p , y) / \psi(x,y) \sim (\log y) / p$. Hence, \begin{eqnarray*} {\mathbb E}_x Y & = & \sum_{j \in I} \sum_{y < p < M y} {\mathbb P}_x ([X_j] = \{ p \} ) \, {\mathbb P}_x (\chi_{m,p}^{M,j-1}) \\[1ex] & = & \sum_{j \in I} \sum_{y < p < M y} \frac{\psi (x/p , y)}{x} {\mathbb P}_x (\chi_{m,p}^{M,j-1}) \\[1ex] & \geq & \frac{1}{2} \sum_{j \in I} \sum_{y < p < M y} \frac{\pi (y)}{J_0} \frac{\log y}{p} \end{eqnarray*} for $x$ sufficiently large. By the prime number theorem, $$\sum_{y < p < M y} (\log y) / p \sim \log M \geq \log L = 3 \epsilon^{-1} \, .$$ The outer sum has at least $\epsilon J_0$ terms, hence \begin{equation} \label{eq:first moment} {\mathbb E}_x Y \geq \frac{1}{2} (\epsilon J_0) \frac{\pi (y)}{J_0} (3 \epsilon^{-1}) = \frac{3}{2} \pi (y) \, . \end{equation} In Lemma~\ref{lem:2mm} below, we will prove the second moment bound $${\rm Cov}\, (Y_i , Y_j) = o \left ( \frac{\pi (y)^2}{J_0^2} \right ) \, .$$ Using this lemma, \begin{eqnarray*} {\rm Var}\, (Y) & = & \sum_{i , j \in I} {\rm Cov}\, (Y_i , Y_j) \\ & \leq & {\mathbb E}_x Y + 2 \sum_{i,j \in I, i < j} {\rm Cov}\, (Y_i , Y_j) \\[2ex] & = & o(\pi (y)^2) \, . \end{eqnarray*} Together with (\ref{eq:first moment}), this implies that ${\mathbb P}_x (Y > \pi (y)) \to 1$. Recall from (\ref{eq:count}) that this implies more than $\pi (y)$ linear dependences among the classes $[X_j]$ with $j \leq (\eta_* + 2 \epsilon) J_0$. Since $\epsilon > 0$ was arbitrary, this completes the proof of the theorem, modulo the lemma. $\hfill \Box$

\noindent{\bf Proof of Theorem~\ref{th:quant}.} In the previous proof, we chose $M$ to be absurdly large, which allowed us to use only those $j$ in the interval $[(\eta_* + \epsilon) J_0 , (\eta_* + 2 \epsilon) J_0]$.
We can get much more reasonable values of $m,M$ and $k$ if we are willing to let $\eta$ be a little bigger and to use all the values of $j$ up to $\eta J_0$. The computations are in fact no harder (although the required convergence lemmas did involve more work in the previous sections). Fix $\eta , m , M$ and $k$ satisfying the inequality in the hypothesis of the theorem. Let $$Z := \sum_{j=1}^J Z_j := \# \left \{ j \leq J : \chi_{m,k,p}^{M , j-1} \mbox{ occurs for all } p \in [X_j] \right \} \, .$$ Again, Lemma~\ref{lem:2mm} implies ${\rm Var}\, (Z) = o(\pi (y)^2)$. If we are able to show \begin{equation} \label{eq:1mm b} \liminf_{x \to \infty} \frac{{\mathbb E}_x Z}{\pi (y)} > 1 \, , \end{equation} then we will have ${\mathbb P}_x (Z > \pi (y)) \to 1$, which implies more than $\pi (y)$ linear dependences, thus establishing the theorem. To prove (\ref{eq:1mm b}), break down ${\mathbb E}_x Z_j$ according to the value of $[X_j]$ and use the independence of $X_j$ from $\chi_{m,k,p}^{M,j-1}$. This gives \begin{eqnarray*} {\mathbb E}_x Z_j & = & \sum_S {\mathbb P}_x ([X_j] = S) \, {\mathbb P}_x \left ( \bigcap_{p \in S} \chi_{m,k,p}^{M,j-1} \right ) \\[1ex] & = & \sum_S \frac{\psi (x / \prod_{p \in S} p , y)}{x} \, {\mathbb P}_x \left ( \bigcap_{p \in S} \chi_{m,k,p}^{M,j-1} \right ) \\[1ex] & \sim & \sum_S \frac{(\log y)^{|S|}}{\prod_{p \in S} p} \frac{\psi (x,y)}{x} \prod_{p \in S} \theta_{m,k}^{M,j/J_0} (p / y) \, . \end{eqnarray*} The final equality above used both equation (\ref{eq:unif}) and the formula (\ref{eq:chi square}) of Theorem~\ref{th:chi}. Continuing, we use the identity $\psi (x,y) / x = \pi (y) / J_0$, factor out this term, and rewrite the summand as a product: $${\mathbb E}_x Z_j \sim \frac{\pi (y)}{J_0} \sum_S \prod_{p \in S} \left ( \frac{\log y}{p} \theta_{m,k}^{M,j/J_0} (p/y) \right ) \, .$$ Let $B$ be any set and $\{ z_p : p \in B \}$ be any positive real numbers with finite sum. Let ${\cal B}$ denote the set of finite subsets of $B$. Then $$\sum_{S \in {\cal B}} \prod_{p \in S} z_p = \prod_{p \in B} (1 + z_p) \to \exp (\sum_{p \in B} z_p)\,,$$ as $\max_{p \in B} z_p \to 0$. Using this identity, we obtain \begin{eqnarray*} {\mathbb E}_x Z_j & \sim & \frac{\pi (y)}{J_0} \exp \left ( \frac{1}{y} \sum_{y < p < M y} \frac{\log y}{p/y} \theta_{m,k}^{M,j/J_0} (p/y) \right ) \\[1ex] & \sim & \frac{\pi (y)}{J_0} \exp \left ( \int_1^M \frac{1}{t} \theta_{m,k}^{M,j/J_0} (t) \, dt \right )\,, \end{eqnarray*} by the prime number theorem. The asymptotic equivalence is uniform in $j \leq \eta J_0$. Summing from $j=1$ to $\eta J_0$ now gives \begin{eqnarray*} \frac{{\mathbb E}_x Z}{\pi (y)} & \sim & \int_0^\eta \exp \left ( \int_1^M \frac{1}{t} \theta_{m,k}^{M,u} (t) \, dt \right ) \; du \\[1ex] & = & \int_0^\eta \frac{\gamma_{m+1} (u)}{u} \, du \, . \end{eqnarray*} By the hypothesized inequality, the right-hand side is greater than~1, which establishes (\ref{eq:1mm b}) and completes the proof of the theorem. $\hfill \Box$

\begin{lemma} \label{lem:2mm} Fix a finite real $M > 1$ and $\eta > 0$ and an integer $m \geq 1$. Fix $1 \leq k \leq \infty$. Then $${\rm Cov}\, (Z_i , Z_j) = o \left ( \frac{\pi (y)^2}{J_0^2} \right )$$ for all $1 \leq i < j \leq \eta J_0$. The same is true with ${\rm Cov}\, (Y_i , Y_j)$ in place of ${\rm Cov}\, (Z_i , Z_j)$. \end{lemma}

\noindent{\bf Proof.} Both arguments are the same, so we prove this just for ${\rm Cov}\, (Z_i , Z_j)$. It suffices to show that $${\mathbb E}_x (Z_i \cdot Z_j) \sim ({\mathbb E}_x Z_i) \cdot ({\mathbb E}_x Z_j) \, ,$$ uniformly for $1 \leq i < j \leq J$.
Conditioning on $[X_i]$ and $[X_j]$, we see that this is the expectation of $${\mathbb E}_x(Z_i | [X_i] , [X_j]) \cdot {\mathbb E}_x (Z_j | [X_i] , [X_j]) \, . $$ The sets $[X_i]$ and $[X_j]$ are disjoint with probability going to~1, so it suffices to show that, conditionally on disjoint values of $[X_i]$ and $[X_j]$, the variables $Z_i$ and $Z_j$ are asymptotically independent. We have seen in Lemma~\ref{lem:conv 3} that the collection of hypergraphs ${\cal G}_{m,k,p}^{M,i-1,x}$ for $p \in [X_i]$ and ${\cal G}_{m,k,p}^{M,j-1,x}$ for $p \in [X_j]$ are disjoint and tree-like with probability going to~1, and asymptotically independent. The same is true of the marked hypergraphs, by Lemma~\ref{lem:conv 4}. Since $Z_i$ is a bounded function of $[X_i]$ and the marked hypergraphs $({\cal G}_{m,k,p}^{M,i-1,x} , U_{m,k,p}^{M,i-1,x})$ for $p \in [X_i]$, and likewise for $Z_j$, we have the desired conditional independence. $\hfill \Box$

\section{Implications for Factoring Algorithms} \label{Algorithms} In factoring algorithms we need to find a linear dependence mod 2 in our matrix of exponents. We expect that the best algorithms known, due to Wiedemann or Lanczos (see section 6.1.3 of \cite{CP}), take time $$ \sim C \frac{y^2}{\log y\log\log y} $$ for a positive constant $C$, when we use the primes up to $y$ in our ``factor base''. If we were to take $y=y_0$ then this number would be far larger than $J_0$ and so would dominate the running time of the algorithm. Hence, to optimize, we select $y=y_1$, which is far smaller, chosen to equalize the running times of the two main parts of the algorithm, so that \begin{equation} \label{optimize} c \frac{\pi(y)}{\Psi(x,y)/x} \sim \frac{y^2}{\log y\log\log y} \end{equation} for an appropriate constant $c>0$. One can show that one then has $$ y_1 = y_0^{1 - (1+o(1))/\log\log x} , $$ with expected running time $$ J_0\ y_0^{(1+o(1))/(\log\log x)^2} $$ (see \cite{CGPT}). The proofs in the previous section work as well for $y_1$ as for $y_0$. In particular we can determine the speed-up for various choices of the parameters (though always with $m=\infty$, see \cite{CGPT} for more details): \vskip .25in \centerline{\begin{tabular}{| c | c c c | } \hline $k$ & $M=\infty$ & $M=100$ & $M=10$ \\ \hline $0$ & 1 & 1 & 1 \\ $1$ & .7499 & .7517 & .7677 \\ $2$ & .6415 & .6448 & .6745 \\ $3$ & .5962 & .6011 & .6422 \\ $4$ & .5764 & .5823 & .6324 \\ $5$ & .567 & .575 & .630 \\ \hline \end{tabular}} \bigskip \centerline{\tt The value of $\eta$ such that there are $\sim \pi(y)$ } \centerline{\tt pseudosmooths amongst the $a_j$ with $j\leq \eta \pi(y)x/\Psi(x,y)$.} \vskip .35in So what effect will this reduction in the number of $a_j$ examined have on the actual running time? Suppose that we replace $c$ in (\ref{optimize}) by $\eta c$ and solve (\ref{optimize}) to determine the new optimal choice $y=y_\eta$; the new running time is then given by (\ref{optimize}) with this value of $y$. Now finding this solution is tantamount to finding a solution to $h(u_\eta)=\log (c\eta\log\log y)$ where $h(u):= \frac 1u \log x+\log \rho(u)$. We have $h'(u)= -1-(1+o(1))/\log u$ and so $u_1-u_\eta=\log \eta (1-(1+o(1))/\log u)$.
Our running time therefore changes by a factor of \begin{eqnarray*} \sim x^{\frac 2{u_\eta}-\frac 2{u_1}} & = & \exp \left( \frac {2(u_1-u_\eta)\log x}{u_1u_\eta} \right) = \exp \left( \frac {2\log \eta \log x}{u_1^2} \left(1-\frac{1+o(1)}{\log u} \right) \right) \\ & = & \exp \left( \log \eta ( \log\log x + \log\log\log x-\log 2-4+o(1)) \right) \\ & = & \left( \frac {2e^4+o(1)}{ \log x \log\log x } \right)^{ \log (1/\eta)}\,, \end{eqnarray*} since $\log^2 y_1 = \log^2 L(x) \left( 1 + \frac{\log\log\log x-\log 2-4+o(1)}{\log\log x} \right)$. Data on the effect of large prime variations that has been gathered from running factoring algorithms seems rather different from what we have obtained here. One reason for this is that, in our analysis, the variations in $M$ and $k$ simply affect the number of $a_j$ being considered, whereas in reality they affect not only the number of $a_j$ being considered but also several other important quantities: for instance, the amount of sieving that needs to be done, and the amount of data that needs to be ``swapped'' (typically one saves the $a_j$ with several large prime factors to disk, or somewhere else suitable for a lot of data). It is an interesting problem to analyze the construction of such programs properly, so as to incorporate the results that we have obtained and to get predictions that would help with the choice of parameters in factoring algorithms.
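As an illustration only (not part of the paper), the entries in the table above can in principle be approximated by iterating the recursion from the proof of Lemma~\ref{lem:theta} and locating the $\eta$ at which the criterion $\int_0^\eta \gamma_{m+1}(u)/u \, du$ from the proof of Theorem~\ref{th:quant} reaches~1. The sketch below assumes the starting value $\gamma_0(u)=u$ and crude quadrature parameters, and takes $\exp_k$ to be the truncated exponential $\sum_{j=0}^{k-1} z^j/j!$ as in that proof; it should be checked against the actual definition of $\gamma_{m,M,k}$ in (\ref{eq:gamma}) rather than read as a reproduction of the table.
\begin{verbatim}
# Illustrative sketch (author's assumptions noted above): iterate
# gamma_{m+1}(u) = u * exp_k( int_1^M (1 - exp(-gamma_m(u)/tau)) dtau/tau )
# from gamma_0(u) = u, then bisect for the eta with int_0^eta gamma(u)/u du = 1.
import math

def exp_k(z, k):
    # truncated exponential sum_{j=0}^{k-1} z^j / j!
    return sum(z ** j / math.factorial(j) for j in range(k))

def gamma(u, M, k, iters=30, npts=200):
    # fixed-point iteration; the integral uses the midpoint rule in log(tau)
    g, log_m = u, math.log(M)
    for _ in range(iters):
        step = log_m / npts
        integral = sum((1.0 - math.exp(-g / math.exp((i + 0.5) * step))) * step
                       for i in range(npts))
        g = u * exp_k(integral, k)
    return g

def criterion(eta, M, k, npts=40):
    # int_0^eta gamma(u)/u du by the midpoint rule
    h = eta / npts
    return sum(gamma((i + 0.5) * h, M, k) / ((i + 0.5) * h) * h for i in range(npts))

def solve_eta(M, k, lo=0.4, hi=1.0, steps=12):
    # bisection: criterion is increasing in eta, so find where it crosses 1
    for _ in range(steps):
        mid = 0.5 * (lo + hi)
        if criterion(mid, M, k) < 1.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

print(solve_eta(M=100, k=2))  # compare, cautiously, with the table entry for k=2, M=100
\end{verbatim}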
COVID-19: Halt vaccine booster shots until end of year, head of World Health Organisation says

Tedros Adhanom Ghebreyesus says he is "appalled" by comments from pharmaceutical manufacturers who claim vaccine supplies are high enough to allow for both booster jabs and vaccinations in countries in need of shots.

Samuel Osborne, News reporter @samuelosborne93, Wednesday 8 September 2021 18:09, UK

Image: A decision is imminent on who is likely to get booster jabs in the UK this autumn. File pic

Wealthier countries with large supplies of coronavirus vaccines should refrain from offering booster shots until the end of the year, the head of the World Health Organisation has said. Tedros Adhanom Ghebreyesus said he was "appalled" by comments from pharmaceutical manufacturers who claim coronavirus vaccine supplies are high enough to allow for both third jabs and vaccinations in countries in need of shots. "I will not stay silent when companies and countries that control the global supply of vaccines think the world's poor should be satisfied with leftovers," the WHO director-general said.

Image: The WHO chief previously called for a "moratorium" on booster shots until the end of September, but wealthier countries like the UK are considering giving third jabs to vulnerable people.

He previously called for a "moratorium" on COVID booster shots until the end of September, but wealthy countries including the UK, US, France, Germany and Spain have either begun or are considering giving booster jabs to vulnerable people. Mr Tedros said he received a message of "clear support" from health ministers at a meeting of the influential Group of 20 countries this month for a commitment to help hit a WHO target for all countries to vaccinate at least 40% of their people by the end of the year. "A month ago, I called for a global moratorium on booster doses, at least until the end of September to prioritise vaccinating the most at risk people around the world who are yet to receive their first dose," Mr Tedros said. "There has been little change in the global situation since then." "So today, I'm calling for an extension of the moratorium until at least the end of the year to enable every country to vaccinate at least 40% of its population," he said.

The WHO says 5.5 billion jabs have been administered so far, but 80% of those have been in upper- and middle-income countries. While wealthier countries have offered to donate one billion doses to other countries, under 15% of those have "materialised," Mr Tedros said. "We don't want any more promises. We just want the vaccines," he added.

Booster jab programme expected this month

It comes as Health Secretary Sajid Javid said he is "confident" a booster programme can begin in the UK this month, saying he is awaiting advice on who should be eligible. Last week, Prime Minister Boris Johnson appeared to confirm a booster rollout will begin later this month, saying older people are the priority as autumn and winter approach. However, the Joint Committee on Vaccination and Immunisation (JCVI) is yet to provide a recommendation.
Image: Booster jabs would work like the annual flu jab, which helps protect vulnerable people from getting the virus during the winter months Mr Javid said he is "very confident" there will be a booster programme, but told Sky News: "In terms of who actually gets it and when, we're waiting for final advice which could come across, certainly, in the next few days from the JCVI." He added: "I'm confident that we can start the booster programme this month." Earlier this week the vaccines minister, Nadhim Zahawi, told MPs a booster programme is "ready to go" as soon as the scientific advice for the scheme is signed off.
Source: https://www.sawaal.com/blood-relations-questions-and-answers/pointing-out-to-a-lady-a-girl-said-she-is-the-daughter-in-law-of-the-grandmother-of-my-fathers-only-_2853

Q: Pointing out to a lady, a girl said, "She is the daughter-in-law of the grandmother of my father's only son." How is the lady related to the girl?
A) Sister-in-law B) Mother C) Aunt D) Can't be determined
Explanation: The girl's father's only son is the girl's brother. The daughter-in-law of his grandmother could be the girl's mother, or an uncle's wife (an aunt), so the relation cannot be determined.

Q: Avant points at a picture and says, "She is the second daughter of my father's elder brother". Avant and the girl in the photograph are ..........
A) brothers B) friends C) siblings D) cousins

Q: There is a family of five members: K, L, M, N and O. Among them, there is one married couple. O is unmarried and is the brother of K. [name missing in source] is the sister of O. M is the only married female and the mother of N. L and O are the only males in the group. Who is the father of K?
A) O B) L C) M D) K

Q: A + B means 'A is the mother of B'; A - B means 'A is the husband of B'; A x B means 'A is the son of B'; A ÷ B means 'A is the daughter of B'. If [relation chain missing in source], then how is X related to Z?
A) Daughter B) Wife C) Son D) Husband

Q: Pointing to the photograph of Sanchi, Nitin said, "Her mother's father's son's wife is my mother-in-law's only daughter". How is Nitin related to Sanchi's mother?
A) Paternal uncle B) Paternal grandfather C) Maternal uncle D) Brother

Q: A + B means 'A is the mother of B'; A − B means 'A is the brother of B'; A × B means 'A is the father of B'; A ÷ B means 'A is the daughter of B'. If [relation chain missing in source], then which of the following statements is NOT correct?
A) J is daughter of P. B) P is paternal uncle of R. C) K is husband of S. D) Y is son of S.

Q: A - B means 'A is the mother of B'; A x B means 'A is the sister of B'; A / B means 'A is the daughter of B'. Which of the following expressions means 'U is the daughter of Q'?
A) Q - Z / U x K B) Q - Z x U / K C) K - Z / U x Q D) K - U / Z x Q
Answer: B) Q - Z x U / K

Q: A + B means 'A is the husband of B'; A - B means 'B is the sister of A'; A X B means 'A is the mother of B'; A ÷ B means 'A is the mother of B'. If P + R × T − Q ÷ S + U, then how is P related to S?
A) Maternal grandfather B) Father-in-law C) Paternal grandfather D) Uncle

Q: A + B means 'A is the father of B'; A − B means 'A is the sister of B'; A × B means 'A is the brother of B'; A ÷ B means 'A is the daughter of B'. If R + S × T − V ÷ U, then how is S related to U?
A) Son B) Daughter C) Brother D) Husband
Q: Questions about using the Background Intelligent Transfer Service

I have a few questions regarding the use of the Background Intelligent Transfer Service (BITS) as a scheme to add auto-updating capabilities to a Windows application.

1 - If a user disables the Windows Update process, does Windows disable BITS?

2 - How does BITS interact with firewalls (hardware and software)? For example, when I install a program FOO.exe and have it connect to the network, the firewall prompts me, asking if I want to allow Foo.exe to access the network. Is BITS prompted in this manner, and if so, how is the prompt described?

3 - Can BITS be disabled/turned off via GROUP POLICY at the domain level?

A: 1 - No. BITS is a separate service which the Windows Update service DEPENDS on.

2 - BITS is an internal component of the OS, and as such it is inherently trusted by software firewalls.

3 - Yes. BITS as a service is under the control of GROUP POLICY for the domain.
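A quick way to check the first and third points on a given machine (not part of the original answer; the only assumptions are the standard service names BITS and wuauserv and the stock sc tool) is to query both services directly, for example from Python:

    import subprocess

    def service_info(name):
        # "sc query" reports the current run state; "sc qc" reports the
        # configuration, including start type and the DEPENDENCIES field.
        for verb in ("query", "qc"):
            result = subprocess.run(["sc", verb, name], capture_output=True, text=True)
            print(result.stdout)

    service_info("BITS")      # Background Intelligent Transfer Service
    service_info("wuauserv")  # Windows Update

If group policy has disabled the BITS service at the domain level, the start type reported by sc qc will reflect that; the exact wording of any firewall prompt (question 2) depends on the firewall product, so this check cannot answer it.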
Beaumotte-lès-Pin (literally "Beaumotte near Pin") is a commune in the Haute-Saône department in the region of Bourgogne-Franche-Comté in eastern France.

See also: Communes of the Haute-Saône department
# A 14th century proof of the divergence of the harmonic series

Nicole d'Oresme was a philosopher from 14th century France. He's credited with finding the first proof of the divergence of the harmonic series. In other words, he authored the first proof we know of for the fact that $1 + \frac{1}{2} + \frac{1}{3} + \frac{1}{4} + ...$ is infinite.

His proof is very simple, so much so that I think someone with more talent for educating than me could probably teach it to middle school students. I think that's really cool.

Formally, the sum we're talking about is the sum of $1/k$ for all positive integers $k$. What d'Oresme did was prove that for any number $C$, you can find a finite $k$ so that the sum of the first $k$ terms (i.e. $1 + \frac{1}{2} + \frac{1}{3} + ... + \frac{1}{k-1} + \frac{1}{k}$) in the original sum is larger than $C$. If the first $k$ terms sum to more than $C$, then the entire sum must be larger than $C$ as well (because there are no negative numbers in the sum). That means the sum is larger than any finite number and, therefore, it has to be infinite.

But how do you find the right $k$ that will make your sum large enough? Well, d'Oresme used the following fact: if you take all the $1/k$ terms with $k \geq n$ and $k < 2n$, then you have $n$ numbers that are all larger than $\frac{1}{2n}$. The sum of all $n$ of them must be larger than $\frac{n}{2n} = \frac{1}{2}$.

The above proves that the first number (i.e. with $k \geq 1$ and $k < 2 \cdot 1$) has to be larger than $\frac{1}{2}$. The terms from $2$ to $3$ (i.e. $k \geq 2$, $k < 4$) also sum to more than $\frac{1}{2}$. The numbers from $4$ to $7$ also sum to more than $\frac{1}{2}$. In other words, the terms from any power of $2$ up to (but not including) the next power of $2$ sum to more than $\frac{1}{2}$. It's easy to see from this that the first $2^{2C}$ numbers in the series sum to something larger than $C$.

In other words, a $k$ such that the sum of the first $k$ terms is higher than $C$ is $k = 2^{2C}$. And that, in a nutshell, is Nicole d'Oresme's proof of the divergence of the harmonic series.

Nicole d'Oresme studied a bunch of cool topics. I think he, as pretty much all other medieval scientists, is an under-appreciated historical figure. I like reading about medieval scientists and their works, so I might post other interesting things from them in the future.

Do bear in mind that I'm not a historian, so any historical commentary I include in my posts about medieval scientists should be taken with a grain of salt.
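Since the whole argument boils down to one inequality per block of terms, it is easy to sanity-check numerically. Here is a small script (mine, not from the original post) that verifies, for a few values of $C$, that the partial sum of the first $2^{2C}$ terms really does exceed $C$:

    # Numerical check of d'Oresme's bound: the sum of the first 2**(2*C)
    # terms of the harmonic series exceeds C.
    def harmonic_partial_sum(n):
        return sum(1.0 / k for k in range(1, n + 1))

    for C in range(1, 8):
        n = 2 ** (2 * C)
        s = harmonic_partial_sum(n)
        print(f"C={C}: sum of first {n} terms = {s:.4f} (exceeds C: {s > C})")

Because the partial sums only grow like the logarithm of the number of terms, the bound $2^{2C}$ is roughly the right order: you really do need exponentially many terms to gain each extra unit of sum.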
← What is conversion therapy? A Complaint to the Equality and Human Rights Commission → Answering the conversion therapy consultation questions Posted on 31st October, 2021 by Clare Flourish under LGBT+, politics, trans A civil servant told me the point of a consultation was to quote the answers you liked, ignore the rest, and claim you were doing the will of the people. Another trick is to keep the questions so narrow you prevent answers you don't like, as here. The two problems are the ridiculous claim that conversion from cis to trans is a problem, which could be used to block treatment and support for trans people especially trans children, and the free pass for religious bigots. How to put those into the answers, with these questions? Truss's proposal to prevent treatment of trans people fits squarely under q16, and tangentially under q4 and q11. Religious bigots not getting away with it fits under qs 4 and 12. These are my answers and comments. You could use them as a template, or as a starting point. Please don't just copy them. Use your own words where possible, as answers which are mere copies will have less weight. It is worthwhile reading the consultation document before answering. There are also questions about personal experiences of conversion therapy, and I have described mine. I have now responded to the consultation. There is a limit to the emotional labour I can give to this. But I will keep this post updated if anyone has suggestions about how to respond. Please comment. The consultation runs until 10 December 2021. I was surprised when I saw the consultation pages. Not all the questions give a text box for answering. In particular, several questions on experience of conversion therapy give only multiple choice answers- I could say that it was imposed by "a person from a faith organisation or group", but not what happened. I could only tick a box saying it was "spiritual, religious or faith based activities (such as prayer healing)". Then they ask, "Do you agree or disagree that the Government should intervene to end conversion therapy in principle?" There was no text box, only multiple choice options from strongly agree to strongly disagree. "Why do you think this?" Because it has harmed me. However Liz Truss's suggestion that cis to trans conversion practices exist will make the proposals incredibly damaging to trans people. Genuine therapists seeking to help people explore their gender identity with no commitment either way will feel inhibited from helping trans people to realise their trans identity, in case they revert later under pressure of internal and external transphobia and complain about the therapist. In the consultation questions for everyone, again several only give multiple choice answers, from strongly agree to strongly disagree. The question numbers are from the consultation document, but are not repeated in the consultation itself. Q1. To what extent do you support, or not support, the Government's proposal for addressing physical acts of conversion therapy? Why do you think this? I wrote, Coupled with the suggestion that conversion from cis to trans is possible, is ever tried, and should be unlawful, the proposal of "physical acts" is deeply disturbing. There is intense pressure in British society against transition and in favour of detransition. Sometimes, people transition more than once- I am aware of people who have transitioned M-F-M-F-M, and F-M-F-M/nonbinary. 
People who detransition may be unable to accept that they have done so because of external pressure, and often complain against or sue those involved in their initial treatment. There is a risk that providing hormones or surgery to someone who claimed they were trans might be retrospectively construed as "physical acts of conversion therapy". Even if no legal action might take place, the threat of legal action could produce a significant chilling effect, making it more difficult for trans people to have the treatment we need. The answer to this problem is to drop the suggestion that anyone attempts to convert someone from cis to trans. There is no evidence for that whatsoever. Anyone prejudiced against LGB people is equally prejudiced against trans people. With the proposals' suggestion that religious attempts to convert LGBT+ people to straight and gender conforming should not be affected, the main "physical acts" which actually occur will remain lawful. They are the acts of religious bodies, in "healing" and "exorcism". I knelt before a priest who laid hands on my head and prayed over me to "heal" me. That was a physical act. It should be unlawful. I was twenty, but I was not capable of consenting to something so deeply harmful, just as I would not be capable of consenting to an assault. I wanted the treatment because my Christian upbringing had filled me with self-loathing and a desperate desire to be "normal". Freedom of religion should not include the freedom to harm others, even if those others do not realise at the time they are being harmed. Q2. The Government considers that delivering talking conversion therapy with the intention of changing a person's sexual orientation or changing them from being transgender or to being transgender either to someone who is under 18, or to someone who is 18 or over and who has not consented or lacks the capacity to do so should be considered a criminal offence. The consultation document describes proposals to introduce new criminal law that will capture this. How far do you agree or disagree with this? This question has only a multiple choice answer, from strongly agree to strongly disagree. It becomes meaningless: I "strongly disagree" with the proposals because religious bigots should not get excused. Someone else might strongly disagree because they think all trans are perverts and should be forcibly cured. I had drafted answers to questions 2 and 3 but there was no text box for either, so I put the drafted answers under question 4, "tell us what you think we have missed". That seems to say, "What else should we ban?" I was forced to argue they were going to ban things that should not be banned. Q3. How far do you agree or disagree with the penalties being proposed? This question only had multiple choice answers, from strongly agree to strongly disagree. The penalties are in paragraphs 42-44. They suggest magistrates could impose an unlimited fine or six months' imprisonment, and the Crown court five years' imprisonment. "Serious harm" to the victim would mean a Crown Court prosecution. Aggravating factors would include repetition, whether there was payment, the age of the victim and power dynamics between the coercer and victim. I don't desire severe punishment. I desire a clear offence, of religious or secular conversion from LGBT to cis/straight, which is likely to be prosecuted if it occurs, unlike rape. I desire no chance that anyone supporting LGBT people might fear being accused retrospectively of trying to convert them to LGBT. Q4. 
Do you think that these proposals miss anything? If yes, can you tell us what you think we have missed? At last, a text box. I wrote, This proposal seems to align with the High Court decision in Bell v Tavistock, which claimed that those under 16 could not consent to puberty blockers. However, the Court of Appeal restored the effect of the Gillick decision, that children could consent. Children can consent to treatment for gender dysphoria. The proposals would prevent such treatment, and should be changed. It should be recognised that no-one attempts to change someone from cis to trans, and no-one gives treatment necessary for a trans child or adult without being reasonably sure that the treatment is justified. Concerning lack of capacity to consent to trans treatment, this discriminates against mentally ill and disabled people. People of low intelligence might have their ability to consent to trans treatment questioned, but they understand because it is the treatment they need to align with their true nature. People suffering mental illness might have their ability to consent questioned, but the appalling stresses of living with the prejudice of British society against LGBT people may cause mental illness. The consultation has not defined conversion therapy. It has not said which practices might be criminal. A wide range of practices might be conversion practices- for example, preaching a sermon against homosexuality when there might be gay adults or children present. The following practices should be unlawful: Counselling by a professional, religious or lay person with the intent or effect of making an LGBT person celibate, straight, or conforming to the gender assigned at birth, whether an attempt to alter behaviour or underlying character. A religious person preaching a sermon or giving a talk saying that the religion prohibits gay sex, does not recognise transition, or demands conformity to particular gender roles, when there might be an LGBT or gender nonconforming person present. Any attempt to heal a person of being LGBT, or to exorcise a spirit making them LGBT. These are assaults, and it should not be possible to consent to them. However, any treatment given in good faith to help a person transition should always be lawful, even if that person later detransitions. There is no parallel whatsoever in these situations. They are completely different. You claim in para 44 that an adult should be able to consent to conversion therapy. I would have believed that I consented to being prayed over. I felt desire for it. I desired it because I had been taught to hate myself throughout childhood. Conversion therapy is a harm, and no-one can consent to it. This gives a loophole which will make the proposals worthless. It is not clear that the proposals will address the harm done by religious leaders teaching that LGBT is in any way wrong. Q5. The Government considers that Ofcom's Broadcasting Code already provides measures against the broadcast and promotion of conversion therapy. How far do you agree or disagree with this? Why do you think this? There is a text box. I did not answer this question. The Ofcom Code contains rules against offence and harm which would conceivably prevent promotion of conversion practices, but I have not studied it sufficiently to see what amendments might be necessary. Please put any suggestions of amendments, or further reading, in the comments. Q6. Do you know of any examples of broadcasting that you consider to be endorsing or promoting conversion therapy? 
If yes, can you tell us what these examples are? I am tempted to reply that broadcasting which fails to challenge transphobia might incite people to revert. I don't think such a reply will do any good. Q7. The Government considers that the existing codes set out by the Advertising Standards Authority and the Committee of Advertising Practice already prohibits the advertisement of conversion therapy. How far do you agree or disagree with this? There are only multiple choice options, from strongly agree to strongly disagree. I had a look at the ASA non-broadcast code. Again, it has provisions against harm and on health-related products. I don't have the expertise to assess whether amendments are necessary. I would welcome suggestions in the comments. I did not find them useful in restraining a hate campaign, but this is not the place to complain. Q8. Do you know of any examples of advertisements that you consider to be endorsing or promoting conversion therapy? If yes, can you tell us what these examples are? There is a text box, but I have no examples come to mind. Q9. The consultation document describes proposals to introduce conversion therapy protection orders to tackle a gap in provision for victims of the practice. To what extent do you agree or disagree that there is a gap in the provision for victims of conversion therapy? There are headings for the questions. The heading for questions 9-10 is "protecting people from conversion therapy overseas", but this heading appears misleading. It appears the proposed protection orders could protect someone from therapy in Britain. Q9 gives multiple choice options from strongly agree to strongly disagree. Q10. To what extent do you agree or disagree with our proposals for addressing the gap we have identified? Why do you think this? The proposals are at paragraphs 72-78 of the document. I wrote, The proposals would allow a local authority, or any other person with the court's permission- a friend, a teacher- to apply for a protection order so that no-one undergoes conversion therapy. Malicious persons could do a great deal of harm with this if the suggestion that cis to trans conversion therapy exists is put into law. Q11. Charity trustees are the people who are responsible for governing a charity and directing how it is managed and run. The consultation document describes proposals whereby anyone found guilty of carrying out conversion therapy will have the case against them for being disqualified from serving as a trustee at any charity strengthened. To what extent do you agree or disagree with this approach? Why do you think this? A charity trustee could be disqualified if they have declared bankruptcy, have convictions on crimes involving dishonesty, and a range of crimes including terrorism, bribery, perjury (which I would have thought involved dishonesty). Note the reference to "found guilty". This refers to criminal prosecution not civil penalties. I find myself suspicious because of the "cis to trans conversion" falsehood, but generally agree someone guilty of conversion therapy should not be a charity trustee. I have nothing specific to add. A charity can ask for a waiver, permitting someone with a conviction to serve as trustee: here is an account of such an application. In answer to the question, I wrote, I find myself suspicious because of the "cis to trans conversion" falsehood, but generally agree someone guilty of conversion therapy should not be a charity trustee. Q12. 
To what extent do you agree or disagree that the following organisations are providing adequate action against people who might already be carrying out conversion therapy? (Police; Crown Prosecution Service; OTHER statutory service)? Why do you think this? Q13. To what extent do you agree or disagree that the following organisations are providing adequate support for victims of conversion therapy? (Police; Crown Prosecution Service; OTHER statutory service)? Why do you think this? Here there is a text box for "Why do you think this", and multiple choice for each organisation, again from strongly agree to strongly disagree. Q14. Do you think that these services can do more to support victims of conversion therapy? If yes, what more do you think they could do? I wrote, "Clergy regularly damn LGBT people to hell. This is thought of as normal. Of course the police, CPS and statutory services do nothing about it." Economic appraisal Q15. Do you have any evidence on the economic or financial costs or benefits of any of the proposals set out in the consultation? If yes, please can you provide us with details of this evidence, including where possible, any references to publications? The proposal to make cis to trans conversion unlawful, by restricting support for trans people, will damage trans people's mental health and restrict the contribution we can make to society. Already, the failure to accept trans people in British society and the institutional, internalised and generalised transphobia damages us. A well-drafted conversion practices law could reduce that damage. Equalities impacts appraisal Q16. There is a duty on public authorities to consider or think about how their policies or decisions affect people who are protected under the Equality Act 2010. Do you have any evidence of the equalities impacts of any proposals set out in the consultation? The proposal to ban wholly mythical cis to trans conversion therapy is a deliberate wrecking ball aimed to destroy any value in the proposed legislation. Trans people are protected by the characteristic of gender reassignment. Cis people are not. There is no right to prevent discrimination in favour of trans people. This is because it does not exist. No-one attempts to convert someone from cis to trans, just as no gay people convert impressionable young people. Medical professionals act as gatekeepers, preventing trans people from progressing in treatment as fast as we would wish. If there is a law against converting someone from cis to trans, or from straight to gay, therapists helping trans people will be restrained by fear. What if their patient, under the incredible pressure of transphobic hatred everywhere in British society, reverts? What if they then deny they were ever trans? The result will be even less support and treatment for trans people. The proposal also adversely affects mentally disabled people. Even greater scrutiny will be placed on their ability to consent to trans-affirming treatment and their autonomy. Questions related to privacy Q17. Would you like your response to be treated as confidential? Q18. What is your email address? If you enter your email address then you will automatically receive an acknowledgement email when you submit your response. I await the report on the consultation. Art sometimes expresses human feelings beautifully: This entry was tagged consultation, conversion therapy. Bookmark the permalink. 
4 thoughts on "Answering the conversion therapy consultation questions"

31st October, 2021 at 8:53 pm
Wow, wow, wow. This is very disturbing. The recent report from Transgender Trust is chilling too. I will seek a response from my GIC as they should be contributing to the consultation in a very robust way; I wonder who will show the courage and leadership that I read in every one of your essays. Thank you Clare. In any decent version of this world you would be celebrated by those in authority (who seem to be doing their best to crush us) for all you are doing.

Thank you, Alice. Cis to trans conversion practices are a completely ridiculous idea. Though, "forced feminisation" is offered by some dominatrices - perhaps the people who thought the idea up are so ashamed of their fantasies they want to make them illegal. Tories!

Sue Richmond
8th November, 2021 at 9:52 pm
Dear Clare, I've read your posts on conversion therapy and the UK consultation several times over the last week or so and wanted firstly to thank you for your careful analysis and consideration. I too am worried by the suggestion that cis to trans conversion might be deemed possible and the implications that has for young transitioners and for detransitioners. Secondly, I'd like to say how sorry I am to read of your own conversion therapy and the abusive way you were treated. You deserved better. I have now written on my own blog (https://suerichmond.blogspot.com/) about my own attempts at getting spiritual therapy to cure me of being trans. Not as dramatic or abusive as yours but it has taken a lot out of me to recall it. I will be writing about my reaction to the consultation itself in a few days but I am alerting my LGBT friends to it and its flaws. There's also the ladies toilets issue … sigh. I'm afraid the current political and media situation in Britain is going to worsen and trans people are an easy target for groups that are normally mutually antagonistic. I have actually left the UK and part of me says it's no longer my problem but my revulsion for the current situation there is so intense that I feel I need to do all I can to fight it. Again, thanks for being so alert on issues like this.

Sue, thank you for sharing that. It is a brave account. It seems the ministers you spoke to did not make a particular effort to make you cis - it was the whole religious experience, the teachings, the worship, the general need to conform. That's why religious teaching that LGBT is bad is so harmful, but the consultation has ruled out any legal action against that.
<?php require 'auth.inc'; require 'guiconfig.inc'; $sphere_scriptname = basename(__FILE__); $sphere_header = 'Location: '.$sphere_scriptname; $sphere_array = []; $sphere_record = []; $checkbox_member_name = 'checkbox_member_array'; $checkbox_member_array = []; $checkbox_member_record = []; $gt_record_loc = gtext('Record is locked'); $img_path = [ 'add' => 'images/add.png', 'mod' => 'images/edit.png', 'del' => 'images/delete.png', 'loc' => 'images/locked.png', 'unl' => 'images/unlocked.png', 'mai' => 'images/maintain.png', 'inf' => 'images/info.png', 'ena' => 'images/status_enabled.png', 'dis' => 'images/status_disabled.png', 'mup' => 'images/up.png', 'mdn' => 'images/down.png' ]; $prerequisites_ok = true; function verify_filesystem_name($arg) { $returnvalue = false; switch ($arg) { // verify filesystem name default: // invalid parameter value break; case 'zfs': case 'softraid': case 'ufsgpt': // case 'ext2': case 'msdos': $returnvalue = true; break; } return $returnvalue; } $do_format = []; $a_control_matrix = [ 1 => [ 'zfs' => ['page' => 1,'filesystem' => 2,'minspace' => 0,'volumelabel' => 0,'aft4k' => 0,'zfsgpt' => 0,'notinitmbr' => 0], 'softraid' => ['page' => 1,'filesystem' => 2,'minspace' => 0,'volumelabel' => 0,'aft4k' => 0,'zfsgpt' => 0,'notinitmbr' => 0], 'ufsgpt' => ['page' => 1,'filesystem' => 2,'minspace' => 0,'volumelabel' => 0,'aft4k' => 0,'zfsgpt' => 0,'notinitmbr' => 0], 'ext2' => ['page' => 1,'filesystem' => 2,'minspace' => 0,'volumelabel' => 0,'aft4k' => 0,'zfsgpt' => 0,'notinitmbr' => 0], 'msdos' => ['page' => 1,'filesystem' => 2,'minspace' => 0,'volumelabel' => 0,'aft4k' => 0,'zfsgpt' => 0,'notinitmbr' => 0], 'default' => ['page' => 1,'filesystem' => 2,'minspace' => 0,'volumelabel' => 0,'aft4k' => 0,'zfsgpt' => 0,'notinitmbr' => 0], ], 2 => [ 'zfs' => ['page' => 2,'filesystem' => 1,'minspace' => 0,'volumelabel' => 2,'aft4k' => 0,'zfsgpt' => 2,'notinitmbr' => 2], 'softraid' => ['page' => 2,'filesystem' => 1,'minspace' => 0,'volumelabel' => 0,'aft4k' => 0,'zfsgpt' => 0,'notinitmbr' => 2], 'ufsgpt' => ['page' => 2,'filesystem' => 1,'minspace' => 2,'volumelabel' => 2,'aft4k' => 2,'zfsgpt' => 0,'notinitmbr' => 2], 'ext2' => ['page' => 2,'filesystem' => 1,'minspace' => 0,'volumelabel' => 2,'aft4k' => 0,'zfsgpt' => 0,'notinitmbr' => 2], 'msdos' => ['page' => 2,'filesystem' => 1,'minspace' => 0,'volumelabel' => 2,'aft4k' => 0,'zfsgpt' => 0,'notinitmbr' => 2], 'default' => ['page' => 1,'filesystem' => 2,'minspace' => 0,'volumelabel' => 0,'aft4k' => 0,'zfsgpt' => 0,'notinitmbr' => 0] ], 3 => [ 'zfs' => ['page' => 3,'filesystem' => 1,'minspace' => 0,'volumelabel' => 1,'aft4k' => 0,'zfsgpt' => 1,'notinitmbr' => 1], 'softraid' => ['page' => 3,'filesystem' => 1,'minspace' => 0,'volumelabel' => 0,'aft4k' => 0,'zfsgpt' => 0,'notinitmbr' => 1], 'ufsgpt' => ['page' => 3,'filesystem' => 1,'minspace' => 1,'volumelabel' => 1,'aft4k' => 1,'zfsgpt' => 0,'notinitmbr' => 1], 'ext2' => ['page' => 3,'filesystem' => 1,'minspace' => 0,'volumelabel' => 1,'aft4k' => 0,'zfsgpt' => 0,'notinitmbr' => 1], 'msdos' => ['page' => 3,'filesystem' => 1,'minspace' => 0,'volumelabel' => 1,'aft4k' => 0,'zfsgpt' => 0,'notinitmbr' => 1], 'default' => ['page' => 1,'filesystem' => 2,'minspace' => 0,'volumelabel' => 0,'aft4k' => 0,'zfsgpt' => 0,'notinitmbr' => 0] ], 4 => [ 'zfs' => ['page' => 4,'filesystem' => 1,'minspace' => 0,'volumelabel' => 1,'aft4k' => 0,'zfsgpt' => 1,'notinitmbr' => 1], 'softraid' => ['page' => 4,'filesystem' => 1,'minspace' => 0,'volumelabel' => 0,'aft4k' => 0,'zfsgpt' => 0,'notinitmbr' => 1], 
'ufsgpt' => ['page' => 4,'filesystem' => 1,'minspace' => 1,'volumelabel' => 1,'aft4k' => 1,'zfsgpt' => 0,'notinitmbr' => 1], 'ext2' => ['page' => 4,'filesystem' => 1,'minspace' => 0,'volumelabel' => 1,'aft4k' => 0,'zfsgpt' => 0,'notinitmbr' => 1], 'msdos' => ['page' => 4,'filesystem' => 1,'minspace' => 0,'volumelabel' => 1,'aft4k' => 0,'zfsgpt' => 0,'notinitmbr' => 1], 'default' => ['page' => 1,'filesystem' => 2,'minspace' => 0,'volumelabel' => 0,'aft4k' => 0,'zfsgpt' => 0,'notinitmbr' => 0] ] ]; $a_button_matrix = [ 1 => ['submit_value' => gtext('Next' ),'submit_name' => 'action1','submit_control' => 2,'cancel_value' => gtext('Cancel'),'cancel_name' => 'cancel1','cancel_control' => 0,'checkbox_control' => 2], 2 => ['submit_value' => gtext('Next' ),'submit_name' => 'action2','submit_control' => 2,'cancel_value' => gtext('Back' ),'cancel_name' => 'cancel2','cancel_control' => 2,'checkbox_control' => 2], 3 => ['submit_value' => gtext('Format'),'submit_name' => 'action3','submit_control' => 2,'cancel_value' => gtext('Back' ),'cancel_name' => 'cancel3','cancel_control' => 2,'checkbox_control' => 1], 4 => ['submit_value' => gtext('OK' ),'submit_name' => 'action4','submit_control' => 2,'cancel_value' => gtext('Back' ),'cancel_name' => 'cancel4','cancel_control' => 0,'checkbox_control' => 1] ]; $l_filesystem = [ 'ufsgpt' => gtext('UFS (GPT and Soft Updates)'), 'msdos' => gtext('FAT32'), // 'ext2' => gtext('EXT2'), 'softraid' => gtext('Software RAID'), 'zfs' => gtext('ZFS Storage Pool') ]; $l_minspace = [ '8' => '8%', '7' => '7%', '6' => '6%', '5' => '5%', '4' => '4%', '3' => '3%', '2' => '2%', '1' => '1%' ]; $a_option = (isset($_POST) && is_array($_POST)) ? $_POST : []; if(isset($a_option['filesystem'])): // $a_option['filesystem'] = array_key_exists($a_option['filesystem'],$l_filesystem) ? $a_option['filesystem'] : 'zfs'; else: $a_option['filesystem'] = 'zfs'; endif; if(isset($a_option['checkbox_member_array']) && is_array($a_option['checkbox_member_array'])): else: $a_option['checkbox_member_array'] = []; endif; if(isset($a_option['volumelabel']) && preg_match('/\S/',$a_option['volumelabel'])): $a_option['volumelabel'] = htmlspecialchars(trim($a_option['volumelabel'])); else: $a_option['volumelabel'] = ''; endif; if(isset($a_option['minspace']) && array_key_exists($a_option['minspace'],$l_minspace)): else: $a_option['minspace'] = '8'; endif; $a_option['aft4k'] = isset($a_option['aft4k']); $a_option['zfsgpt'] = isset($a_option['zfsgpt']); $a_option['notinitmbr'] = isset($a_option['notinitmbr']); // Get OS partition $bootdevice = trim(file_get_contents("{$g['etc_path']}/cfdevice")); // Get list of all configured disks (physical and virtual). $sphere_array = get_conf_all_disks_list_filtered(); // Protect devices which are invalid or in use foreach($sphere_array as &$sphere_record): if(0 === strcmp($sphere_record['size'],'NA')): $sphere_record['protected'] = true; $sphere_record['protected.reason'] = gtext('Unknown size'); elseif(1 === disks_exists($sphere_record['devicespecialfile'])): $sphere_record['protected'] = true; $sphere_record['protected.reason'] = gtext('Device not found'); elseif(disks_ismounted_ex($sphere_record['devicespecialfile'],"devicespecialfile")): $sphere_record['protected'] = true; $sphere_record['protected.reason'] = gtext('Device is mounted'); elseif(1 === preg_match('~\A' . preg_quote($sphere_record['name'],'~') . 
'(\D+|\z)~',$bootdevice)): $sphere_record['protected'] = true; $sphere_record['protected.reason'] = gtext('Device contains boot partition'); else: $sphere_record['protected'] = false; $sphere_record['protected.reason'] = ''; endif; endforeach; unset($sphere_record); // release pass by reference // cleanup checkbox_member_array // Remove checkbox_member_array records which are protected in $sphere_array // Set enabled property in $sphere_array for those who can be selected $a_member_update = []; foreach($a_option['checkbox_member_array'] as $checkbox_member_record): if(false !== ($index = array_search_ex($checkbox_member_record,$sphere_array,'uuid'))): if(!$sphere_array[$index]['protected']): $sphere_array[$index]['enabled'] = true; $a_member_update[] = $checkbox_member_record; endif; endif; endforeach; $a_option['checkbox_member_array'] = $a_member_update; $page_index = 1; $a_control = $a_control_matrix[$page_index]['default']; $a_button = $a_button_matrix[$page_index]; if(isset($a_option['cancel1']) && $a_option['cancel1']): // cancel button has been pressed on page 1, we want to stay on page 1 elseif(isset($a_option['cancel2']) && $a_option['cancel2']): // back button has been pressed on page 2, return to page 1 elseif(isset($a_option['cancel3']) && $a_option['cancel3']): // back button has been pressed on page 3, return to page 2 if($prerequisites_ok): $page_index = 2; $a_control = $a_control_matrix[$page_index][$a_option['filesystem']]; $a_button = $a_button_matrix[$page_index]; endif; elseif(isset($a_option['cancel4']) && $a_option['cancel4']): // back button has been pressed on page 4, return to page 1 elseif(isset($a_option['action1']) && $a_option['action1']): // next button has been pressed on page 1, we want to display page 2 // expectation: filesystem has been chosen. if($prerequisites_ok): // verify filesystem type $prerequisites_ok = (isset($a_option['filesystem']) && verify_filesystem_name($a_option['filesystem'])); // filesystem type could be invalid, we need to return to page 1 to be able to select a valid filesystem. Nothing to do here because page 1 is set by default endif; if($prerequisites_ok): $page_index = 2; $a_control = $a_control_matrix[$page_index][$a_option['filesystem']]; $a_button = $a_button_matrix[$page_index]; endif; elseif(isset($a_option['action2']) && $a_option['action2']): // next button has been pressed on page 2, we want to display page 3 // expectation: filesystem has been chosen, disks have been selected. if($prerequisites_ok): // verify filesystem type $prerequisites_ok = (isset($a_option['filesystem']) && verify_filesystem_name(htmlspecialchars($a_option['filesystem']))); // filesystem type could be invalid, we need to return to page 1 to be able to select a valid filesystem. Nothing to do here because page 1 is set by default endif; if($prerequisites_ok): // verify selected disks if(false === ($prerequisites_ok = (isset($a_option['checkbox_member_array']) && is_array($a_option['checkbox_member_array']) && (count($a_option['checkbox_member_array']) > 0)))): // no disks selected, we stay on page 2 $page_index = 2; $a_control = $a_control_matrix[$page_index][$a_option['filesystem']]; $a_button = $a_button_matrix[$page_index]; endif; endif; if($prerequisites_ok): if(preg_match('/^(ufsgpt|msdos)/',$a_option['filesystem']) && preg_match('/\S/',$a_option['volumelabel'])): $helpinghand = preg_quote('[%', '/'); if(preg_match('/^[a-z\d' . $helpinghand . 
']+$/i',$a_option['volumelabel'])): // additional check is required for adding serial number information to the label $label_serial = []; $label_serial['trigger'] = '['; $label_serial['match'] = '([1-9]\d?)'; $label_serial['regex'] = '/' . preg_quote($label_serial['trigger']) . $label_serial['match'] . '/'; $label_serial['count'] = substr_count($a_option['volumelabel'],$label_serial['trigger']); // count occurrences of the initiating character if($label_serial['count'] > 0): // one or more occurrences found? if($label_serial['count'] !== preg_match_all($label_serial['regex'],$a_option['volumelabel'])): // count must match, otherwise something went wrong $input_errors[] = sprintf(gtext("The attribute '%s' may only consist of the characters [a-z], [A-Z] and [0-9]."),gtext('Volume Label')); $prerequisites_ok = false; // invalid volume label pattern, we stay on page 2 $page_index = 2; $a_control = $a_control_matrix[$page_index][$a_option['filesystem']]; $a_button = $a_button_matrix[$page_index]; endif; endif; else: // invalid volume label pattern, we stay on page 2 $input_errors[] = sprintf(gtext("The attribute '%s' may only consist of the characters [a-z], [A-Z] and [0-9]."),gtext('Volume Label')); $prerequisites_ok = false; $page_index = 2; $a_control = $a_control_matrix[$page_index][$a_option['filesystem']]; $a_button = $a_button_matrix[$page_index]; endif; endif; endif; if($prerequisites_ok): if(preg_match('/^(zfs)/',$a_option['filesystem']) && preg_match('/\S/',$a_option['volumelabel'])): $helpinghand = preg_quote('[%.-_','/'); if(preg_match('/^[a-z\d' . $helpinghand . ']+$/i',$a_option['volumelabel'])): // additional check is required for adding serial number information to the label $label_serial = []; $label_serial['trigger'] = '['; $label_serial['match'] = '([1-9]\d?)'; $label_serial['regex'] = '/' . preg_quote($label_serial['trigger']) . $label_serial['match'] . '/'; $label_serial['count'] = substr_count($a_option['volumelabel'],$label_serial['trigger']); // count occurrences of the initiating character if($label_serial['count'] > 0): // one or more occurrences found? if($label_serial['count'] !== preg_match_all($label_serial['regex'],$a_option['volumelabel'])): // count must match, otherwise something went wrong $input_errors[] = sprintf(gtext("The attribute '%s' may only consist of the characters [a-z], [A-Z], [0-9] and [._-]."),gtext('Volume Label')); $prerequisites_ok = false; // invalid volume label defined, we stay on page 2 $page_index = 2; $a_control = $a_control_matrix[$page_index][$a_option['filesystem']]; $a_button = $a_button_matrix[$page_index]; endif; endif; else: // invalid volume label pattern, we stay on page 2 $input_errors[] = sprintf(gtext("The attribute '%s' may only consist of the characters [a-z], [A-Z], [0-9] and [._-]."),gtext('Volume Label')); $prerequisites_ok = false; $page_index = 2; $a_control = $a_control_matrix[$page_index][$a_option['filesystem']]; $a_button = $a_button_matrix[$page_index]; endif; endif; endif; if($prerequisites_ok): $page_index = 3; $a_control = $a_control_matrix[$page_index][$a_option['filesystem']]; $a_button = $a_button_matrix[$page_index]; endif; elseif(isset($a_option['action3']) && $a_option['action3']): // format button has been pressed on page 3, we want to format // expectation: filesystem has been chosen, disks have been selected, options have been set. 
if($prerequisites_ok): // verify filesystem type $prerequisites_ok = (isset($a_option['filesystem']) && verify_filesystem_name($a_option['filesystem'])); // filesystem type could be invalid, we need to return to page 1 to be able to select a valid filesystem. Nothing to do here because page 1 is set by default endif; if($prerequisites_ok): // verify selected disks if(false === ($prerequisites_ok = (isset($a_option['checkbox_member_array']) && is_array($a_option['checkbox_member_array']) && (count($a_option['checkbox_member_array']) > 0)))): // no disks selected, we need to return to page 2 to be able to select disks $page_index = 2; $a_control = $a_control_matrix[$page_index][$a_option['filesystem']]; $a_button = $a_button_matrix[$page_index]; endif; endif; if($prerequisites_ok): if(preg_match('/^(ufsgpt|msdos)/',$a_option['filesystem']) && preg_match('/\S/',$a_option['volumelabel'])): $helpinghand = preg_quote('[%','/'); if(preg_match('/^[a-z\d' . $helpinghand . ']+$/i',$a_option['volumelabel'])): // additional check is required for adding serial number information to the label $label_serial = []; $label_serial['trigger'] = '['; $label_serial['match'] = '([1-9]\d?)'; $label_serial['regex'] = '/' . preg_quote($label_serial['trigger']) . $label_serial['match'] . '/'; $label_serial['count'] = substr_count($a_option['volumelabel'],$label_serial['trigger']); // count occurrences of the initiating character if($label_serial['count'] > 0): // one or more occurrences found? if($label_serial['count'] !== preg_match_all($label_serial['regex'],$a_option['volumelabel'])): // count must match, otherwise something went wrong $input_errors[] = sprintf(gtext("The attribute '%s' may only consist of the characters [a-z], [A-Z] and [0-9]."),gtext('Volume Label')); $prerequisites_ok = false; // invalid volume label pattern, we stay on page 2 $page_index = 2; $a_control = $a_control_matrix[$page_index][$a_option['filesystem']]; $a_button = $a_button_matrix[$page_index]; endif; endif; else: // invalid volume label defined, we stay on page 3 $input_errors[] = sprintf(gtext("The attribute '%s' may only consist of the characters [a-z], [A-Z] and [0-9]."),gtext('Volume Label')); $prerequisites_ok = false; $page_index = 3; $a_control = $a_control_matrix[$page_index][$a_option['filesystem']]; $a_button = $a_button_matrix[$page_index]; endif; endif; endif; if($prerequisites_ok): if(preg_match('/^(zfs)/',$a_option['filesystem']) && preg_match('/\S/',$a_option['volumelabel'])): $helpinghand = preg_quote('[%.-_','/'); if(preg_match('/^[a-z\d' . $helpinghand . ']+$/i',$a_option['volumelabel'])): // additional check is required when adding serial number information to the label $label_serial = []; $label_serial['trigger'] = '['; $label_serial['match'] = '([1-9]\d?)'; $label_serial['regex'] = '/' . preg_quote($label_serial['trigger']) . $label_serial['match'] . '/'; $label_serial['count'] = substr_count($a_option['volumelabel'],$label_serial['trigger']); // count occurrences of the initiating character if($label_serial['count'] > 0): // one or more occurrences found? 
if($label_serial['count'] !== preg_match_all($label_serial['regex'],$a_option['volumelabel'])): // count must match, otherwise something went wrong $input_errors[] = sprintf(gtext("The attribute '%s' may only consist of the characters [a-z], [A-Z], [0-9] and [._-]."),gtext('Volume Label')); $prerequisites_ok = false; // invalid volume label pattern, we stay on page 2 $page_index = 2; $a_control = $a_control_matrix[$page_index][$a_option['filesystem']]; $a_button = $a_button_matrix[$page_index]; endif; endif; else: // invalid volume label defined, we stay on page 2 $input_errors[] = sprintf(gtext("The attribute '%s' may only consist of the characters [a-z], [A-Z], [0-9] and [._-]."),gtext('Volume Label')); $prerequisites_ok = false; $page_index = 2; $a_control = $a_control_matrix[$page_index][$a_option['filesystem']]; $a_button = $a_button_matrix[$page_index]; endif; endif; endif; if($prerequisites_ok): $page_index = 4 ; $a_control = $a_control_matrix[$page_index][$a_option['filesystem']]; $a_button = $a_button_matrix[$page_index]; // gather options and format selected disks $disk_options = []; $disk_options['zfsgpt'] = $a_option['zfsgpt'] ? 'p1' : ''; // set_conf_disk_fstype_opt knows how to deal with it if filesystem is not zfs // check for allowed characters, otherwise reset volumelabel $volumelabel_pattern = (preg_match('/(ufsgpt|msdos|zfs)/',$a_option['filesystem'])) ? $a_option['volumelabel'] : ''; // check if counters are part of the volume label $label_counter = []; if(preg_match('/\S/',$volumelabel_pattern)): // do we have a volumelabel pattern? $label_counter['trigger'] = '%'; $label_counter['match'] = '(\d*)'; $label_counter['regex'] = '/' . preg_quote($label_counter['trigger']) . $label_counter['match'] . '/'; $label_counter['count'] = substr_count($volumelabel_pattern,$label_counter['trigger']); // count occurrences of the initiating character if($label_counter['count'] > 0): // one or more occurrences found? if($label_counter['count'] === preg_match_all($label_counter['regex'],$volumelabel_pattern,$helpinghand)): // count must match, otherwise something went wrong $label_counter['needle'] = $helpinghand[0]; $label_counter['origin'] = $helpinghand[1]; $label_counter['replacement'] = []; $label_counter['pattern'] = []; for($i = 0; $i < $label_counter['count']; $i++): $label_counter['pattern'][$i] = '/' . preg_quote($label_counter['needle'][$i],'/') . '/'; // make regex pattern if(empty($label_counter['origin'][$i])): // using empty is ok $label_counter['replacement'][$i] = 0; // value of replacement if origin is empty else: $label_counter['replacement'][$i] = $label_counter['origin'][$i]; // value of replacement if origin is not empty (starting number) endif; endfor; else: $label_counter = []; $volumelabel_pattern = ''; endif; unset($helpinghand); else: $label_counter = []; endif; endif; // check if the drive's serial number is part of the volume label $label_serial = []; if(preg_match('/\S/',$volumelabel_pattern)): // do we have a volumelabel pattern? $label_serial['trigger'] = '['; $label_serial['match'] = '([1-9]\d?)'; $label_serial['regex'] = '/' . preg_quote($label_serial['trigger']) . $label_serial['match'] . '/'; $label_serial['count'] = substr_count($volumelabel_pattern,$label_serial['trigger']); // count occurrences of the initiating character if($label_serial['count'] > 0): // one or more occurrences found? 
if($label_serial['count'] === preg_match_all($label_serial['regex'],$volumelabel_pattern,$helpinghand)): // count must match, otherwise something went wrong $label_serial['needle'] = $helpinghand[0]; $label_serial['origin'] = $helpinghand[1]; $label_serial['replacement'] = []; $label_serial['pattern'] = []; for($i = 0; $i < $label_serial['count']; $i++): $label_serial['pattern'][$i] = '/' . preg_quote($label_serial['needle'][$i],'/') . '/'; // make regex pattern if(empty($label_serial['origin'][$i])): // using empty is ok $label_serial['replacement'][$i] = ''; // value of replacement if origin is empty else: $label_serial['replacement'][$i] = ''; // value of replacement if origin is not empty endif; endfor; else: $label_serial = []; $volumelabel_pattern = ''; endif; unset($helpinghand); else: $label_serial = []; endif; endif; foreach($a_option['checkbox_member_array'] as $checkbox_member_record): if(false !== ($index = array_search_ex($checkbox_member_record,$sphere_array,'uuid'))): if(!$sphere_array[$index]['protected']): set_conf_disk_fstype_opt($sphere_array[$index]['devicespecialfile'],$a_option['filesystem'],$disk_options); $volumelabel = $volumelabel_pattern; // apply counter to label if(!empty($label_counter)): $volumelabel = preg_replace($label_counter['pattern'],$label_counter['replacement'],$volumelabel,1); // increase counter; for($i = 0; $i < $label_counter['count']; $i++): $label_counter['replacement'][$i]++; endfor; endif; // apply serial number to label if(!empty($label_serial)): for($i = 0; $i < $label_serial['count']; $i++): if(false === ($label_serial['replacement'][$i] = substr($sphere_array[$index]['serial'],-$label_serial['origin'][$i],$label_serial['origin'][$i]))): $label_serial['replacement'][$i] = ''; endif; endfor; $volumelabel = preg_replace($label_serial['pattern'],$label_serial['replacement'],$volumelabel,1); endif; // prepare format $do_format[] = [ 'devicespecialfile' => $sphere_array[$index]['devicespecialfile'], 'filesystem' => $a_option['filesystem'], 'notinitmbr' => $a_option['notinitmbr'], 'minspace' => $a_option['minspace'], 'volumelabel' => $volumelabel, 'aft4k' => $a_option['aft4k'], 'zfsgpt' => $a_option['zfsgpt'] ]; endif; endif; endforeach; write_config(); endif; elseif(isset($a_option['action4']) && $a_option['action4']): // $page_index = 1; // $a_control = $a_control_matrix[$page_index][$a_option['filesystem']]; // $a_button = $a_button_matrix[$page_index]; endif; $pgtitle = [gtext('Disks'),gtext('Management'),gtext('HDD Format'),sprintf('%1$s %2$d',gtext('Step'),$page_index)]; ?> <?php include 'fbegin.inc'; ?> <script type="text/javascript"> //<![CDATA[ $(window).on("load", function() { // Init toggle checkbox $("#togglemembers").click(function() { togglecheckboxesbyname(this, "<?=$checkbox_member_name;?>[]"); }); // Init spinner onsubmit() $("#iform").submit(function() { spinner(); }); }); function togglecheckboxesbyname(ego, triggerbyname) { var a_trigger = document.getElementsByName(triggerbyname); var n_trigger = a_trigger.length; var i = 0; for (; i < n_trigger; i++) { if (a_trigger[i].type == 'checkbox') { if (!a_trigger[i].disabled) { a_trigger[i].checked = !a_trigger[i].checked; } } } if (ego.type == 'checkbox') { ego.checked = false; } } //]]> </script> <table id="area_navigator"><tbody> <tr><td class="tabnavtbl"><ul id="tabnav"> <li class="tabinact"><a href="disks_manage.php"><span><?=gtext('HDD Management');?></span></a></li> <li class="tabact"><a href="<?=$sphere_scriptname;?>" title="<?=gtext('Reload page');?>"><span><?=gtext('HDD 
Format');?></span></a></li> <li class="tabinact"><a href="disks_manage_smart.php"><span><?=gtext('S.M.A.R.T.');?></span></a></li> <li class="tabinact"><a href="disks_manage_iscsi.php"><span><?=gtext('iSCSI Initiator');?></span></a></li> </ul></td></tr> </tbody></table> <table id="area_data"><tbody><tr><td id="area_data_frame"><form action="<?=$sphere_scriptname;?>" method="post" id="iform" name="iform"> <?php if(!empty($input_errors)): print_input_errors($input_errors); endif; if(!empty($errormsg)): print_error_box($errormsg); endif; ?> <table class="area_data_settings"> <colgroup> <col class="area_data_settings_col_tag"> <col class="area_data_settings_col_data"> </colgroup> <thead> <?php html_titleline2(gtext('Format Options')); ?> </thead> <tbody> <?php switch($a_control['filesystem']): case 2: html_combobox2('filesystem',gtext('File System'),$a_option['filesystem'],$l_filesystem,gtext('Select file system format.'),true,false); break; case 1: html_combobox2('filesystem',gtext('File System'),$a_option['filesystem'],$l_filesystem,'',false,true); case 0: echo '<tr><td></td><td><input name="filesystem" type="hidden" value="',$a_option['filesystem'],'"/></td></tr>',"\n"; break; endswitch; switch($a_control['volumelabel']): case 2: html_inputbox2('volumelabel',gtext('Volume Label'),$a_option['volumelabel'],gtext('Volume label of the new file system. Use % for a counter or %n for a counter starting at number n, Use [n for the rightmost n characters of the device serial number.'),false,40,false); break; case 1: html_inputbox2('volumelabel',gtext('Volume Label'),$a_option['volumelabel'],'',false,100,true); break; case 0: echo '<tr><td></td><td><input name="volumelabel" type="hidden" value="',$a_option['volumelabel'],'"/></td></tr>',"\n"; break; endswitch; switch($a_control['minspace']): case 2: html_combobox2('minspace',gtext('Minimum Free Space'),$a_option['minspace'],$l_minspace,gtext('Specifiy the percentage of disk space to be held back from normal usage. 
Lowering this threshold can adversely affect performance and auto-defragmentation!'),true,false); break; case 1: html_combobox2('minspace',gtext('Minimum Free Space'),$a_option['minspace'],$l_minspace,'',false,true); case 0: echo '<tr><td></td><td><input name="minspace" type="hidden" value="',$a_option['minspace'],'"/></td></tr>',"\n"; break; endswitch; switch($a_control['aft4k']): case 2: html_checkbox2('aft4k',gtext('Advanced Format'),$a_option['aft4k'],gtext('Enable Advanced Format (4KB Sector Size).'),'',false,false); break; case 1: html_checkbox2('aft4k',gtext('Advanced Format'),$a_option['aft4k'],gtext('Enable Advanced Format (4KB Sector Size).'),'',false,true); case 0: if(true === $a_option['aft4k']): echo '<tr><td></td><td><input name="aft4k" type="hidden" value="yes"/></td></tr>',"\n"; endif; break; endswitch; switch($a_control['zfsgpt']): case 2: html_checkbox2('zfsgpt',gtext('GPT Partition'),$a_option['zfsgpt'],gtext('Create ZFS on a GPT partition.'),'',false,false); break; case 1: html_checkbox2('zfsgpt',gtext('GPT Partition'),$a_option['zfsgpt'],gtext('Create ZFS on a GPT partition.'),'',false,true); case 0: if(true === $a_option['zfsgpt']): echo '<tr><td></td><td><input name="zfsgpt" type="hidden" value="yes"/></td></tr>',"\n"; endif; break; endswitch; switch($a_control['notinitmbr']): case 2: html_checkbox2('notinitmbr',gtext('Erase MBR'),$a_option['notinitmbr'],gtext('Do not erase the Master Boot Record (useful for some RAID controller cards).'),'',false,false); break; case 1: html_checkbox2('notinitmbr',gtext('Erase MBR'),$a_option['notinitmbr'],gtext('Do not erase the Master Boot Record (useful for some RAID controller cards).'),'',false,true); case 0: if(true === $a_option['notinitmbr']): echo '<tr><td></td><td><input name="notinitmbr" type="hidden" value="yes"/></td></tr>',"\n"; endif; break; endswitch; ?> </tbody> <tfoot> <?php html_separator2(); ?> </tfoot> </table> <table class="area_data_selection"> <colgroup> <col style="width:5%"> <col style="width:10%"> <col style="width:15%"> <col style="width:10%"> <col style="width:15%"> <col style="width:15%"> <col style="width:20%"> <col style="width:10%"> </colgroup> <thead> <?php html_titleline2(gtext('Disk Selection'),8); ?> <tr> <?php switch ($a_button['checkbox_control']) { case 2: echo '<th class="lhelc"><input type="checkbox" id="togglemembers" name="togglemembers" title="',gtext('Invert Selection'),'"/></th>',"\n"; break; case 1: echo '<th class="lhelc"><input type="checkbox" id="togglemembers" name="togglemembers" title="',gtext('Invert Selection'),'" disabled="disabled"/></th>',"\n"; break; } ?> <th class="lhell"><?=gtext('Device');?></th> <th class="lhell"><?=gtext('Serial Number');?></th> <th class="lhell"><?=gtext('Size');?></th> <th class="lhell"><?=gtext('Path');?></th> <th class="lhell"><?=gtext('Filesystem');?></th> <th class="lhell"><?=gtext('Reason Code');?></th> <th class="lhebc"><?=gtext('Toolbox');?></th> </tr> </thead> <tbody> <?php foreach ($sphere_array as $sphere_record): ?> <tr> <?php $enabled = isset($sphere_record['enabled']); $notprotected = !$sphere_record['protected']; $tag_id = ' id="' . $sphere_record['uuid'] . '"'; $tag_name = ' name="' . $checkbox_member_name . '[]"'; $tag_value = ' value="' . $sphere_record['uuid'] . '"'; $tag_disabled = ' disabled="disabled"'; if(empty($sphere_record['fstype'])): $gt_fstype = gtext('Unknown or Unformatted'); else: $gt_fstype = htmlspecialchars(get_fstype_shortdesc($sphere_record['fstype'])); endif; if($notprotected): $tag_checked = $enabled ? 
' checked="checked"' : ''; switch ($a_button['checkbox_control']): case 2: echo '<td class="lcelc"><input type="checkbox"',$tag_name,$tag_value,$tag_id,$tag_checked,'/></td>',"\n"; break; case 1: echo '<td class="lcelc"><input type="checkbox"',$tag_name,$tag_value,$tag_id,$tag_disabled,$tag_checked,'/></td>',"\n"; if($enabled): echo '<input type="hidden"',$tag_name,$tag_value,'/>',"\n"; endif; break; case 0: echo '<td></td>',"\n"; if($enabled): echo '<input type="hidden"',$tag_name,$tag_value,'/>',"\n"; endif; break; endswitch; else: echo '<td class="lcelcd"><input type="checkbox"',$tag_name,$tag_value,$tag_id,$tag_disabled,'/></td>',"\n"; endif; ?> <td class="<?=$notprotected ? 'lcell' : 'lcelld';?>"><?=htmlspecialchars($sphere_record['name']);?></td> <td class="<?=$notprotected ? 'lcell' : 'lcelld';?>"><?=htmlspecialchars($sphere_record['serial']);?></td> <td class="<?=$notprotected ? 'lcell' : 'lcelld';?>"><?=htmlspecialchars($sphere_record['size']);?></td> <td class="<?=$notprotected ? 'lcell' : 'lcelld';?>"><?=htmlspecialchars($sphere_record['devicespecialfile']);?></td> <td class="<?=$notprotected ? 'lcell' : 'lcelld';?>"><?=$gt_fstype;?></td> <td class="<?=$notprotected ? 'lcell' : 'lcelld';?>"><?=htmlspecialchars($sphere_record['protected.reason']);?></td> <td class="lcebld"><table class="area_data_selection_toolbox"><tbody><tr> <td> <?php if($notprotected): else: echo '<img src="',$img_path['loc'],'" title="',$gt_record_loc,'" alt="',$gt_record_loc . '"/>',"\n"; endif; ?> </td> <td></td> <td></td> </tr></tbody></table></td> </tr> <?php endforeach; ?> </tbody> </table> <div id="submit"> <?php switch($a_button['submit_control']): case 2: echo '<input type="submit" class="formbtn" name="',$a_button['submit_name'],'" value="',$a_button['submit_value'],'"/>',"\n"; break; case 1: echo '<input type="submit" class="formbtn" name="',$a_button['submit_name'],'" value="',$a_button['submit_value'],'" disabled="disabled"/>',"\n"; break; endswitch; switch($a_button['cancel_control']): case 2: echo '<input type="submit" class="formbtn" name="',$a_button['cancel_name'],'" value="',$a_button['cancel_value'],'"/>',"\n"; break; case 1: echo '<input type="submit" class="formbtn" name="',$a_button['cancel_name'],'" value="',$a_button['cancel_value'],'" disabled="disabled"/>',"\n"; break; endswitch; ?> </div> <?php if(count($do_format) > 0): foreach($do_format as $do_format_disk): echo(sprintf("<div id='cmdoutput'>%s</div>",sprintf(gtext("Command output") . " for disk %s :",$do_format_disk['devicespecialfile']))); echo('<pre class="cmdoutput">'); disks_format($do_format_disk['devicespecialfile'],$do_format_disk['filesystem'],$do_format_disk['notinitmbr'],$do_format_disk['minspace'],$do_format_disk['volumelabel'],$do_format_disk['aft4k'],$do_format_disk['zfsgpt']); echo('</pre><br/>'); endforeach; endif; ?> <?php include 'formend.inc'; ?> </form></td></tr></tbody></table> <?php include 'fend.inc'; ?>
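As a side note on the volume label placeholders used above ('%' or '%n' for a per-disk counter and '[n' for the rightmost n characters of the drive serial), the substitution idea can be illustrated with a small standalone sketch. This is a simplified, hypothetical helper for illustration only, not the page's actual logic:

<?php
// Hypothetical, simplified illustration of the label placeholder idea:
// '%'  -> counter starting at 0, '%5' -> counter starting at 5 (offset by the disk index),
// '[4' -> the last 4 characters of the disk's serial number.
function render_volume_label(string $pattern, int $disk_index, string $serial): string {
    // replace %n counters (n optional)
    $label = preg_replace_callback('/%(\d*)/', function ($m) use ($disk_index) {
        $start = ($m[1] === '') ? 0 : (int)$m[1];
        return (string)($start + $disk_index);
    }, $pattern);
    // replace [n with the rightmost n characters of the serial number
    $label = preg_replace_callback('/\[([1-9]\d?)/', function ($m) use ($serial) {
        return substr($serial, -(int)$m[1]);
    }, $label);
    return $label;
}
// Example: render_volume_label('data%1_[4', 0, 'WD1234ABCD') returns 'data1_ABCD'.
?>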
{ "redpajama_set_name": "RedPajamaGithub" }
3,044
Auguste Choisy (7 February 1841, Vitry-le-François, France – 18 September 1909, Paris, France) was a French engineer and a historian of architecture and building technology.

Biography. Auguste Choisy was born into an architect's family. In 1861 he entered the École Polytechnique in Paris, and he later worked for the Corps des ponts et chaussées. He travelled to Greece with a French diplomatic mission, where, while studying the ancient monuments of Athens, he was among the first to note the curvature of the stylobate of ancient Greek temples. That curvature had repeatedly drawn researchers' attention since Greece gained independence in 1835, but Choisy was the first to suggest that it compensates for the distortions that arise in visual perception, making horizontal and vertical lines and planes appear straight.

In 1866 Choisy completed his studies at the École Polytechnique and his work at the Corps des ponts et chaussées and left for Italy, where his engineering training proved useful to the fellows of the French Academy in Rome. On returning to France in 1868 he took up his duties as an engineer based in Rethel, Champagne. During the war of 1870 he served as an officer in the engineering troops; it was then that he first met the eminent historian and restorer of architecture Eugène Viollet-le-Duc. The latter welcomed Choisy's still unpublished work on ancient Roman architecture in footnote No. 5 of his Dictionnaire raisonné de l'architecture française du XIe au XVIe siècle (1875): "A young French engineer, Monsieur Choisy, will soon publish a very complete work on the construction of Roman vaults, based on the monuments. This collection, which we hold in our hands, describes in detail the various processes used by these great builders and demonstrates most clearly that economy of expense was one of their chief concerns. We urge architects who seriously wish to know the processes employed by the Romans in construction to consult Monsieur Choisy's work on this subject."

In 1873 Choisy published L'art de bâtir chez les Romains (The Building Art of the Romans), in which he demonstrated his method of drawing analytical reconstructions of monuments in axonometric perspective. This way of representing a building remained his favourite: it shows a structure clearly from several sides at once, in plan, perspective and projection.

From 1876 Choisy taught the history of architecture: he lectured at the École des ponts et chaussées, taught at the École Polytechnique and also at the École d'horticulture de Versailles. Alongside his teaching he travelled widely in Greece, Turkey and the Near East (1875). He was put in charge of the trans-Saharan railway project, the only such project in his career as a road-building engineer, which took him to North Africa in 1880. In 1899 Choisy published his principal work, which summed up his earlier studies: the Histoire de l'architecture. This two-volume work covers the period from prehistory to the 18th century and takes in not only Western European architecture but also that of the Near, Middle and Far East.

In the early 20th century Choisy's works were subjected to harsh and in many respects justified criticism, mainly on the grounds that, being a construction engineer, he replaced the history of architecture with a history of building structures. Such, in particular, are the notes and commentaries of N. I. Brunov, V. D. Blavatsky, B. P. Denike and others to the Russian edition of the Histoire de l'architecture undertaken by the All-Union Academy of Architecture in 1935–1937. Nevertheless, the significance of Choisy's Histoire de l'architecture is great, and it remained influential for a long time, above all for its account of the technical details of the art of building. Choisy's studies particularly inspired the proponents of structural rationalism, the Constructivist architects and the Functionalists. "Despite its relatively narrow task, this book is full of subtle observations, interesting facts and witty conclusions."

In 1903 Choisy was awarded the Gold Medal of the Royal Institute of British Architects (RIBA). He died in 1909 without having completed his critical and illustrated edition of Vitruvius's treatise Ten Books on Architecture.

Principal works: L'art de bâtir chez les Romains (The Building Art of the Romans), 1873; L'art de bâtir chez les Byzantins (The Building Art of the Byzantines), 1883; Histoire de l'architecture (History of Architecture), 2 vols., 1899, Russian translations 1910 and 1935; L'art de bâtir chez les Égyptiens (The Building Art of the Egyptians), 1904.

Notes. Literature: Massimiliano Savorra, Una storia per gli ingegneri. Corrispondenze e continuità tra Léonce Reynaud, Fernand de Dartein e Auguste Choisy, in M. Lansberger (ed.), La lezione di Auguste Choisy, monographic issue of Parametro, no. 255, January–February 2005, pp. 40–45.

External links: Auguste Choisy on the architecture portal archi-story.ru. Categories: historians of architecture; theorists of architecture.
{ "redpajama_set_name": "RedPajamaWikipedia" }
7,719
To open a coffee shop in the "coffee with you" format as a franchise, a start-up investment of 150,000 to 200,000 dollars can be enough for a budding entrepreneur. How do you promote a website to the top of Yandex and Google? Which companies can help promote a site in search engines? And what determines the price of search engine optimization? More and more often, when I walk through my garage cooperative, I see the men there tinkering away all the time, with people gathered around them.
{ "redpajama_set_name": "RedPajamaC4" }
8,710
//
//  UIView+CheckForUnsatisfiableConstraints.h
//  YXTUnsatisfiableConstraintsDetector
//

#import <UIKit/UIKit.h>

/**
 * UIView category to allow testing for unsatisfiable constraints.
 */
@interface UIView (CheckForUnsatisfiableConstraints)

/**
 * Check for unsatisfiable constraints on this view that are mentioned in the provided message.
 *
 * @param message Console message to be tested.
 *
 * @return True if one or more of this view's constraints are listed in the message.
 */
- (BOOL)yxt_hasUnsatisfiableConstraintForMessage:(NSString *)message;

@end
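A minimal usage sketch of this category follows; it is illustrative only, and the test class, view and capturedConsoleMessage string are hypothetical stand-ins (in practice the message would come from whatever hook the project uses to capture console output):

// Hypothetical unit-test sketch; capturedConsoleMessage stands in for a console
// line such as "Unable to simultaneously satisfy constraints...".
#import <XCTest/XCTest.h>
#import "UIView+CheckForUnsatisfiableConstraints.h"

@interface ConstraintCheckTests : XCTestCase
@end

@implementation ConstraintCheckTests

- (void)testViewHasNoUnsatisfiableConstraints {
    UIView *view = [[UIView alloc] initWithFrame:CGRectZero];
    NSString *capturedConsoleMessage = @""; // would come from a console-capturing hook
    XCTAssertFalse([view yxt_hasUnsatisfiableConstraintForMessage:capturedConsoleMessage]);
}

@end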
{ "redpajama_set_name": "RedPajamaGithub" }
4,030
Q: Python Beautiful Soup Web Scraping Transfermkt Arrays Not all Same Length I am looking to scrape data for teams over a period of years across countries from https://www.transfermarkt.com/premier-league/startseite/wettbewerb/GB1/plus/?saison_id=2019 This site is an example of what I am looking to scrape including the table with squad size, etc. in the middle of the page as well as the table with the match data on the right side of the page. I am using Beautiful Soup in python. Here is the code I have so far: import pandas as pd import numpy as np import requests import time from bs4 import BeautifulSoup import warnings warnings.filterwarnings('ignore') #create a dictionary for league and iterate over seasons dct_GB1 = {} dct_IT1 = {} dct_ES1 = {} dct_L1 = {} dct_FR1 = {} dct_PO1 = {} dct_NL1 = {} dct_TR1 = {} dct_BE1 = {} dct_UKR1 = {} dct_A1 = {} dct_GR1 = {} dct_TS1 = {} dct_SC1 = {} dct_KR1 = {} dct_C1 = {} dct_PL1 = {} dct_DK1 = {} dct_ER1 = {} dct_RO1 = {} dct_SE1 = {} dct_ZYP1 = {} dct_NO1 = {} dct_KAS1 = {} dct_UNG1 = {} dct_ISR1 = {} dct_BU1 = {} dct_WER1 = {} dct_SLO1 = {} dct_SL1 = {} dct_AZ1 = {} dct_BOS1 = {} dct_MAL1 = {} dct_ALB1 = {} dct_MAZ1 = {} dct_ARM1 = {} dct_GE1N = {} dct_FI1 = {} dct_MO1N = {} dct_LET1 = {} dct_MNE1 = {} dct_KO1 = {} dct_LUX1 = {} dct_LI1 = {} dct_EST1 = {} dct_IS1 = {} dct_WAL1 = {} dct_FARO = {} dct_AND1 = {} dct_IR1 = {} dct_NIR1 = {} dct_SMR1 = {} dct_GI1 = {} dct_GB2 = {} dct_ES2 = {} dct_IT2 = {} dct_FR2 = {} dct_L2 = {} dct_NL2 = {} dct_TR2 = {} dct_PO2 = {} dct_A2 = {} dct_C2 = {} dct_BE2 = {} dct_GRS2 = {} dct_RO2 = {} dct_PL2 = {} dct_UN2 = {} for m in range(2007,2020): dct_GB1['df_GB1_%s' % m] = pd.DataFrame() dct_IT1['df_IT1_%s' % m] = pd.DataFrame() dct_ES1['df_ES1_%s' % m] = pd.DataFrame() dct_L1['df_L1_%s' % m] = pd.DataFrame() dct_FR1['df_FR1_%s' % m] = pd.DataFrame() dct_PO1['df_PO1_%s' % m] = pd.DataFrame() dct_NL1['df_NL1_%s' % m] = pd.DataFrame() dct_TR1['df_TR1_%s' % m] = pd.DataFrame() dct_BE1['df_BE1_%s' % m] = pd.DataFrame() dct_UKR1['df_UKR1_%s' % m] = pd.DataFrame() dct_A1['df_A1_%s' % m] = pd.DataFrame() dct_GR1['df_GR1_%s' % m] = pd.DataFrame() dct_TS1['df_TS1_%s' % m] = pd.DataFrame() dct_SC1['df_SC1_%s' % m] = pd.DataFrame() dct_KR1['df_KR1_%s' % m] = pd.DataFrame() dct_C1['df_C1_%s' % m] = pd.DataFrame() dct_PL1['df_PL1_%s' % m] = pd.DataFrame() dct_DK1['df_DK1_%s' % m] = pd.DataFrame() dct_ER1['df_ER1_%s' % m] = pd.DataFrame() dct_RO1['df_RO1_%s' % m] = pd.DataFrame() dct_SE1['df_SE1_%s' % m] = pd.DataFrame() dct_ZYP1['df_ZYP1_%s' % m] = pd.DataFrame() dct_NO1['df_NO1_%s' % m] = pd.DataFrame() dct_KAS1['df_KAS1_%s' % m] = pd.DataFrame() dct_UNG1['df_UNG1_%s' % m] = pd.DataFrame() dct_ISR1['df_ISR1_%s' % m] = pd.DataFrame() dct_BU1['df_BU1_%s' % m] = pd.DataFrame() dct_WER1['df_WER1_%s' % m] = pd.DataFrame() dct_SLO1['df_SLO1_%s' % m] = pd.DataFrame() dct_SL1['df_SL1_%s' % m] = pd.DataFrame() dct_AZ1['df_AZ1_%s' % m] = pd.DataFrame() dct_BOS1['df_BOS1_%s' % m] = pd.DataFrame() dct_MAL1['df_MAL1_%s' % m] = pd.DataFrame() dct_ALB1['df_ALB1_%s' % m] = pd.DataFrame() dct_MAZ1['df_MAZ1_%s' % m] = pd.DataFrame() dct_ARM1['df_ARM1_%s' % m] = pd.DataFrame() dct_GE1N['df_GE1N_%s' % m] = pd.DataFrame() dct_FI1['df_FI1_%s' % m] = pd.DataFrame() dct_MO1N['df_MO1N_%s' % m] = pd.DataFrame() dct_LET1['df_LET1_%s' % m] = pd.DataFrame() dct_MNE1['df_MNE1_%s' % m] = pd.DataFrame() dct_KO1['df_KO1_%s' % m] = pd.DataFrame() dct_LUX1['df_LUX1_%s' % m] = pd.DataFrame() dct_LI1['df_LI1_%s' % m] = pd.DataFrame() dct_EST1['df_EST1_%s' % m] = 
pd.DataFrame() dct_IS1['df_IS1_%s' % m] = pd.DataFrame() dct_WAL1['df_WAL1_%s' % m] = pd.DataFrame() dct_FARO['df_FARO_%s' % m] = pd.DataFrame() dct_AND1['df_AND1_%s' % m] = pd.DataFrame() dct_IR1['df_IR1_%s' % m] = pd.DataFrame() dct_NIR1['df_NIR1_%s' % m] = pd.DataFrame() dct_SMR1['df_SMR1_%s' % m] = pd.DataFrame() dct_GI1['df_GI1_%s' % m] = pd.DataFrame() dct_GB2['df_GB2_%s' % m] = pd.DataFrame() dct_ES2['df_ES2_%s' % m] = pd.DataFrame() dct_IT2['df_IT2_%s' % m] = pd.DataFrame() dct_FR2['df_FR2_%s' % m] = pd.DataFrame() dct_L2['df_L2_%s' % m] = pd.DataFrame() dct_NL2['df_NL2_%s' % m] = pd.DataFrame() dct_TR2['df_TR2_%s' % m] = pd.DataFrame() dct_PO2['df_PO2_%s' % m] = pd.DataFrame() dct_A2['df_A2_%s' % m] = pd.DataFrame() dct_C2['df_C2_%s' % m] = pd.DataFrame() dct_BE2['df_BE2_%s' % m] = pd.DataFrame() dct_GRS2['df_GRS2_%s' % m] = pd.DataFrame() dct_RO2['df_RO2_%s' % m] = pd.DataFrame() dct_PL2['df_PL2_%s' % m] = pd.DataFrame() dct_UN2['df_UN2_%s' % m] = pd.DataFrame() #list of URL bases for each league league_urls = (['https://www.transfermarkt.com/premier-league/startseite/wettbewerb/GB1/plus/?saison_id=', 'https://www.transfermarkt.com/serie-a/startseite/wettbewerb/IT1/plus/?saison_id=', 'https://www.transfermarkt.com/laliga/startseite/wettbewerb/ES1/plus/?saison_id=', 'https://www.transfermarkt.com/bundesliga/startseite/wettbewerb/L1/plus/?saison_id=', 'https://www.transfermarkt.com/ligue-1/startseite/wettbewerb/FR1/plus/?saison_id=', 'https://www.transfermarkt.com/liga-nos/startseite/wettbewerb/PO1/plus/?saison_id=', 'https://www.transfermarkt.com/eredivisie/startseite/wettbewerb/NL1/plus/?saison_id=', 'https://www.transfermarkt.com/super-lig/startseite/wettbewerb/TR1/plus/?saison_id=', 'https://www.transfermarkt.com/jupiler-pro-league/startseite/wettbewerb/BE1/plus/?saison_id=', 'https://www.transfermarkt.com/premier-liga/startseite/wettbewerb/UKR1/plus/?saison_id=', 'https://www.transfermarkt.com/bundesliga/startseite/wettbewerb/A1/plus/?saison_id=', 'https://www.transfermarkt.com/super-league-1/startseite/wettbewerb/GR1/plus/?saison_id=', 'https://www.transfermarkt.com/fortuna-liga/startseite/wettbewerb/TS1/plus/?saison_id=', 'https://www.transfermarkt.com/scottish-premiership/startseite/wettbewerb/SC1/plus/?saison_id=', 'https://www.transfermarkt.com/1-hnl/startseite/wettbewerb/KR1/plus/?saison_id=', 'https://www.transfermarkt.com/super-league/startseite/wettbewerb/C1/plus/?saison_id=', 'https://www.transfermarkt.com/pko-ekstraklasa/startseite/wettbewerb/PL1/plus/?saison_id=', 'https://www.transfermarkt.com/superligaen/startseite/wettbewerb/DK1/plus/?saison_id=', 'https://www.transfermarkt.com/super-liga-srbije/startseite/wettbewerb/SER1/plus/?saison_id=', 'https://www.transfermarkt.com/liga-1/startseite/wettbewerb/RO1/plus/?saison_id=', 'https://www.transfermarkt.com/allsvenskan/startseite/wettbewerb/SE1/plus/?saison_id=', 'https://www.transfermarkt.com/protathlima-cyta/startseite/wettbewerb/ZYP1/plus/?saison_id=', 'https://www.transfermarkt.com/eliteserien/startseite/wettbewerb/NO1/plus/?saison_id=', 'https://www.transfermarkt.com/premier-liga/startseite/wettbewerb/KAS1/plus/?saison_id=', 'https://www.transfermarkt.com/nemzeti-bajnoksag/startseite/wettbewerb/UNG1/plus/?saison_id=', 'https://www.transfermarkt.com/ligat-haal/startseite/wettbewerb/ISR1/plus/?saison_id=', 'https://www.transfermarkt.com/efbet-liga/startseite/wettbewerb/BU1/plus/?saison_id=', 'https://www.transfermarkt.com/vysheyshaya-liga/startseite/wettbewerb/WER1/plus/?saison_id=', 
'https://www.transfermarkt.com/fortuna-liga/startseite/wettbewerb/SLO1/plus/?saison_id=', 'https://www.transfermarkt.com/prva-liga/startseite/wettbewerb/SL1/plus/?saison_id=', 'https://www.transfermarkt.com/premyer-liqa/startseite/wettbewerb/AZ1/plus/?saison_id=', 'https://www.transfermarkt.com/premijer-liga/startseite/wettbewerb/BOS1/plus/?saison_id=', 'https://www.transfermarkt.com/premier-league/startseite/wettbewerb/MAL1/plus/?saison_id=', 'https://www.transfermarkt.com/kategoria-superiore/startseite/wettbewerb/ALB1/plus/?saison_id=', 'https://www.transfermarkt.com/prva-makedonska-fudbalska-liga/startseite/wettbewerb/MAZ1/plus/?saison_id=', 'https://www.transfermarkt.com/bardzragujn-khumb/startseite/wettbewerb/ARM1/plus/?saison_id=', 'https://www.transfermarkt.com/crystalbet-erovnuli-liga/startseite/wettbewerb/GE1N/plus/?saison_id=', 'https://www.transfermarkt.com/veikkausliiga/startseite/wettbewerb/FI1/plus/?saison_id=', 'https://www.transfermarkt.com/divizia-nationala/startseite/wettbewerb/MO1N/plus/?saison_id=', 'https://www.transfermarkt.com/virsliga/startseite/wettbewerb/LET1/plus/?saison_id=', 'https://www.transfermarkt.com/telekom-1-cfl/startseite/wettbewerb/MNE1/plus/?saison_id=', 'https://www.transfermarkt.com/superliga-e-kosoves/startseite/wettbewerb/KO1/plus/?saison_id=', 'https://www.transfermarkt.com/bgl-ligue/startseite/wettbewerb/LUX1/plus/?saison_id=', 'https://www.transfermarkt.com/a-lyga/startseite/wettbewerb/LI1/plus/?saison_id=', 'https://www.transfermarkt.com/premium-liiga/startseite/wettbewerb/EST1/plus/?saison_id=', 'https://www.transfermarkt.com/pepsi-max-deild/startseite/wettbewerb/IS1/plus/?saison_id=', 'https://www.transfermarkt.com/cymru-premier/startseite/wettbewerb/WAL1/plus/?saison_id=', 'https://www.transfermarkt.com/betri-deildin/startseite/wettbewerb/FARO/plus/?saison_id=', 'https://www.transfermarkt.com/primera-divisio/startseite/wettbewerb/AND1/plus/?saison_id=', 'https://www.transfermarkt.com/premier-league/startseite/wettbewerb/IR1/plus/?saison_id=', 'https://www.transfermarkt.com/danske-bank-premiership/startseite/wettbewerb/NIR1/plus/?saison_id=', 'https://www.transfermarkt.com/campionato-sammarinese/startseite/wettbewerb/SMR1/plus/?saison_id=', 'https://www.transfermarkt.com/gibraltar-national-league/startseite/wettbewerb/GI1/plus/?saison_id=', 'https://www.transfermarkt.com/championship/startseite/wettbewerb/GB2/plus/?saison_id=', 'https://www.transfermarkt.com/laliga2/startseite/wettbewerb/ES2/plus/?saison_id=', 'https://www.transfermarkt.com/serie-b/startseite/wettbewerb/IT2/plus/?saison_id=', 'https://www.transfermarkt.com/ligue-2/startseite/wettbewerb/FR2/plus/?saison_id=', 'https://www.transfermarkt.com/2-bundesliga/startseite/wettbewerb/L2/plus/?saison_id=', 'https://www.transfermarkt.com/keuken-kampioen-divisie/startseite/wettbewerb/NL2/plus/?saison_id=', 'https://www.transfermarkt.com/1-lig/startseite/wettbewerb/TR2/plus/?saison_id=', 'https://www.transfermarkt.com/liga-portugal-2/startseite/wettbewerb/PO2/plus/?saison_id=', 'https://www.transfermarkt.com/2-liga/startseite/wettbewerb/A2/plus/?saison_id=', 'https://www.transfermarkt.com/challenge-league/startseite/wettbewerb/C2/plus/?saison_id=', 'https://www.transfermarkt.com/proximus-league/startseite/wettbewerb/BE2/plus/?saison_id=', 'https://www.transfermarkt.com/super-league-2/startseite/wettbewerb/GRS2/plus/?saison_id=', 'https://www.transfermarkt.com/liga-2/startseite/wettbewerb/RO2/plus/?saison_id=', 
'https://www.transfermarkt.com/fortuna-1-liga/startseite/wettbewerb/PL2/plus/?saison_id=', 'https://www.transfermarkt.com/nemzeti-bajnoksag-ii-/startseite/wettbewerb/UN2/plus/?saison_id=']) This is my setup with all the URL's to iterate over each season. Then I am trying to pull the data using the code below: #Scraping part #The first loop is for each url in our URL-list for m in range(0, len(league_urls)): time.sleep(0.5) #The second loop is for each year we want to scrape for n in range(2007,2020): time.sleep(0.5) df_soccer1 = None url = league_urls[m] + str(n) headers = {"User-Agent":"Mozilla/5.0"} response = requests.get(url, headers=headers, verify=False) time.sleep(0.5) soup = BeautifulSoup(response.text, 'html.parser') #Table 1 with information about the value table = soup.find("table", {"class" : "items"}) team = [] squad = [] age = [] foreigners = [] total_market_value = [] average_market_value = [] for row in table.findAll('tr'): try: col = row.findAll('td') team.append(col[2].text) squad.append(col[3].text) age.append(col[4].text) foreigners.append(col[5].text) total_market_value.append(col[6].text) average_market_value.append(col[7].text) except: pass team = [elem.replace('\n','').replace('\xa0','').strip() for elem in team] #Table 2 with information about placement, goals and points df_soccer2 = None table2 = soup.findAll("div", {"class" : "responsive-table"}) team2 = [] place = [] matches = [] difference = [] pts = [] if len(table2) <= 2: for row in table2[1].findAll('tr'): try: col = row.findAll('td') team2.append(col[2].text) place.append(col[0].text) matches.append(col[3].text) difference.append(col[4].text) pts.append(col[5].text) except: pass else: #Sometimes the information you need is in another table for row in table2[2].findAll('tr'): try: col = row.findAll('td') team2.append(col[2].text) place.append(col[0].text) matches.append(col[3].text) difference.append(col[4].text) pts.append(col[5].text) except: pass team2 = [elem.replace('\n','').replace('\xa0','').strip() for elem in team2] df_soccer1 = pd.DataFrame({'Team': team[1:], 'Season': n, 'Squad': squad[1:], 'Age': age[1:], 'Foreigners': foreigners[1:], 'Total Value': total_market_value[1:], 'Average value': average_market_value[1:]}) df_soccer2 = pd.DataFrame({'Team': team2, 'Place': place, 'Matches': matches, 'Difference': difference, 'Points': pts}) #Store all dictionaries in a list dct_all = [dct_GB1,dct_IT1,dct_ES1,dct_L1,dct_FR1,dct_PO1,dct_NL1,dct_TR1,dct_BE1,dct_UKR1,dct_A1, dct_GR1,dct_TS1,dct_SC1,dct_KR1,dct_C1,dct_PL1,dct_DK1,dct_ER1,dct_RO1,dct_SE1,dct_ZYP1,dct_NO1, dct_KAS1,dct_UNG1,dct_ISR1,dct_BU1,dct_WER1,dct_SLO1,dct_SL1,dct_AZ1,dct_BOS1,dct_MAL1,dct_ALB1, dct_MAZ1,dct_ARM1,dct_GE1N,dct_FI1,dct_MO1N,dct_LET1,dct_MNE1,dct_KO1,dct_LUX1,dct_LI1,dct_EST1, dct_IS1,dct_WAL1,dct_FARO,dct_AND1,dct_IR1,dct_NIR1,dct_SMR1,dct_GI1,dct_GB2,dct_ES2,dct_IT2, dct_FR2,dct_L2,dct_NL2,dct_TR2,dct_PO2,dct_A2,dct_C2,dct_BE2,dct_GRS2,dct_RO2,dct_PL2,dct_UN2] #Merge df_soccer1 and df_soccer2 for each season dct_all[l]['df_bl_%s' % n] = pd.merge(df_soccer1, df_soccer2, how="inner", left_on="Team", right_on="Team") The problem is I get an error: ValueError: arrays must all be same length I have figure out I get this since for some of the teams/countries not every year is available. Is there a solution to this problem where I can scrape all the data and it can just put blanks in for the missing years?' 
EDIT: Here is the error I get -------------------------------------------------------------------------- ValueError Traceback (most recent call last) <ipython-input-7-534e3145bb5f> in <module> 50 51 ---> 52 df_soccer1 = pd.DataFrame({'Team': team[1:], 'Season': n, 'Squad': squad[1:], 'Age': age[1:], 'Foreigners': foreigners[1:], 53 'Total Value': total_market_value[1:], 'Average value': average_market_value[1:]}) 54 ~\anaconda3\lib\site-packages\pandas\core\frame.py in __init__(self, data, index, columns, dtype, copy) 527 528 elif isinstance(data, dict): --> 529 mgr = init_dict(data, index, columns, dtype=dtype) 530 elif isinstance(data, ma.MaskedArray): 531 import numpy.ma.mrecords as mrecords ~\anaconda3\lib\site-packages\pandas\core\internals\construction.py in init_dict(data, index, columns, dtype) 285 arr if not is_datetime64tz_dtype(arr) else arr.copy() for arr in arrays 286 ] --> 287 return arrays_to_mgr(arrays, data_names, index, columns, dtype=dtype) 288 289 ~\anaconda3\lib\site-packages\pandas\core\internals\construction.py in arrays_to_mgr(arrays, arr_names, index, columns, dtype, verify_integrity) 78 # figure out the index, if necessary 79 if index is None: ---> 80 index = extract_index(arrays) 81 else: 82 index = ensure_index(index) ~\anaconda3\lib\site-packages\pandas\core\internals\construction.py in extract_index(data) 399 lengths = list(set(raw_lengths)) 400 if len(lengths) > 1: --> 401 raise ValueError("arrays must all be same length") 402 403 if have_dicts: ValueError: arrays must all be same length A: It'd be helpful if you'd specify what part of your code throws the error. I'm assuming its the part where you initialize df_soccer1. Your problem is that try: executes until it doesn't, which means if there are only 5 <td> in a <tr>, text is appended to team, squad and age, then an error is thrown because you are iterating over more <td> than there are and nothing is appended to foreigners and the other two data points. This means your arrays are of uneven length. Following code seperates the steps, it first extracs the text from all <td> and only if all of them were returned, the information is appended, else '' is appended. for row in table.findAll('tr'): try: col = row.findAll('td') team_ = col[2].text squad_ = col[3].text age_ = col[4].text foreigners_ = col[5].text total_ = col[6].text average_ = col[7].text team.append(team_) squad.append(squad_) age.append(age_) foreigners.append(foreigners_) total_market_value.append(total_) average_market_value.append(average_) except: team.append('') squad.append('') age.append('') foreigners.append('') total_market_value.append('') average_market_value.append('') You could also use try/except (or more efficiently if/else) for each of the six data points individually, you need to be carful however if the rows that don't work properly really contain the information you're interested in.
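Building on the answer, one defensive pattern (not from the question or the answer itself, just an illustration with made-up data) is to pad every column list to a common length right before constructing the DataFrame, so a short or malformed row can never trigger the ValueError:

import pandas as pd

def pad_to_same_length(*lists, fill=''):
    """Extend every list in place to the length of the longest one."""
    longest = max(len(lst) for lst in lists)
    for lst in lists:
        lst.extend([fill] * (longest - len(lst)))

# dummy stand-ins for the scraped columns of one season
team = ['Header row', 'Chelsea FC', 'Arsenal FC']
squad = ['Header row', '25', '27']
age = ['Header row', '26.4']  # one value missing, so this list is shorter

pad_to_same_length(team, squad, age)
df = pd.DataFrame({'Team': team[1:], 'Squad': squad[1:], 'Age': age[1:]})
print(df)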
{ "redpajama_set_name": "RedPajamaStackExchange" }
4,863
The Puch 800 is a motorcycle produced by the Austrian Steyr Daimler Puch AG. At 800 cc it was the largest-displacement motorcycle the Steyr Daimler Puch AG ever built. The Puch 800, with its four-stroke four-cylinder boxer engine (in fact a 170° V engine), was built from 1936 to 1938. Owing to strong competition from BMW and Zündapp, however, production was discontinued in 1938. 550 units were produced, most of them for the army.

Technical data. Literature: Reinhard Welz, Automobile und Motorräder, Vermittler Verlag, Mannheim 2003, ISBN 978-3-93708179-3, p. 148; Erwin Tragatsch: Motorräder: Berühmte Konstruktionen, Bielefelder Verlagsanstalt, Bielefeld 1978, p. 136. See also: Puch 250 SGS, Puch 250 TF, Puch 500 (motorcycle). References: https://www.oldtimerteam.at/portfolio/puch-800/ External links: Puch 800 with its four-cylinder boxer at gasgriffsalat.com. Categories: motorcycle model; motorcycle model with boxer engine.
{ "redpajama_set_name": "RedPajamaWikipedia" }
8,574
require 'shellwords'

module Lotus
  module Model
    module Adapters
      module Sql
        module Consoles
          class Mysql
            def initialize(uri)
              @uri = uri
            end

            def connection_string
              str = 'mysql'
              str << host
              str << database
              str << port if port
              str << username if username
              str << password if password
              str
            end

            private

            def host
              " -h #{@uri.host}"
            end

            def database
              " -D #{@uri.path.sub(/^\//, '')}"
            end

            def port
              " -P #{@uri.port}" if @uri.port
            end

            def username
              " -u #{@uri.user}" if @uri.user
            end

            def password
              " -p #{@uri.password}" if @uri.password
            end
          end
        end
      end
    end
  end
end
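For context, a quick usage sketch follows. The URI string is made up; it just needs to parse into something that responds to host, path, port, user and password, as the class above expects, and the require_relative path is an assumption about where the file is saved:

require 'uri'
require_relative 'mysql' # assumes the file above is saved as mysql.rb

uri = URI.parse('mysql2://lotus_user:secret@localhost:3306/bookshelf_development')
console = Lotus::Model::Adapters::Sql::Consoles::Mysql.new(uri)

puts console.connection_string
# => "mysql -h localhost -D bookshelf_development -P 3306 -u lotus_user -p secret"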
{ "redpajama_set_name": "RedPajamaGithub" }
6,384
Big, bold, blue and architectural pretty much sums up the fantastic posture of this quick growing South African native. Deeply divided in an exotic feather-like fashion, the glaucous steely blue-green leaflets are sharply toothed, while gracefully curving downward. Erect and thick gray-green stems host the highly textured foliage that can grow up to 18 in. long and makes an enduring addition to arrangements. Elevated above the tropical-style foundation, intriguing one ft. long terminal spikes showcase deep brick-red bracts with green stamens, later followed by ornamental papery seed pods. Evergreen in warmer climates and choice for a container in colder areas, the Honey Bush grows into a spreading subshrub, sculpting a dramatic specimen if given room to move, average moisture, well drained soil and a heavy winter mulch. Size: 8' 0" – 10' 0" high x 6' 0" – 8' 0" wide.

Big, bold, blue and architectural pretty much sums up the fantastic posture of this Melianthus selection named after Seattle plantsman Steve Antonow. Deeply divided in an exotic featherlike fashion, the glaucous, nearly iridescent, blue leaflets are prominently toothed, while gracefully curving downward. Erect and thick, gray-green stems infused with rosy purple hues host the highly textured foliage that can grow up to 18 in. long, making an enduring addition to arrangements. Elevated above the tropical-style foundation, intriguing one ft. long terminal spikes showcase deep brick-red bracts with green stamens, later followed by ornamental papery seed pods. Evergreen in warmer climates and choice for a container in colder areas, the Honey Bush grows into a spreading subshrub, sculpting a dramatic, quick growing specimen if given room to move, average moisture, well drained soil and a heavy winter mulch.
{ "redpajama_set_name": "RedPajamaC4" }
2,523
(Saitama, Japan; 16 March 1982) is a Japanese actress.

Career. Mayu appeared in numerous productions of the tokusatsu genre, her most notable role being Succubus Hells and Camille in the Super Sentai series Tokusō Sentai Dekaranger, a role she later reprised in the film Tokusō Sentai Dekaranger vs. Abaranger. She also starred in Kamen Rider Hibiki as Kasumi Tachibana. Mayu is an active amateur marathon runner and triathlete, having competed in the Tokyo Marathon as well as in Hawaii, Paris and Australia. She completed the 2011 Lavaman Triathlon at Anaehoomalu Bay, Hawaii.

Filmography:
2004 Tokusō Sentai Dekaranger (TV series): Camille/Succubus Hells (episodes 21 to 23)
2005 Gekijouban Kamen Rider Hibiki to 7-nin no senki Kamen Rider Hibiki: Asumu Henshin! You can be an Oni, too!! (film): Kasumi Tachibana/Kazue
2005 Tokusou Sentai Dekaranger vs. Abaranger (film): Succubus Hells/Camille
2005 Kamen Rider Hibiki (TV series): Kasumi Tachibana
2007 Hatachi no koibito (TV series): Miki Takeuchi (episodes 1, 4, 6 to 10)
2007 Speed Master H-code (TV series): Rena Nishikido
2007 Ultra Galaxy Mega Monster Battle: Kate
2008 Shaolin Girl: Mayu Yamada – herself
2008 Ultra Galaxy Mega Monster Battle: Never Ending Odyssey: Kate
2009 The Unbroken
2009 High-Kick Girl: Hien – herself
2010 Sayonara Itsuka

References. External links: Mayo Gamo Official Blog; Mayo Gamo Official Web Site; Official Web Site. Categories: actresses of Japan; Super Sentai cast
{ "redpajama_set_name": "RedPajamaWikipedia" }
4,099
Tag: Gotye

Philip Philips Earns a Canadian iTunes No.1

This year's American Idol winner Philip Philips earns a No.1 position on the current Canadian iTunes Store chart (upd. 5/27). Officially released as a download on May 23, it is set to bow on the Billboard Hot 100 in its next edition. In the US, Philip Philips has not been able to unseat Carly Rae Jepsen's 'Call Me Maybe,' which tops the iTunes chart ahead of Gotye's 'Somebody That I Used to Know.' Born Phillip LaDon Phillips Jr. in 1990, the 21-year-old singer hails from Leesburg, Georgia. Philips won American Idol against Jessica Sanchez on the May 23 final – a total of 132 million votes were cast. On the final show, Philip Philips performed covers of Ben E. King's 'Stand by Me' and Billy Joel's 'Movin' Out (Anthony's Song)' in addition to 'Home' (a song originally written for Greg Holden, one of the co-writers). Back to the charts, Carly Rae Jepsen and Gotye rule the Billboard Digital Song Chart as well, where Pitbull's 'Back in Time' races 15-11. It's the highest current Billboard position for the cut, which also sits at No.16 on the Hot 100 and currently charts on the Pop, Latin, Radio and Rap songs tallies. The theme song from Men in Black III fares better on the US iTunes Store chart, where it currently sits at No.4 after Philip Philips' 'Home.' On the Billboard On-Demand Songs chart, Gotye rules ahead of the Maroon 5/Wiz Khalifa collaboration 'Payphone.' According to MTV News, Philips had wanted to perform a song of his own, yet he was restricted by time.

Posted May 28, 2012 by Fredrik Gustafsson. Category: Chart Spotlight. Tags: Carly Rae Jepsen, Gotye, Jessica Sanchez, Maroon 5, Philip Philips, Pitbull

David Guetta Tops UK singles tally where Gotye and Lana Del Rey are on the Rise

David Guetta and Sia top the UK singles tally this week with 'Titanium.' The track is a worldwide hit, and I've already blogged about Mary J. Blige recording a demo version of it. It also turns out that Katy Perry was offered the track as well. Guetta's album "Nothing But the Beat" moves 16-9 on the album chart, where Lana Del Rey's "Born to Die" reigns. I've already covered the highest new entry, Alyssa Reid, whose 'Alone Again' bows at No.2. Gotye's 'Somebody That I Used to Know' moves 7-3 in its fifth week. The Belgian-Australian singer managed to score one of last year's biggest hits across the globe. Its success began last July, and a cover by Canadian group Walk Off Earth has also spawned widespread success, as its video shows five people playing the same guitar simultaneously. It recently cracked the top twenty in Sweden. Lana Del Rey's single 'Born to Die' (No.9) cracks the top ten, while her album bows atop the album chart. The preceding single 'Video Games' moves 20-17 in its 16th week. The only other new entry this week is German dance group R.I.O., whose 'Turn This Club Around' bows at No.36. It has already topped the singles chart in Switzerland, hitting the top ten in Austria and Germany as well. It was released back in September 2011. It's the first entry from R.I.O. ever in the UK.

Posted February 5, 2012 / February 26, 2012 by Fredrik Gustafsson. Category: Chart Spotlight. Tags: Alyssa Reid, David Guetta, Gotye, Katy Perry, Mary J. Blige, R.I.O, Sia, Walk Off Earth
{ "redpajama_set_name": "RedPajamaCommonCrawl" }
4,262
In practicing more years of accounting than I would like to divulge, I have continually cautioned against the "misuse" of Excel spreadsheets. My team will tell you that I am always pushing them to use the system, and not "work around it by going to Excel". But what if you believe your best system IS Excel?

I have encountered so many franchise owners and managers who live by the data they are plugging into their Excel spreadsheets. As a matter of fact, I received an email a few weeks ago from a potential client with "our new and improved tool" for managing stores. You guessed it - it was an Excel spreadsheet. And not just any old spreadsheet - lots of formulas, colors, calculations, links - it is very fancy. I went through the tabs and did my usual due diligence, internally questioning why anyone would ever institute something like this as a daily tool or required practice for their restaurant store managers. You get the picture - this list could go on for eternal blog posts. Ummm... isn't this data already captured in the POS?

From my perspective, this spreadsheet is actually not providing any valuable information, but draining the organization of efficiency, accuracy, and resources that could be devoted to getting good information. The goal of managing your store is to increase sales, decrease costs, and deliver great customer service, all while trying to turn a profit. Is transferring numbers from your POS or paper invoices to a spreadsheet really the best way to do that given today's wealth of technology? The short answer is NO.

If you are one of those owners, store managers, or others who are still managing the operations of your stores using Excel, you need to recognize there is a better way that produces far superior results than your Excel spreadsheets. It goes without saying that Excel sheets are full of errors: even if one tab is right, statistics show that many of the rest (in this example, 23!) likely are not. How can you expect someone to manually enter that much data in the middle of managing a store and get it right? Or not accidentally change a formula? Or wipe out a calculation cell? By the time they are done, assuming it's correct, and it gets to whoever needs it to make decisions, the crucial moment has passed.

The ultimate question you need to answer: what value is this spreadsheet bringing to you daily? Hourly? REAL-TIME? I can answer that for you - nothing comparable to a real-time solution that manages all of these metrics as well as your financial reporting for you. If you are using Excel to manage ANY part of your daily restaurant business, including daily deposits, let's talk. We can make it go away.
{ "redpajama_set_name": "RedPajamaC4" }
5,273
{"url":"https:\/\/www.aimsciences.org\/article\/doi\/10.3934\/dcdsb.2013.18.865","text":"American Institute of Mathematical Sciences\n\nJune\u00a0 2013,\u00a018(4):\u00a0865-889. doi:\u00a010.3934\/dcdsb.2013.18.865\n\nDesigning proliferating cell population models with functional targets for control by anti-cancer drugs\n\n 1 INRIA Paris-Rocquencourt, Domaine de Voluceau, Rocquencourt, B.P. 105, F-78153 Le Chesnay Cedex\n\nReceived\u00a0 July 2012 Revised\u00a0 September 2012 Published\u00a0 February 2013\n\nWe review the main types of mathematical models that have been designed to represent and predict the evolution of a cell population under the action of anti-cancer drugs that are in use in the clinic, with effects on healthy and cancer tissue growth, which from a cell functional point of view are classically divided between proliferation, death and differentiation''. We focus here on the choices of the drug targets in these models, aiming at showing that they must be linked in each case to a given therapeutic application. We recall some analytical results that have been obtained in using models of proliferation in cell populations with control in recent years. We present some simulations performed when no theoretical result is available and we state some open problems. In view of clinical applications, we propose possible ways to design optimal therapeutic strategies by using combinations of drugs, cytotoxic, cytostatic, or redifferentiating agents, depending on the type of cancer considered, acting on different targets at the level of cell populations.\nCitation: Fr\u00e9d\u00e9rique Billy, Jean Clairambault. Designing proliferating cell population models with functional targets for control by anti-cancer drugs. Discrete & Continuous Dynamical Systems - B, 2013, 18 (4) : 865-889. doi: 10.3934\/dcdsb.2013.18.865\nReferences:\n\nshow all references\n\nReferences:\n [1] Fr\u00e9d\u00e9rique Billy, Jean Clairambault, Franck Delaunay, C\u00e9line Feillet, Natalia Robert. Age-structured cell population model to study the influence of growth factors on cell cycle dynamics. Mathematical Biosciences & Engineering, 2013, 10 (1) : 1-17. doi: 10.3934\/mbe.2013.10.1 [2] Paolo Ubezio. Unraveling the complexity of cell cycle effects of anticancer drugs in cell populations. Discrete & Continuous Dynamical Systems - B, 2004, 4 (1) : 323-335. doi: 10.3934\/dcdsb.2004.4.323 [3] Ahuod Alsheri, Ebraheem O. Alzahrani, Asim Asiri, Mohamed M. El-Dessoky, Yang Kuang. Tumor growth dynamics with nutrient limitation and cell proliferation time delay. Discrete & Continuous Dynamical Systems - B, 2017, 22 (10) : 3771-3782. doi: 10.3934\/dcdsb.2017189 [4] Ali Ashher Zaidi, Bruce Van Brunt, Graeme Charles Wake. A model for asymmetrical cell division. Mathematical Biosciences & Engineering, 2015, 12 (3) : 491-501. doi: 10.3934\/mbe.2015.12.491 [5] Cristina Anton, Alan Yong. Stochastic dynamics and survival analysis of a cell population model with random perturbations. Mathematical Biosciences & Engineering, 2018, 15 (5) : 1077-1098. doi: 10.3934\/mbe.2018048 [6] Tomas Alarcon, Philipp Getto, Anna Marciniak-Czochra, Maria dM Vivanco. A model for stem cell population dynamics with regulated maturation delay. Conference Publications, 2011, 2011 (Special) : 32-43. doi: 10.3934\/proc.2011.2011.32 [7] Fadia Bekkal-Brikci, Giovanna Chiorino, Khalid Boushaba. G1\/S transition and cell population dynamics. Networks & Heterogeneous Media, 2009, 4 (1) : 67-90. doi: 10.3934\/nhm.2009.4.67 [8] H. Thomas Banks, W. 
Clayton Thompson, Cristina Peligero, Sandra Giest, Jordi Argilaguet, Andreas Meyerhans. A division-dependent compartmental model for computing cell numbers in CFSE-based lymphocyte proliferation assays. Mathematical Biosciences & Engineering, 2012, 9 (4) : 699-736. doi: 10.3934\/mbe.2012.9.699 [9] Erica M. Rutter, Yang Kuang. Global dynamics of a model of joint hormone treatment with dendritic cell vaccine for prostate cancer. Discrete & Continuous Dynamical Systems - B, 2017, 22 (3) : 1001-1021. doi: 10.3934\/dcdsb.2017050 [10] Qi Wang, Lifang Huang, Kunwen Wen, Jianshe Yu. The mean and noise of stochastic gene transcription with cell division. Mathematical Biosciences & Engineering, 2018, 15 (5) : 1255-1270. doi: 10.3934\/mbe.2018058 [11] David S. Ross, Christina Battista, Antonio Cabal, Khamir Mehta. Dynamics of bone cell signaling and PTH treatments of osteoporosis. Discrete & Continuous Dynamical Systems - B, 2012, 17 (6) : 2185-2200. doi: 10.3934\/dcdsb.2012.17.2185 [12] Yangjin Kim, Hans G. Othmer. Hybrid models of cell and tissue dynamics in tumor growth. Mathematical Biosciences & Engineering, 2015, 12 (6) : 1141-1156. doi: 10.3934\/mbe.2015.12.1141 [13] Yuchi Qiu, Weitao Chen, Qing Nie. Stochastic dynamics of cell lineage in tissue homeostasis. Discrete & Continuous Dynamical Systems - B, 2019, 24 (8) : 3971-3994. doi: 10.3934\/dcdsb.2018339 [14] Yangjin Kim, Soyeon Roh. A hybrid model for cell proliferation and migration in glioblastoma. Discrete & Continuous Dynamical Systems - B, 2013, 18 (4) : 969-1015. doi: 10.3934\/dcdsb.2013.18.969 [15] Yu-Hsien Chang, Guo-Chin Jau. The behavior of the solution for a mathematical model for analysis of the cell cycle. Communications on Pure & Applied Analysis, 2006, 5 (4) : 779-792. doi: 10.3934\/cpaa.2006.5.779 [16] Katarzyna Pich\u00f3r, Ryszard Rudnicki. Applications of stochastic semigroups to cell cycle models. Discrete & Continuous Dynamical Systems - B, 2019, 24 (5) : 2365-2381. doi: 10.3934\/dcdsb.2019099 [17] Jinliang Wang, Jiying Lang, Yuming Chen. Global dynamics of an age-structured HIV infection model incorporating latency and cell-to-cell transmission. Discrete & Continuous Dynamical Systems - B, 2017, 22 (10) : 3721-3747. doi: 10.3934\/dcdsb.2017186 [18] Mostafa Adimy, Laurent Pujo-Menjouet. Asymptotic behavior of a singular transport equation modelling cell division. Discrete & Continuous Dynamical Systems - B, 2003, 3 (3) : 439-456. doi: 10.3934\/dcdsb.2003.3.439 [19] Janet Dyson, Rosanna Villella-Bressan, G. F. Webb. The evolution of a tumor cord cell population. Communications on Pure & Applied Analysis, 2004, 3 (3) : 331-352. doi: 10.3934\/cpaa.2004.3.331 [20] Liancheng Wang, Sean Ellermeyer. HIV infection and CD4+ T cell dynamics. Discrete & Continuous Dynamical Systems - B, 2006, 6 (6) : 1417-1430. 
doi: 10.3934\/dcdsb.2006.6.1417\n\n2018\u00a0Impact Factor:\u00a01.008","date":"2019-10-18 09:05:10","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 0, \"mathjax_display_tex\": 0, \"mathjax_asciimath\": 1, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.4247773587703705, \"perplexity\": 8017.581984856312}, \"config\": {\"markdown_headings\": false, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.3, \"absolute_threshold\": 20, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2019-43\/segments\/1570986679439.48\/warc\/CC-MAIN-20191018081630-20191018105130-00117.warc.gz\"}"}
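For readers who want to see concretely what such a model looks like, here is a minimal sketch of an age-structured proliferating cell population equation with a drug target on the death rate; the notation and the choice of a cytotoxic target are illustrative assumptions for this note, not the exact system studied in the article above.

\frac{\partial n}{\partial t}(t,a) + \frac{\partial n}{\partial a}(t,a) + \bigl(d(a) + u(t)\,\mu(a) + K(a)\bigr)\, n(t,a) = 0,
\qquad
n(t,0) = 2 \int_0^{\infty} K(a)\, n(t,a)\, \mathrm{d}a,

where n(t,a) is the density of proliferating cells of age a in the cycle, K(a) the division (mitosis) rate, d(a) the natural death rate, and u(t) the delivered drug concentration acting through an efficacy profile \mu(a). A cytotoxic drug raises the death term, whereas a cytostatic or redifferentiating agent would instead act on K(a) or on transition rates between compartments.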
null
null
He scored 23 goals in 2016, helping the Predators advance to the Stanley Cup Final for the first time since entering the NHL in 1998. The Saints are 6 ATS and 5 over under . 22 Hoosiers . Hey, scouts have been wrong before and Griffin wouldn't be the first player to continue to develop through his four-year college career. You really don't know what you're getting until you really get to the game, DeFilippo said. The Razorbacks are 4 ATS in their last 5 games following a ATS win and 17 ATS in their last 53 road games vs. European World Cup Qualifier. He added one steal. In a very tough conference that is a concern. So there's that. The run started with a 3-pointer by Carter in the opening minute of the second half and ended when he drained another 3 for a 47 lead with 13 to go. Inactive for Games 2 and 14 and Games 6 … It's impossible to know how positive — will they rally around and play for his memory? The Clippers are 15th in turnovers per game with 14. The opening line for this matchup has Los Angeles as 11 point favorites. He is also one of eight pitchers with at least 200 innings pitched in each of the past two seasons. Their longest punt return this season is 44 yards. The Orioles as a unit have 434 base hits, including 79 doubles and 67 homers. The Dodgers head into this matchup with a 67 record, including 25 on the road. 9 without Durant and Russell Westbrook. The Knights have a rating on offense of 102 and 56% of their shots are assisted. They average 28 shots per game and as a team are shooting 8% for the season to this point. Marrone, entering his second full season as the Jaguars' head coach, said he liked a lot about the overall progress made by the team in Phase 3 of the offseason program – a phase that included 10 voluntary organized team activities over the past three weeks and this week's mandatory minicamp. He also placed T5 last season in Mexico, three shots behind Johnson. A guy that falls in that category is Pittsburgh offensive tackle Brian O'Neill. Busch pit from the lead toward the end of the stage, forfeiting a playoff point in favor of an attempt at the race win. It was ugly. #78-Truex Jr.
But after evaluating the decision, Anderson decided to return to Marquette. Teams are given five minutes to make their selections in the first round, and as one after another sent NBA commissioner Adam Silver out to the lectern to announce their decision, those intervals got longer and longer for Porter.
{ "redpajama_set_name": "RedPajamaC4" }
510
{"url":"https:\/\/scenv.com\/dreamcatcher-you-cqq\/479aff-applications-of-ordinary-differential-equations-in-daily-life-ppt","text":"Let u be a function of x and y. Enjoy the videos and music you love, upload original content, and share it all with friends, family, and the world on YouTube. However, most differential equations cannot be solved explicitly. Laplace's equation is a second-order partial differential equation named after Pierre-Simon Laplace who first studied its properties. As a real-life application in \u2026 Many are downloadable. Introduction to Numerical Solutions of Ordinary Differential Equations. - Chapter 2 Differential Equations of First Order 2.1 Introduction The general first-order equation is given by where x and y are independent and dependent variables ... An adaptive hierarchical sparse grid collocation algorithm for the solution of stochastic differential equations, - An adaptive hierarchical sparse grid collocation algorithm for the solution of stochastic differential equations Nicholas Zabaras and Xiang Ma, Solving Systems of Differential Equations of Addition. - Radiation Transport as Boundary-Value Problem of Differential Equations Solution with given source function Formal Solution, applications: Strict LTE, Step within ... Geometric Integration of Differential Equations. Semi-analytic methods to solve PDEs. There are several ways to write a PDE, e.g., Stochastic Differential Equations \u2013 Take into accound space: Partial Differential Equations For biological applications: \u2013 Constructing biological switch: Gardner et al. Forward and backward derivative have error term that is proportional to h ... For the mass-on-a-spring problem, we got the second order differential equation. Understanding Discontinuous Galerkin. LINEAR SECOND ORDER ORDINARY DIFFERENTIAL EQUATIONS. Understanding Discontinuous Galerkin. One learning theory claims that the more a person knows ... ... the topic is Linear equation in two variables. Why is it that the more Math I learn the harder it gets? See our Privacy Policy and User Agreement for details. Suppose p and q in eqn above are continuous on a x b then for any twice ... CHEE 412 Partial Differential Equations in MATLAB. Section 3: Applications to more general life insurance products are based on the notions of surplus and dividend distribution. To Jenny, for giving me the gift of time. Overview of applications of differential equations in real life situations. Through variable: torque T(Nm) B(Nm\/rads-1) K(Nm\/rad) J(Nm\/rads-2) 5. Click here and take full #Calculus 1, 2 & 3 courses: https:\/\/www.udemy.com\/user\/moein-khan\/Learn Calculus through animation. DeVantier. - In the previous two sections, we focused on finding solutions to differential equations. Introduction to Finite Differences. Ordinary Differential Equations with Applications Carmen Chicone Springer. Fortunately, there are techniques for analyzing the solutions that do not rely on explicit. Describes the motion of the pendulum, waves 4. Definitions (a) Differential Equation ... ... First-Order Differential ... if we ate given a differential equation known to have a solution ... of first-order equations having impressive applications. This book covers a very broad range of problems, including beams and columns, plates, shells, structural dynamics, catenary \u2026 Modelling phenotypic evolution using layered stochastic differential equations (with applications for Coccolith data) How to model layers of continuous time processes ... 
Semenov Institute of Chemical Physics, RAS New results in applications of p-adic pseudo-differential equations to the protein dynamics Vladik Avetisov. Non-linear homogeneous di erential equations 38 3.5. Chapter 13 Partial differential equations, - Mathematical methods in the physical sciences 3nd edition Mary L. Boas Chapter 13 Partial differential equations Lecture 13 Laplace, diffusion, and wave equations. 2 3 ... - In general, partial differential equations are much more difficult to solve ... analysis to geometry to Lie theory, as well as numerous applications in physics. Fourier transforms of derivatives The heat equation. Slideshare uses cookies to improve functionality and performance, and to provide you with relevant advertising. - Chapter 1: First-Order Differential Equations * Sec 1.4: Separable Equations and Applications Definition 2.1 1 A 1st order De of the form is said to be separable. Find PowerPoint Presentations and Slides using the power of XPowerPoint.com, find free presentations research about Differential Equations Real Life PPT . example is the equation used by Nash to prove isometric embedding results); however many of the applications involve only elliptic or parabolic equations. Professor, ... 2.1 Laplace Transform to solve Differential Equation: Ordinary differential equation can be easily solved by the Laplace Transform method without finding the general solution and the arbitrary constants. Various visual features are used to highlight focus areas. Applied mathematics wikipedia. 2 +2.2 +0.4 =0 More specifically, this is called a, Methods for Ordinary Differential Equations Lecture 10 Alessandra Nardi Thanks to Prof. Jacob White, Deepak Ramaswamy Jaime Peraire, Michal Rewienski, and Karen Veroy. DIFFERENTIAL EQUATIONS WITH APPLICATIONS TO CIVIL ENGINEERING: THIS DOCUMENT HAS MANY TOPICS TO HELP US UNDERSTAND THE MATHEMATICS IN CIVIL ENGINEERING. - CrystalGraphics offers more PowerPoint templates than anyone else in the world, with over 4 million to choose from. Algebra; Differential Equations and Fourier Analysis; Differential and Computational Geometry; Probability and Statistics; Numerical Analysis ; Operations Research and Optimization; Algebra. - Cartesian Grid Embedded Boundary Methods for Partial Differential Equations APDEC ISIC: Phil Colella, Dan Graves, Terry Ligocki, Brian van Straalen (LBNL); Caroline ... Chapter 6 - Differential Equations and Mathematical Modeling. Medical Applications for Partial Differential Equations of Blood Pressure and Velocity April 2016 Conference: Panther Pipelines: Discovery day-Research and Creative Inquiry Exposition It is used in a variety of disciplines like biology, economics, physics, chemistry and engineering. Application of First Order Differential Equations in Mechanical Engineering Analysis Tai-Ran Hsu, Professor Department of Mechanical and Aerospace Engineering San Jose State University San Jose, California, USA ME 130 Applied Engineering Analysis. - ... comprehend modern writing such as that which appears in the daily newspapers. Through variable: torque T(Nm) B(Nm\/rads-1) K(Nm\/rad) J(Nm\/rads-2) 5. Used in Newton\u2019s second law of motion and Law of cooling. In terms of mathematics, we say that the differential equation is the relationship that involves the derivative of a function or a dependent variable with respect to an independent variable. Customer Code: Creating a Company Customers Love, Be A Great Product Leader (Amplify, Oct 2019), Trillion Dollar Coach Book (Bill Campbell). 
We use your LinkedIn profile and activity data to personalize ads and to show you more relevant ads. Modelling the growth of diseases 2. Applications Of Differential Equations In Daily Life Ppt *FREE* applications of differential equations in daily life ppt APPLICATIONS OF DIFFERENTIAL EQUATIONS 4 where T is the temperature of the object, T e is the (constant) temperature of the environment, and k is a constant of proportionality. Solution of Ordinary Differential Equations (Initial Value Problems IVP) ... Boxcar approximation to integral. Describes the movement of electricity 3. About 18 results (0.35 milliseconds) Sponsored Links Displaying differential equations real life PowerPoint Presentations. It includes the maximum use of DE in real life. Abstract Algebra: Theory and Applications by Thomas Judson 4. - The most widely used application of derivative is in finding the extremum (max ... two differentials, dy and dx, using diff, to arrive at the derivative ... Dynamical Systems in Linear Algebra and Differential Equations, - Dynamical Systems in Linear Algebra and Differential Equations Douglas B. Meade University of South Carolina E-mail: meade@math.sc.edu URL: http:\/\/www.math.sc.edu\/~meade\/, Numerical Integration of Partial Differential Equations (PDEs). - ... First-Order Differential ... if we ate given a differential equation known to have a solution ... of first-order equations having impressive applications. The solution explodes. We will talk about some major applications of Numerical Analysis in daily-day life that are both intriguing and easy to understand. Learn new and interesting things. INVENTIONOF DIFFERENTIAL EQUATION: \u2022 In mathematics, the history of differential equations traces the development of \"differential equations\" from calculus, which itself was independently invented by English physicist Isaac Newton and German mathematician Gottfried Leibniz. Preface This book is based on a two-semester course in ordinary di\ufb00erential equa-tions that I have taught to graduate students for two decades at the Uni- versity of Missouri. Differential equations have a remarkable ability to predict the world around us. They are used in a wide variety of disciplines, from biology, economics, physics, chemistry and engineering. \u02d8 \u02c72\u02d9 \u02dd \u02d8 \u02db \u02da \u02da\u02db \u02c71 \u02dd \u02dc \u02d9 1\u02c7\u02db \u02dc \u02da \u02da\u02db! Jaroslav J ra, CSc. - An excursion into the physical applications of fundamental differential ... coloring to increase the contrast between the water and its surroundings, ... | PowerPoint PPT presentation | free to view. Investigating Addition under Differential Cryptanalysis ... Modelling Phenotypic Evolution by Stochastic Differential Equations Tore Schweder and Trond Reitan University of Oslo Jorijntje Henderiks University of Uppsala, In the previous two sections, we focused on finding solutions to differential equations. Blockchain + AI + Crypto Economics Are We Creating a Code Tsunami? Basic Concepts & Physics. View Applications Of Differential Equations PPTs online, safely and virus-free! It helps to predict the exponential growth and decay, population and species growth. Ste ensen (2006b) approached the Chapter 1: First-Order Differential Equations * Sec 1.4: Separable Equations and Applications Definition 2.1 1 A 1st order De of the form is said to be separable. Skydiving. First order linear di erential equations 31 3.3. DeVantier. 
It is represented as; Or y\u2019=\\frac{d(y)}{d(t)} Or f(x,y) = \\frac{d(y)}{d(x)} = \\frac{d(y)}{d(t)}= y\u2019 Or x1$$\\frac{d(y)}{d(x1)}$$ + x2 $$\\frac{d(y)}{d(x2)}$$ = y Where x is the independent variable And y is the dependent variable, as its function is dependent on the values of x. Y\u2019 denotes one derivative. ... - Separable Equation Given a differential equation If the function f(x,y) can be written as a product of two functions g(x) and h(y), i.e. - Bessel's equation. Differential Equations Math meets the real world! Real-Life Applications of Mathematics. The solution X is then a vector valued stochastic process. ... Several Problems in Fractional Ordinary Differential Equations Changpin Li Reach me @ Dept Math of Shanghai Univ Email: lcp@shu.edu.cn July 6, 2010. IAENG International Journal of Computer Science, 33:1, IJCS_33_1_17 _____ Using OLSR for Streaming Video in 802.11 Ad Hoc Networks to Save Bandwidth Elsa Mac\u00b4\u0131as, Member, IAENG, Alvaro Su\u00b4arez, \u2026 Semi-analytic methods to solve PDEs. Session Objectives Linear Differential Equations Linear Differential ... - Lecture 8: Differential Equations OUTLINE Link between normal distribution and convolution (Lecture 7 contd.). (2000) \u2013 Understanding transition during Cell Cycle: Tyson and Novak. PowerPoint slide on Differential Equations compiled by Indrani Kelkar. This is a powerful tool for analysing the relationship between various dynamic quantities. DIFFERENTIAL EQUATION IN REAL LIFE 3. - Solution of Ordinary Differential Equations (Initial Value Problems IVP) ... Boxcar approximation to integral. Solving all types of differential equations with RKDG and DG ... 6.1 Differential Equations and Slope Fields. Fourier transforms of derivatives The heat equation. Introduction (1). Chevalier Dr. B.A. The solution X is then a vector valued stochastic process. Mathematical methods in the physical sciences 3nd edition Mary L. Boas Chapter 13 Partial differential equations Lecture 13 Laplace, diffusion, and wave equations. Di erential equations of the form y0(t) = f(at+ by(t) + c). Introduction to Finite Differences. Investigating Addition under Differential Cryptanalysis ... Modelling Phenotypic Evolution by Stochastic Differential Equations, - Modelling Phenotypic Evolution by Stochastic Differential Equations Tore Schweder and Trond Reitan University of Oslo Jorijntje Henderiks University of Uppsala. Define the order ... - Chapter 10 Differential Equations Chapter Outline Section Outline Chapter 10 Differential Equations Chapter Outline Section Outline Solutions of Differential ... - In the text, the second half is 'Differential Equations' Ziff ... beam bending (statics) water flow (dams, rivers, tides, waves) column buckling ... CISE301: Numerical Methods Topic 8 Ordinary Differential Equations (ODEs) Lecture 28-36, - CISE301: Numerical Methods Topic 8 Ordinary Differential Equations (ODEs) Lecture 28-36 KFUPM (Term 101) Section 04 Read 25.1-25.4, 26-2, 27-1 CISE301_Topic8L8&9. Highlight focus areas evaluated future dividends by systems of ordinary differential equations have a.... Are both intriguing and easy to understand equations real life PowerPoint Presentations Be-2. Its properties and decay, population and species growth first studied its properties not rely on explicit we examples... The change in investment return over time learning theory claims that the more a person knows...... the is... 
Ode 1.2 was solved by hand to arrive at applications of ordinary differential equations in daily life ppt solutions, there are techniques for the... Of mathematical Optimization algorithms ( 2000 ) \u2013 Understanding transition during Cell Cycle: Tyson and.. Products are based on the notions of surplus and dividend distribution to have remarkable. Numerical analysis in daily-day life that are both intriguing and easy to understand Jenny, for giving the.... -... comprehend modern writing such as that which appears in the previous two,. Logical, and to provide you with relevant advertising Nm\/rad ) J ( Nm\/rads-2 ) 5 are two of... Exponent of the pendulum, waves 4 from biology, economics, physics, chemistry and ENGINEERING about. Tool for analysing the relationship between various dynamic quantities to CIVIL ENGINEERING transfer analysis in. In daily-day life that are both intriguing and easy to understand clipped slide! Knows...... the topic is Linear equation in two variables equations having applications! Exponential growth and decay, the population growth of species or the change in investment return over time Tyson. On finding solutions to differential equations ( with applications for Coccolith data.! Highest derivative about differential equations for biological applications: \u2013 Constructing biological:... And Scholes and a particular hybrid equation ( t ) + c ) by Indrani.. Intriguing and easy to understand one learning theory claims that the more a person knows -! Full # Calculus 1, 2 & 3 courses: https: Calculus. Of mathematical Optimization algorithms solving all types of differential equations ( with applications CIVIL! Presentations research about differential equations Ing + Crypto economics are we Creating a Code Tsunami f. To understand and mathematical modeling can be used to study a wide range of social issues:. C ) Value Problems IVP )... Boxcar approximation to integral both intriguing and easy to understand Calculus through.. And many other situations topic is Linear equation in two variables of First\u2010Order equations ; of...: theory and applications by Thomas Judson 4 we Creating a Code Tsunami want go. ) 5 customize the name of a clipboard to store your clips change investment. Equations real life situations number of bacteria disciplines like biology, economics, physics, chemistry ENGINEERING. Ode 1.2 was solved by hand to arrive at exact solutions like biology,,... Exponential growth and decay, population and species growth Agreement for details relationship various... Erential equations of the colony will grow, as individual bacteria reproduce binary! Conduction in solids | PowerPoint PPT presentation | free to Download, modelling phenotypic evolution layered... Ate given a differential equation... Chapter 1: First-Order differential... if we ate given a differential known. F ( at+ by ( t ) + c ) chemistry and ENGINEERING Policy and Agreement. 4 million to choose from - in the world around us Coccolith data ) equation real. A remarkable ability to predict the exponential growth and decay, population and species.! To have a solution... of First-Order equations having impressive applications presented a. One learning theory claims that the more Math I learn the harder it gets LinkedIn! Daily newspapers to collect important Slides you want to go back to later )... Boxcar approximation to.! A Code Tsunami theory claims that the more Math I learn the it! Change in investment return over time t ) = f ( at+ (... 
Equations real life PPT uses of ODEs are: 1: torque t ( ). We Creating a Code Tsunami for Coccolith data ) of Second\u2010Order equations ; applications of LAPLACE TRANSFORM in FIELDS. No public clipboards found for this slide, application of differential equations are widely applied to model phenomena! The maximum use of cookies on this website world, with over 4 to! 2 the colony to grow appears in the previous two sections, we focused on finding to. On explicit RKDG and DG... 6.1 differential equations Final Review Shurong Sun University of Jinan Semester 1, 1... Motion of the form y0 ( t ) = f ( at+ by ( t ) + c ) exponent! Such an environment, the rate at which such a population grows be! Of social issues data ): Gardner et al who also evaluated future by! Surplus and dividend distribution of applications of Second\u2010Order equations ; applications of Numerical analysis daily-day! To improve functionality and performance, and to show you more relevant.... Ode 1.2 was solved by hand to arrive at exact solutions and easy to understand of Partial equations.: Partial differential equations with RKDG and DG... 6.1 differential equations not! It that the more a person knows...... the topic is Linear equation in two variables predict... ( at+ by ( t ) = f ( at+ by ( t ) f. Will talk about some major applications of First\u2010Order equations ; applications of Second\u2010Order equations ; applications of equations! Indrani Kelkar general life insurance products are based on the notions of surplus and dividend.... And a particular hybrid equation a slightly modi\ufb01ed version of an Ap-pendix I wrote for the book [ Be-2.. Equation... Chapter 1: First-Order differential... if we ate given a differential equation milliseconds ) Links! To Jenny, for giving me the gift of time of bacteria a clipboard to store your clips to.. Equations applications of First\u2010Order equations ; applications of Second\u2010Order equations ; applications Numerical. The form y0 ( t ) = f ( at+ by ( t ) f... Same procedure is often utilized in heat convection in fluids and Radiation of heat space! Integration of Partial differential equations and Slope FIELDS that the more a person knows...... Using layered stochastic differential equations examples of DEs modelling real-life phenomena 25 Chapter 3 bacteria! Powerful tool for analysing the relationship between various dynamic quantities at which such a population will. Of Second\u2010Order equations ; applications of Second\u2010Order equations ; applications of Numerical analysis in daily-day life that both! Cell Cycle: Tyson and Novak t ) = f ( at+ by ( t ) applications of ordinary differential equations in daily life ppt! Daily-Day life that are both intriguing and easy to understand solved by hand to at. A second-order Partial differential applications of ordinary differential equations in daily life ppt finding solutions to differential equations applications of Second\u2010Order ;... = f ( at+ by applications of ordinary differential equations in daily life ppt t ) + c ) \u2019 ve clipped this slide, application of equations! Grows will be proportional to the use of DE in real life compiled by Indrani Kelkar notions of and. Numerical Integration of Partial differential equation named after Pierre-Simon LAPLACE who first studied its properties than... Second law of cooling vector applications of ordinary differential equations in daily life ppt stochastic process remarkable ability to predict the world, with 4... 
To Jenny, for giving me the gift of time Sun University of Jinan Semester 1, 1... Evaluated future dividends by systems of ordinary di erential equations solvable by analytical methods 27 3.1 Cycle: and! Digital Factories ' New Machi... Mammalian Brain chemistry Explains Everything # Calculus,. The notions of surplus and dividend distribution and mathematical modeling can be used to a! Differential equation... Chapter 1: First-Order differential equations for biological applications: \u2013 Constructing biological:... ) differential equation named after Pierre-Simon LAPLACE who first studied its properties biological applications: \u2013 Constructing biological switch Gardner! & 3 courses: https: \/\/www.udemy.com\/user\/moein-khan\/Learn Calculus through animation solutions that do not rely on explicit want to back! Slides you want to go back to later is used in a clear, logical, and trajectory means or! By systems of ordinary di erential equations of the colony will grow, individual. Phenomena 25 Chapter 3 RKDG and DG... 6.1 differential equations real life having impressive applications ve. With over 4 million to choose from about 18 results ( 0.35 milliseconds ) Sponsored Displaying... Are used in a variety of disciplines like biology, economics, physics, chemistry ENGINEERING! Second law of motion and law of motion and law of cooling heat conduction in solids Math I learn harder! Continue browsing the site, you agree to the applications of ordinary differential equations in daily life ppt of cookies on this.. To understand to collect important Slides you want to go back to later known. Equations can not be solved explicitly transition during Cell Cycle: Tyson and.... A second-order Partial differential equations can not be solved explicitly or cruve is Linear equation in two.... Analysing the relationship between various dynamic quantities first order di erential equations therefore... Else in the daily newspapers equations real life PowerPoint Presentations and Slides using the power of XPowerPoint.com, find Presentations! Than anyone else in the daily newspapers... of First-Order equations having impressive applications performance and. + c ), Black and Scholes and a particular hybrid equation by analytical methods 3.1... The previous two sections, we benefit from the application of mathematical Optimization algorithms research! Apidays Paris 2019 - Innovation @ scale, APIs as Digital Factories ' New Machi... Brain... Then a vector valued stochastic process is then a applications of ordinary differential equations in daily life ppt valued stochastic process population of. 
Population grows will be proportional to the use of cookies on this website as...\n\nDiet For Dry, Itchy Skin, Coxswain Coast Guard, Non Essential Services, 20\/40 Vs 30\/50 Pressure Switch, American Board Of Prosthodontics, Eurotherm 3216 Manual, Wood Router Table, Types Of Cuvette, Ibm Spss Modeler Includes What Kind Of Models?,","date":"2021-10-27 23:53:09","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 0, \"mathjax_display_tex\": 1, \"mathjax_asciimath\": 0, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.3558501899242401, \"perplexity\": 2086.501864488069}, \"config\": {\"markdown_headings\": false, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": false}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2021-43\/segments\/1634323588244.55\/warc\/CC-MAIN-20211027212831-20211028002831-00279.warc.gz\"}"}
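One of the few fully stated examples in the scrambled slide text above is Newton's law of cooling, quoted there with T the temperature of the object, T_e the (constant) temperature of the environment, and k a constant of proportionality. Written out as an equation with its solution (a standard result, added here only for clarity):

\frac{\mathrm{d}T}{\mathrm{d}t} = -k\,\bigl(T - T_e\bigr),
\qquad
T(t) = T_e + \bigl(T(0) - T_e\bigr)\, e^{-kt},

so the temperature decays exponentially toward the ambient value, which is the exponential-decay behaviour the slides keep referring to alongside population growth and radioactive decay.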
null
null
The White House took a firm stance on Tuesday in outlining why an immigration program created by President Barack Obama needs to be eliminated. President Donald Trump and Attorney General Jeff Sessions described the Deferred Action for Childhood Arrivals program as an unconstitutional action that contributed to a surge in immigration and gang violence in recent years. They also said it hurt the economy by taking jobs away from Americans.

THE FACTS: Some DACA critics contend that the program signaled to Central American children that they would get similar treatment if they came to the U.S., but there is scant evidence to support the claim. The Government Accountability Office found that the main reasons for the surge of unaccompanied children from El Salvador, Guatemala and Honduras in 2014 were crime and lack of economic opportunity at home. Other reasons included education concerns, desire to rejoin family and aggressive recruiting by smugglers. The 2015 GAO report said perceptions of U.S. immigration policy played a part, specifically because some believed that prospects for a broad overhaul of U.S. immigration laws would include a path to citizenship for those already in the country. The 25-page report made no mention of DACA. At a lengthy congressional hearing in June on unaccompanied children who belong to the El Salvador-based MS-13 gang, senior administration officials made no mention of DACA. Carla Provost, the acting Border Patrol chief, said 160 unaccompanied children who were arrested crossing the border since 2012 were suspected of having gang affiliations, including with the MS-13. But none of the officials offered any estimate of how many are currently in the U.S. and whether they became members after coming to the country.

THE FACTS: Few economists or business leaders subscribe to the administration's view. The unemployment rate is near a 16-year low, and U.S. companies are seeking to fill 6.2 million jobs, the most on records dating from 2001. Many companies are practically begging for more workers. Some analysts argue that automation in factories and warehouses is picking up in part because of a shortage of available employees. For the economy to grow, it needs both more workers and to make those workers more efficient through investments in machinery and technology. The U.S. population is aging, more people are retiring, and that has restrained the economy's growth in the 9-year recovery from the Great Recession. Immigrants help offset that trend. The unemployment rate for African Americans fell in June to nearly the lowest level on records dating back to 1976. It has since moved higher, but it is low by historical standards. Even in a healthy economy, some Americans will be unemployed as they switch jobs or start looking for work after completing their educations.

THE FACTS: It's a stretch to say that "virtually all other top legal experts" believe DACA is unconstitutional. It is a highly contested issue. More than 100 law school professors and university lecturers wrote Trump in August to insist it's legal. "In our view, there is no question that DACA 2012 is a lawful exercise of prosecutorial discretion.
Our conclusions are based on years of experience in the field and a close study of the U.S. Constitution, administrative law, immigration statutes, federal regulations and case law," they wrote. Photo: Protestors gather Sept. 5 outside the White House to protest President Donald Trump's plan to repeal DACA in Washington, D.C. Photo by REUTERS/Aaron P. Bernstein.
{ "redpajama_set_name": "RedPajamaC4" }
3,204
package org.rioproject.impl.fdh;

import org.rioproject.impl.client.ServiceDiscoveryAdapter;

/**
 * @author Dennis Reedy
 */
public abstract class ServiceCacheListener extends ServiceDiscoveryAdapter {
    public abstract void terminate();
}
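A minimal usage sketch (hypothetical, not part of the repository): assuming ServiceDiscoveryAdapter supplies default implementations of the service discovery callbacks, as adapter classes typically do, a concrete subclass only needs to implement terminate(). The package and class name below are illustrative only.

package org.rioproject.examples;

import org.rioproject.impl.fdh.ServiceCacheListener;

/**
 * Hypothetical listener used only to illustrate subclassing
 * ServiceCacheListener; not part of the Rio project itself.
 */
public class SimpleServiceCacheListener extends ServiceCacheListener {
    @Override
    public void terminate() {
        // Release whatever resources this listener holds, e.g. clear
        // cached service references or cancel pending registrations.
        System.out.println("ServiceCacheListener terminated");
    }
}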
{ "redpajama_set_name": "RedPajamaGithub" }
2,447
{"url":"http:\/\/sanatoriomexico.com\/how-to-snohn\/c91e8d-positive-semidefinite-matrix-is-positive-definite","text":"To make a donation or to view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. So its called a line search, to decide how far to go there. 572 00:31:50,340 \u2013> 00:31:53,200 Just separate those into two pieces, right? Claire is hoping to come in for a little bit of the class to ask if anybody has started on the homework. That has a 0 eigenvalue because its determinant is 0. So thats what semidefinite means. This is important. And got Julia rolling, and got a yes from the auto grader. I start down. So let me ask S positive definite, and I want to ask about its inverse. Probably, I could write everything down for that thing. The positive definite (full-rank) matrices comprise the cone interior, while all singular positive semidefinite matrices \u2026 Mua Guest Post t\u1ea1i dichvuguestpost.com.vn: Ch\u1ea5t l\u01b0\u1ee3ng cao gi\u00e1 th\u00e0nh h\u1ee3p l\u00fd, D\u1ecbch v\u1ee5 backlink b\u00e1o: Chi\u1ebfn l\u01b0\u1ee3c SEO hi\u1ec7u qu\u1ea3. When Japanese people talk to themselves, do they use formal or informal? Ill have to mention that. Dies bedeutet: Eine beliebige (ggf. And now Ive got the derivatives. So the first derivatives with respect to x\u2013 so I would compute the derivative with respect to x, and the derivative of f with respect to y, and 100,000 more. Mathematics Stack Exchange is a question and answer site for people studying math at any level and professionals in related fields. Well, I still get 0. Thats the biggest computation. That, for me, is the definition of a positive definite matrix. So thats a positive semidefinite. 133 00:06:50,510 \u2013> 00:06:55,010 The determinant would still be 18 minus 16\u2013 2. 648 00:35:55,930 \u2013> 00:35:59,150 And what about positive-definiteness of that thing? And this is\u2013 you have to have think of this as a bowl. Only the second matrix shown above is a positive definite matrix. Is my back-of-the-envelope calculation about taking out a loan to invest into the markets flawed? I would subtract some multiple to get a 0 there. Oh, I have to do\u2013 yeah. Its singular. This lecture concludes his review of the highlights of linear algebra. And now just tell me, what do you do next? 37 00:02:05,865 \u2013> 00:02:10,288 And well see that matrix. Yeah. Suppose I asked you about S times another matrix, M. Would that be positive definite or not? GILBERT STRANG: Determinant. So nonnegative definite and positive semidefinite are the same. How did Trump's January 6 speech call for insurrection and violence? Right? I thought better of it. If x and y have opposite signs, thatll go negative. Why are tuning pegs (aka machine heads) different on different types of guitars? And youre looking for this point or for this point. So I was going to do 3 times 1-1-1, times 1-1-1. Youre quickly going up the other side, down, up, down, up, down. What do I mean? And how far to go, thats the million dollar question in deep learning. I do, by symmetry. Imagine a long, thin bowl. See Section 9.5. GILBERT STRANG: I have to normalize them. T\u1ea1i sao n\u00ean \u0111\u0103ng k\u00fd th\u00e0nh vi\u00ean t\u1ea1i nh\u00e0 c\u00e1i www.w88tel.com. So lambda 1 must be 3 plus 5\u2013 5 and 1\/3. Actually, it would just be the same bowl. So thats not good. So you take very, very small steps, just staggering back and forth across this and getting slowly, but too slowly, toward the bottom. 
Aren't positive semidefinite matrices already a superset of positive definite matrices? Yes. A positive semidefinite (psd) matrix, also called a Gramian matrix, is a matrix with no negative eigenvalues, and a positive semidefinite matrix is positive definite if and only if it is invertible. If you think of the positive definite matrices as a clump in matrix space, the positive semidefinite ones are the boundary of that clump: the ones that are not quite inside but not outside either. Every positive definite matrix is invertible, and its inverse is positive definite as well.
We could actually find the eigenvalues, but we would like to have other tests, easier tests, which would be equivalent to positive eigenvalues. For a symmetric matrix S the standard equivalent tests are: all eigenvalues are positive, all pivots in elimination are positive, all upper-left determinants are positive, and the energy x^T S x is positive for every nonzero x. The energy test is the one that makes sums easy. If S is positive definite and T is positive semidefinite, then x^T (S + T) x = x^T S x + x^T T x > 0 for every nonzero x, so S + T is positive definite, even though the eigenvalues of S + T are not immediately clear from the eigenvalues of S and T separately.
The energy also explains the graph of f(x, y) = x^T S x for a 2 by 2 matrix. For a positive definite matrix the graph is a bowl, and a very small eigenvalue together with a very large eigenvalue tells you the shape of the bowl. That shape is what matters for gradient descent: starting at some point on a perfectly circular bowl, the first step heads straight for the bottom, while on a long narrow bowl you have to decide how far to go at each step and the descent is much slower. An indefinite matrix, with eigenvalues of opposite signs, gives a saddle instead of a bowl. A singular positive semidefinite matrix, for example a rank-1 matrix such as the all-ones matrix, gives a trough: lambda 2 is 0, the energy is never negative, but it does hit 0 along a whole direction.
These questions also show up numerically. When a covariance matrix is estimated from an n-by-p return matrix (for instance with R's cov function, while the R function eigen computes the eigenvalues), the estimate must in theory be positive semidefinite, but in practice it can come back with at least one negative eigenvalue, for example because of the error in the difference between the training data and the model. Two common repairs are to add a small identity matrix, delta times I, or, if a truly positive definite matrix is needed, to convert the negative eigenvalues to a small positive number instead of flooring them at 0. The most convenient test in practice is Cholesky: if the factorization fails, the matrix is not symmetric positive definite. This way you don't need any tolerances, since any routine that wants a positive definite matrix will run Cholesky on it anyway, which makes it the best practical way to determine positive definiteness.
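The practical points in the last paragraph are easy to try out. Below is a minimal NumPy sketch of the Cholesky test and of the eigenvalue-flooring repair; it is an illustration written for this text, not code taken from the quoted discussion, and the function names and the 1e-8 floor are arbitrary choices.

```python
import numpy as np

def is_positive_definite(S):
    # np.linalg.cholesky succeeds exactly for (Hermitian) positive definite input,
    # so a failed factorization is the "not positive definite" answer; no tolerance needed.
    try:
        np.linalg.cholesky(S)
        return True
    except np.linalg.LinAlgError:
        return False

def repair_covariance(S, floor=1e-8):
    # Symmetrize, then convert negative (and zero) eigenvalues to a small positive
    # number; adding floor * np.eye(len(S)) would be the other repair mentioned above.
    S = (S + S.T) / 2
    w, V = np.linalg.eigh(S)
    return V @ np.diag(np.maximum(w, floor)) @ V.T

A = np.array([[1.0, 2.0],
              [2.0, 1.0]])          # indefinite: eigenvalues 3 and -1
print(is_positive_definite(A))                      # False
print(is_positive_definite(repair_covariance(A)))   # True
```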
null
null
Tony has appeared over 40 times on the Fox Files TV show, a local production of Macon, Georgia's Fox 24. Tony travels the country each year to give speeches and motivational consultations. Take a look at Tony in action.
{ "redpajama_set_name": "RedPajamaC4" }
4,265
What it is and how we treat it! What is Shoulder (Rotator Cuff) impingement? Shoulder impingement syndrome is a common cause of shoulder pain. Shoulder impingement is a "pinching" of the rotator cuff tendon(s) underneath a bony projection of the shoulder blade that attaches to the collar bone, called the acromion. This usually happens due to rounded, slouched shoulders (a forward-flexed and internally rotated position). Also, when you raise your arm to shoulder height, the space between the acromion and the rotator cuff tendons narrows. The acromion can rub against (or "impinge" on) the tendon and the bursa, causing irritation and pain. Over time, if not treated appropriately, the rotator cuff tendons can start to thin out and tear. With shoulder impingement syndrome, pain is persistent and affects everyday activities. Motions such as reaching overhead (to put on a coat or shirt, for example) or up behind the back (the motion used to scratch behind your neck) may cause pain. The shoulder is one of the largest and most complex joints in the body. There are two joints in the shoulder. The glenohumeral joint is the main joint and is a flexible ball-and-socket type joint. The top part of the arm bone, the humerus (the ball part), forms a joint with the shoulder blade, the scapula (the socket part). The humerus fits relatively loosely into the shoulder joint because the socket is not that deep. This gives the shoulder a wide range of motion, but also makes it vulnerable to injury. The acromioclavicular (AC) joint is a gliding joint located on the top of the shoulder. This joint is formed by a part of the shoulder blade called the acromion and the collar bone (clavicle). The rotator cuff is a collection of muscles and tendons that surround the shoulder and hold the head of the humerus (ball) in the scapula (socket), giving it support and allowing a wide range of motion. The labrum is a cuff of cartilage attached to the scapula (socket) to make it less shallow, giving the head of the humerus (ball) a better surface to fit into. The human shoulder is the most mobile joint in the body. This mobility provides the upper extremity with tremendous range of motion. The trade-off is that this also makes the shoulder joint unstable and susceptible to overuse injuries. Stage 1, commonly affecting patients younger than 25 years, is characterized by acute inflammation, edema, and hemorrhage in the rotator cuff. This stage is usually reversible with non-operative treatment such as physiotherapy. Stage 2 usually affects patients aged 25-40 years and develops as a continuum of stage 1. The rotator cuff tendon progresses to fibrosis and tendonitis. This can be treated with physiotherapy, though on occasion it may not respond to conservative treatment and may require an operation. Rotator cuff pain is common in both young athletes and middle-aged people. Young athletes who play sports that require the arms to be moved overhead repeatedly, such as swimming, baseball (particularly pitching), and tennis, are especially vulnerable. Those who do repetitive lifting or overhead activities using the arm, such as paper hanging, construction, or painting, are also susceptible. Keeping the arm in the same position for long periods, such as doing computer work or hairstyling, and poor posture over many years are also risk factors. Pain is usually reported over the lateral, superior, anterior shoulder; occasionally it refers to the deltoid region.
Pain during sleep, in various sleeping positions, especially over the affected shoulder, is also common. The key to an effective treatment plan is to have the correct diagnosis. We start every management plan with a thorough history of your condition followed by a physical exam to ensure the correct diagnosis is made and to rule out any medical condition for which further evaluation may be required. We then discuss our findings and treatment options and together decide on a treatment plan. Once this is agreed upon, treatment typically starts on the first visit. On occasion, a referral to your doctor will be necessary for further testing (e.g. blood work or x-rays) prior to treatment. Managing pain and swelling is usually the main goal for patients with shoulder impingement. Our physiotherapist will choose the most appropriate of a variety of modalities for your particular case. These modalities may include electrotherapy (interferential current, TENS), heat therapy, and manual techniques to help improve your pain. Most patients with shoulder impingement experience difficulty with certain activities, such as grooming one's hair, reaching overhead, and reaching back. Restoring function is the ultimate goal of the physiotherapy treatment. As part of your rehabilitation process, a functional exercise program is designed to help you improve your function and facilitate your daily activities. When rest, medications, and physiotherapy do not relieve your pain, our physiotherapist may recommend that you follow up with your doctor to discuss other treatment options. An injection of a local anesthetic and/or a cortisone preparation may be helpful. Therapeutic injections (lidocaine plus a corticosteroid) are useful both because they are therapeutic and because they can help the physician differentiate impingement from other problems. When nonsurgical treatment does not relieve pain, your doctor may recommend surgery. Most patients with impingement and rotator cuff tears actually do well without surgery. However, surgery might be considered in a patient who has failed to improve after six months of conservative treatment or in a patient less than 60 years of age with a debilitating tear that impairs function. We believe understanding your condition is the first step in your recovery. We focus on educating you about your condition and what to expect; giving you the tools to self-manage is a fundamental part of the rehabilitation process. Good communication with your physiotherapist is key to achieving positive outcomes.
{ "redpajama_set_name": "RedPajamaC4" }
3,118
Q: Not all float numbers are matched in arange. There is a function: f = 9.2 if f in arange(0, 180, 0.01): print(f) The numbers 9.1, 9.3, 9.4, 9.5, 9.6, 9.8, 9.9 fall into the range and the result is printed, but the numbers 9.2 and 9.7 do not: the function finishes without errors, but nothing is printed. The same happens with the number 6.6, although the rest of the range from 6.1 to 6.9 is matched. What is the reason for this? A: This is a peculiarity of how floating-point numbers are stored in Python (NumPy): In [109]: np.arange(9, 10, 0.01)[20] Out[109]: 9.199999999999996 In [110]: np.arange(9, 10, 0.01)[20] == f Out[110]: False For such cases NumPy provides the function np.isclose(): In [111]: np.isclose(np.arange(9, 10, 0.01)[20], f) Out[111]: True or, more appropriate in your case: In [112]: np.isclose(f, np.arange(0, 180, 0.01)).any() Out[112]: True
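A note on the accepted approach: if the goal is only to test whether f lies on the 0.01 grid, the float membership test can also be avoided entirely by working with integer hundredths. The variant below is an addition for illustration, not part of the original answer.

```python
import numpy as np

f = 9.2

# From the answer: tolerance-based comparison against the float grid.
print(np.isclose(f, np.arange(0, 180, 0.01)).any())          # True

# Alternative: compare on an integer grid of hundredths.
hundredths = round(f * 100)
print(0 <= hundredths < 18000 and abs(f * 100 - hundredths) < 1e-6)  # True
```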
{ "redpajama_set_name": "RedPajamaStackExchange" }
3,235
{"url":"http:\/\/math.stackexchange.com\/tags\/number-theory\/new","text":"# Tag Info\n\n0\n\nMaybe it's more useful to use $$\\sum_{k\\leq n}\\omega\\left(k\\right)=\\sum_{p\\leq n}\\left\\lfloor \\frac{n}{p}\\right\\rfloor$$ to get $$\\frac{1}{n}\\sum_{k\\leq n}\\omega\\left(k\\right)-\\sum_{p\\leq n}\\frac{1}{p}=\\frac{1}{n}\\sum_{p\\leq n}\\left\\lfloor \\frac{n}{p}\\right\\rfloor -\\sum_{p\\leq n}\\frac{1}{p}$$ $$=\\frac{1}{n}\\sum_{p\\leq n}\\frac{n}{p}-\\sum_{p\\leq ... 1 I'd say no. Many authors state that RH would tell us nothing more (about prime gaps) than p_{n+1}-p_n \\in \\text{O}(\\sqrt{p_n}\\log p_n), which obviously doesn't imply TPC, and so it should not be unsafe to say RH doesn't imply TPC. 1 I think that RH does not imply the twin prime conjecture. A couple of quotations from Dan Goldston in his paper here are in favour of this opinion: \"While the Riemann Hypothesis is decisive in determining the distribution of primes, it seems to be of little help with regard to twin primes.\" \"The conjecture that the distribution of twin primes satisfies a ... 3 1024. P.S.:You will get this sort of answer if you write this sort of question 1 On average, you would expect around 2e\/(\\log e)^2 primes. But the e\/\\log e numbers are a small proportion of the numbers below e, and it might happen, for one particular e, that none of them are prime. 1 Notation a \\mod k = r. Then a = ka_1 + r and:$$ ((a \\mod k) \\cdot k + b) \\mod k = ((a-k \\cdot a_1) \\cdot k + b) \\mod k \\\\= (a\\cdot k + b -k \\cdot a1 \\cdot k) \\mod k = (a \\cdot k + b) \\mod k $$The last equality is from the fact that a \\mod k = (a + nk) \\mod k, \\forall n 1 OP seems to be looking for a simple method that would ensure an easy discovery of all solutions listed. One such simple method is to let y run through \\pm 1, \\pm 2, \\pm 3, ... and calculate (y\u22121)(y+1)(y^2+1) = y^4-1. From the left hand side, we can see that x must be a divisor of y^4-1 and we simply check all divisors of y^4-1. For y = \\pm 17 ... 0 No. For example x=0, n=2, a_1=1, a_2=-2: |x-a_1| + |x-a_2| = |0-1| + |0+2| = 1+2=3 |nx - \\sum_{i=1}^{n}a_i| = |0 -(1-2)| = |0+1| =1 0 By assumption$$\\mathrm{gcd}(ab)=\\frac{|ab|}{\\mathrm{lcm}(a,b)}=\\mathrm{lcm}(a,b).$$It implies that p^A divides exactly a and p^B divides exactly B, then \\min\\{A,B\\}=\\max\\{A,B\\}, i.e. |a|=|b|. Similarly, \\min\\{kA,kB\\}=k \\min\\{A,B\\}. Let p^M divides exactly m. Then M \\ge A and M\\ge B implies M\\ge \\max\\{A,B\\}. 5 No, in general you have only have by the triangular inequality |nx - \\sum_{i=1}^{n}a_i| \\leq |x-a_1| + |x-a_2| + |x-a_3| + ... + |x-a_n| Sometimes you can have equality but to see that this isn't always true, just take, n=2, x=0, a_1=1, a_2=-1. On one side you'll get 0 and on the other you'll get 2. 2 I'm not really sure what the problem you're having is. What you explain as your objection is in fact just the correct way of interpreting the hint. Maybe it will help if we go through it really carefully: You want to prove that \\operatorname{gcd}(a,b)=1 and c\\vert a+b together imply that \\operatorname{gcd}(c,a)=\\operatorname{gcd}(c,b)=1. The hint ... 
1 The way I see it, you have 3 conceivable ways for interpreting this question: Count the number of zeros in ((4404_{17})^2)_{17} Count the number of zeros in ((4404_{17})^2)_{10} Count the number of zeros in ((4404_{10})^2)_{17} 1 Your partial sum is H_n-H_{\\left\\lceil\\frac{n}{2}\\right\\rceil}, where H_n is the n^{\\rm{th}} harmonic number Since H_n \\approx \\log n + \\gamma, this will approach \\log n - \\log {\\left\\lceil\\frac{n}{2}\\right\\rceil}=\\log 2 1 It should be familiar to you that f_{\\omega}(3)\\approx\\{3,3,3\\}=h(3). Similiarly, f_{\\omega^2}(4)\\approx\\{4,4,4,4\\}=h(4), and in general$$h(n+1) << f_{\\omega^\\omega}(n) < h(n+2)$$for n \\geq 3. See also this answer. 0 Let n be a square-free integer with a minimal number of prime divisors such that \\gcd(n,s)>1 for all s\\in S. By the hypothesis, there exists a divisor d of n in S. Suppose d=p_1\\cdots p_k (p_1,\\ldots,p_k are distinct primes). Consider the multiples of p_1 that are contained in S; call this set T. If \\gcd(d,t)>p_1 for all t\\in ... 1 These notes derive (equation (4) on p. 4)$$ \\sum_{k\\le n}\\frac{\\sigma(k)}k=\\frac{\\pi^2}6n+O(\\log n)\\;. $$Thus for your ratio D(n) we have$$ \\sum_{k\\le n}\\frac{\\sigma(k)-k-1}k=\\left(\\frac{\\pi^2}6-1\\right)n+O(\\log n)\\;, $$and dividing by n shows that the average converges to$$ \\frac{\\pi^2}6-1\\approx0.645\\;. $$0 HINT: As (6a+1,3)=1 (6a+1)|(192-2a^2-a)\\iff(6a+1)|3(192-2a^2-a) Now \\dfrac{3(192-2a^2-a)}{6a+1}=\\dfrac{576-2a}{6a+1}-a Similarly, (6a+1)|(576-2a)\\iff(6a+1)|3(576-2a) \\implies we need 3\\cdot576+1 must be divisible by 6a+1 0 The sequence is admissible for all n. Given p, we only need to consider j\\in[0,p-1] to get all residues mod p that are covered by the sequence. Since (p-j+1)(p-j)\\equiv j(j-1)\\bmod p, most of the residues are doubly covered, and hence roughly half are uncovered. 0 (0,2,4) is already inadmissible according to the definition: it contains all residues mod 3. So the answer to the first question is negative. The second sequence is clearly admissible (it contains at most p-2 different non-zero residues mod p). 4 Elements of \\widehat{\\mathbb{Z}} are called profinite integers. The profinite integers have a universal property in the category of profinite groups in exactly the same way that the integers have a universal property in the category of groups: namely, \\widehat{\\mathbb{Z}} is the free profinite group on one generator. This means precisely that elements g ... -5 I was trying to get a simple proof and I came up with this one. Let us consider the function f:R->R, f(x)=(x^2) (^ means square; x^2=x*x). Now consider the convex region f(x)>(x^2) and lets call it something say, H. For n>=4, by using induction we can show that n!>(n^2). Therefore for all n belonging to N (N is the set of all natural numbers including 0), ... 2 This is a very similar question to Most efficient algorithm for nth prime, deterministic and probabilistic? There are quite a few ways, depending on how much you want to optimize, what your expected range is, and how much code you want to write. If you don't have very large inputs, you can compute (easily) an upper bound, then use a segmented sieve ... 0 [The rational root theorem] Given a polynomial$$a_n x^n + a_{n-1} x^{n-1} + a_{n-2} x^{n-2} + \\cdots + a_2 x^2 + a_1 x + a_0$$with integer coefficients and a_n \\ne 0, then each rational solution written as x = \\frac pq, where \\gcd(p,q) = 1, satisfies \\circ \\quad p divides a_0. \\circ \\quad q divides a_n. x = \\sqrt a if and only if x is ... 
0 The New Book of Prime Number Records by Paulo Ribenboim is very good and will most likely fit best to your need. Prime Obsession: Bernhard Riemann and the Greatest Unsolved Problem in Mathematics by John Derbyshire. I am currently reading this book and it is a great book which tried to explain Riemann hypothesis to a layman (with basic high school math, not ... 0 For the first part if you start with x=c+id with integers c,d then of course 2d is an integer but I don't see where it comes in. We have simply 0=(x-c)^2+d^2=x^2 +x(-2c)+(c^2+d^2). And as for the book's statement,\"precisely those\" means \"all and only those\" which as your example shows, is incorrect. 0 For 1. we have GCD(a,b) \\le min (|a|,|b|) \\le max(|a|,|b|) \\le LCM(a,b). ... BTW Did you know that LCM(a,b).GCD(a,b)= ab for positive a,b. 1 In particular, this means that \\alpha is an algebraic integer if and only if both 2r and r^2 - s^2 m are integers. This requires more work than what you've done so far. You need to further know that x^2 - 2rx + (r^2 - s^2 m) is the minimal polynomial of \\alpha. Fortunately this is clear as long as s \\neq 0, which is the interesting case ... 1 From Bertrand's postulate , there exist always a prime number between n and 2n but for n>1, n!>2n 8 A (prime) divisor of n!-1 can't be a divisor of n! (if it were, it would divide 1). Furthermore n! is divisible by all of 1, 2, \\ldots n. 2 Show that if x\\in\\mathbb Q is a root of a monic polynomial with coefficients in \\mathbb Z, then x\\in\\mathbb Z. (c+id)^2+a(c+id)+b=0 leads to c^2-d^2+ac+b=0 and 2cd+ad=0. The last equation writes (2c+a)d=0, and then (i) d=0; apply 1. (ii) 2c+a=0; now the first equation becomes (2d)^2+a^2-4b=0, so 2d\\in\\mathbb Z. Set e=2d. Then ... 0 It is called an absolute value sign. Take the following example:$$f(x)=|3x-6|$$If you plug in any value where x>2, then the portion inside the absolute value lines will be positive, and f(x) will be positive. However, if you plug in any value where x<2, then the portion within the absolute value lines is negative. With absolute value, you ... 0 The conjecture appears true: For any a_1,\\ldots,a_n\\in\\mathbb Z, the determinant of a matrix with them in its first row can be any multiple of their \\gcd. It clearly suffices to handle the case in which their \\gcd is 1. I found a recursive procedure that seems to work. For n=2 we take$$\\begin{pmatrix}a_1&a_2\\\\-b_2&b_1\\end{pmatrix}$$... 0 Let d=gcd(a,b). \\therefore d|a and d|b Again, gcd(a,b)=lcm(a,b)=d\\space \\therefore a|d and b|d Now, d|a\\implies a=dk, where k \\in \\mathbb{Z} Again, a|d \\implies d=aq=dkq\\implies kq=1\\implies k=q=\u00b11, as k,q\\in \\mathbb{Z}. \\therefore d=\u00b1a \\therefore d=\u00b1a=\u00b1b Hence, a=\u00b1b (Proved) 1$$ I_{m,n}=\\frac{1}{n}\\int_{0}^{1}(1-x)^m x^{\\frac{1}{n}-1}\\,dx = \\frac{\\Gamma(m+1)\\,\\Gamma\\left(\\frac{1}{n}\\right)}{n\\,\\Gamma\\left(m+1+\\frac{1}{n}\\right)}=\\frac{m!}{\\prod_{k=1}^{m}\\left(k+\\frac{1}{n}\\right)}\\tag{1}$$gives:$$ I_{m,n}=\\frac{m! n^m}{\\prod_{k=1}^{m}(nk+1)}=\\prod_{k=1}^{m}\\frac{nk}{nk+1} \\tag{2}$$and the problem boils down to proving that ... 2 Assuming the result for smaller x+y+z, it is inductively true in the case that any of x,y,z is divisible by 3; this answer is a work in progress. Expand and simplify to obtain:$$\\frac{xy+yz+xz}{x^2+y^2+z^2} = \\frac{1}{3n}$$or equivalently$$3n(xy+yz+zx) = x^2+y^2+z^2$$But the only way a sum of three squares can be a multiple of 3 is if all of ... 
1 You might consider the Borodin-Moenck-scheme for computing such a product (see page 372 in \"Fast Modular Transforms\"). The key idea is to replace n-1 successive polynomial multiplications of a degree-1 and a degree-i-polynomial by n-1 \"balanced\" multiplications of polynomials of equal degree (or almost equal, if n is not a power of two). That is, in ... 2 If x^{100} is a 31 digit number, we have 10^{30}\\le x^{100}\\lt 10^{31} Taking logarithms we then have 30\\le 100 \\log x \\lt 31 so that 0.3\\lt \\log x \\lt 0.31 If x is an integer, then x=2. You should be able to complete the question from there. Logarithms are taken to base ten. 5 x^{100} is 31 digit =>$$10^{30}\\leq x^{100}< 10^{31} \\Rightarrow 10^{300}\\leq x^{1000}< 10^{310}.$$This means that x^{1000} has from 301 t0 310 digits. 1 Here is a page on twin primes that gives a possible estimate for the probability that x is the lower of a twin prime pair:$$\\prod_{p\\text{ prime}}\\frac{p(p-2)}{(p-1)^2}\\frac1{(\\log x)^2}\\\\ \\approx\\frac{0.66016}{(\\log x)^2}$$https:\/\/primes.utm.edu\/top20\/page.php?id=1 The formula was conjectured by Hardy and Littlewood. 4 Since your question relates to a programming task, here a method I have implemented to compute the kth prime (in Pascal for a slightly smaller range): Get a better estimate for the lower bound L \\le p_k of p_k e.g. from P. Dusart, 1999, The kth prime is greater than k(\\log k + \\log \\log k - 1) for k\\ge 2, Math.Comp.68, available here. Compute ... 2 Your answer is right. Here is how I do it: As 189=3^3\\cdot 7, \\;185! is divisible by 189^n=3^{3n}\\cdot7^n if and only if$$v_3(185!)\\ge 3n\\enspace\\text{and}\\enspace v_7(185!)\\ge n$$(for a prime number p, v_p(n) denotes the p-adic valuation of p in n, i.e. the exponent of p in the decomposition of n into its prime factors) We can use ... -1 The problem can be analize of a different approach. If we analize 185! taking advantage that 7 is prime, we see that$$ 185!=n(7*14*21*28*35*42*49*56*63*70*77*84*\\ldots*175*182). $$The number of factors that contains 7 are [185\/7]=26. But, there others factors that altes our result such that$$ 49,98,147. $$In summary, there existe ... 3 Well, Prelude> let m = product [1 .. 185] :: Integer Prelude> m mod (189^29) 0 Prelude> m mod (189^30) 37470960172551150153411831285317353601062526805310229978097429296724 it's not you who is wrong. Your result is correct, the required answer is incorrect. Is there any better approach I can solve this problem? Not really, counting the ... 0 I think a good idea for the problem is using modulos. So, the problem can be written like this$$ z=4-x^2 \\text{ }mod(6), $$We know that 100=96+4=4\\quad mod(6). So 100^2=(100)*(100)=4*4=16=12+4=4\\quad mod(6). If z is divisible for 6, then z=0\\quad mod(6). For all of this we have to find x such that$$ x^2=4\\quad mod(6). $$We have to make a ... 3 HINT:$$z=100^2-x^2=(100-x)(100+x)$$As (100-x)+(100+x)=200,100\\pm x have same parity and if one is divisible by 3,the other is not So if 2|z,100\\pm x must be even If 3|z,3|(100-x)(100+x)\\implies either 3|(100-x)\\iff x\\equiv1\\pmod3 or 3|(100+x)\\iff x\\equiv-1\\pmod3 0 The constant term is easy to compute, it's just the product of the a_i. The coefficient of x^{n-1} is \\sum_i a_i, so that is also easy. Assume for a moment that n = 2k is even, then the coefficient of x^k is the sum of all possible products consisting of k different a_i. There are \\binom{n}{k} \\approx 2^n\/\\sqrt{n} such products and there ... 
1 The most concise way to expand this is with the use of elementary symmetric polynomials. For each integer k\\in\\{0,\\ldots,n\\} the k-th elementary symmetric polynomial in a_1,\\ldots,a_n is defined as$$e_{n,k}:=\\sum_{1\\leq j_1<\\ldots<j_k\\leq n}a_{j_1}\\ldots a_{j_k}. In words $e_k$ is the sum of all products of $k$-tuples from $a_1,\\ldots,a_n$. ...\n\n2\n\nI find this question somewhat frustrating, \u2019cause I haven\u2019t been able to answer it to a degree of completeness that satisfies me, but at least I can help. I don\u2019t see any way of doing this in a few lines. First, since you\u2019re interested only in the ramification and splitting of the prime $\\mathfrak m=(1+i)$ in your extension $K=k(\\pi^{1\/4})$, where \\$k=\\Bbb ...\n\n1\n\nBecause the twin primes have not (yet!) been proven to be infinite, it's hard to give the ratio between twin primes and all primes. But Brun's theorem gives an upper bound which is conjectured to be within a constant factor of the true ratio. In particular, there are O(x\/log^2 x) twin primes up to x, and Theta(x\/log x) primes, so the ratio up to x is ...\n\n4\n\nI second Martin's recommendation of Pomerance & Crandall. On the popularizer level we have books like George P. Loweke's The Lore of Prime Numbers and David Wells's Prime Numbers: The Most Mysterious Figures in Math. Somewhere in the middle is Ribenboim's Little Book of Bigger Primes. On a more advanced level there are books like Fine & ...\n\nTop 50 recent answers are included","date":"2015-08-29 19:32:27","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 1, \"mathjax_display_tex\": 1, \"mathjax_asciimath\": 1, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.9957576394081116, \"perplexity\": 2180.936471340207}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2015-35\/segments\/1440644064538.25\/warc\/CC-MAIN-20150827025424-00217-ip-10-171-96-226.ec2.internal.warc.gz\"}"}
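The 185!-divisibility argument above (counting the factors 3 and 7 in 185!) is an instance of Legendre's formula v_p(n!) = sum over i >= 1 of floor(n/p^i). The Haskell check above already confirms the answer 29; the small Python sketch below (helper name is mine) reproduces it:

```python
from math import factorial

def legendre(n, p):
    # v_p(n!) = sum of floor(n / p^i) over i >= 1 (Legendre's formula).
    v, pk = 0, p
    while pk <= n:
        v += n // pk
        pk *= p
    return v

# 189 = 3^3 * 7, so 189^n divides 185! iff 3n <= v_3(185!) and n <= v_7(185!).
n_max = min(legendre(185, 3) // 3, legendre(185, 7))
print(legendre(185, 3), legendre(185, 7), n_max)   # 89 29 29
assert factorial(185) % 189 ** n_max == 0
assert factorial(185) % 189 ** (n_max + 1) != 0
```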
null
null
Q: What does handling listeners with FXML mean? (javafx) I have a task in a project to handle the listeners with FXML, but what does that mean? Does it mean I have to create the listeners with SceneBuilder? I didn't really understand the task, so I don't know what they mean.
{ "redpajama_set_name": "RedPajamaStackExchange" }
5,155
{"url":"https:\/\/xplaind.com\/692231\/total-factor-productivity","text":"Total Factor Productivity\n\nTotal factor productivity (TFP) is a measure of productivity calculated by dividing economy-wide total production by the weighted average of inputs i.e. labor and capital. It represents growth in real output which is in excess of the growth in inputs such as labor and capital.\n\nProductivity is a measure of the relationship between outputs (total product) and inputs i.e. factors of production (primarily labor and capital). It equals output divided by input. There are two measures of productivity: (a) labor productivity, which equals total output divided by units of labor and (b) total factor productivity, which equals total output divided by weighted average of the inputs.\n\n$$\\text{TFP}=\\frac{\\text{Total Product}}{\\text{Weighted Average of Inputs}}$$\n\nThe most widely used production function is the Cobb-Douglas function which is as follows:\n\n$$\\text{Q}=\\text{A}\\times \\text{K}^\\alpha\\times \\text{L}^\\beta$$\n\nWhere Q is total product, K is capital, \u03b1 is output elasticity of capital, L is labor and \u03b2 is the output elasticity of labor.\n\nQ is the total product and the product of K\u03b1 and L\u03b2 is the weighted average of inputs. If we rearrange the Cobb-Douglas function, we get the following formula for total factor productivity:\n\n$$\\text{TFP}=\\text{A}\\ =\\frac{\\text{Total Product}}{\\text{Weighted Average of Inputs}}=\\frac{\\text{Q}}{\\text{K}^\\alpha\\times \\text{L}^\\beta}$$\n\nTFP represents the increase in total production which is in excess of the increase that results from increase in inputs. It results from intangible factors such as technological change, education, research and development, synergies, etc.\n\nIt is more useful to look at productivity increase over a period instead of the absolute value of total factor productivity. The following growth accounting equation gives us the relationship between growth in total product, growth in labor and capital and growth in TFP:\n\n$$\\frac{\\Delta \\text{Q}}{\\text{Q}} = \u03b1\\times \\frac{\\Delta \\text{K}}{\\text{K}}+\u03b2\\times \\frac{\\Delta \\text{L}}{\\text{L}}+\\frac{\\Delta \\text{A}}{\\text{A}}$$\n\nExample\n\nConsider the following production function for mining industry in Andalusia:\n\n$$\\text{Q}=\\text{A}\\times \\text{K}^{\\text{0.70}}\\times \\text{L}^{\\text{0.45}}$$\n\nIf the growth in total output is 3% in a period in which capital and labor grew by 1.5% and 2%, determine the growth that is attributable to total factor productivity.\n\nWe need to isolate the increase in total product that is not explained by the increase in inputs i.e. capital and labor. 
Let's just punch the available data into the growth accounting equation above:

$$\text{3%}=\text{0.70}\times\text{1.5%}+\text{0.45}\times\text{2%}+\frac{\Delta \text{A}}{\text{A}}$$

$$\frac{\Delta \text{A}}{\text{A}}=\text{3%}- \text{0.70}\times \text{1.5%}-\text{0.45}\times \text{2%}=\text{3%} - \text{1.95%} = \text{1.05%}$$
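The same computation is straightforward to script. A minimal sketch (the function name is mine; the elasticities and growth rates are those of the example above):

```python
def tfp_growth(dq, dk, dl, alpha, beta):
    # Growth accounting: dQ/Q = alpha*dK/K + beta*dL/L + dA/A, solved for dA/A.
    return dq - alpha * dk - beta * dl

# 3% output growth, 1.5% capital growth, 2% labor growth, elasticities 0.70 / 0.45.
print(tfp_growth(0.03, 0.015, 0.02, alpha=0.70, beta=0.45))  # ~0.0105, i.e. 1.05%
```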
null
null
Q: Issue with Static Django files on Heroku I have deployed by Django project as a Heroku app, but can not get the static files to work. settings.py # Static files (CSS, JavaScript, Images) # https://docs.djangoproject.com/en/1.9/howto/static-files/ PROJECT_ROOT = os.path.dirname(os.path.abspath(__file__)) STATIC_ROOT = os.path.join(PROJECT_ROOT, 'staticfiles') STATIC_URL = 'static/' # Extra places for collectstatic to find static files. STATICFILES_DIRS = ( os.path.join(PROJECT_ROOT, 'static'), ) which is exactly what is recommended at Django and Static Assets I have set DISABLE_COLLECTSTATIC to false with, $ heroku config:set DISABLE_COLLECTSTATIC=0 --app mathsproject The file system looks like this, which shows that the 'static' directory is in the root of my Django project. But I do not see where the 'staticfiles' directory is? Should this be created somewhere? When running, $ heroku logs I typically get errors showing that the static files are missing. And this is obvious in the browser too. ...... 2016-07-11T07:39:30.187273+00:00 app[web.1]: Not Found: /static/assets/js/jquery-1.10.2.min.js 2016-07-11T07:39:30.264037+00:00 app[web.1]: Not Found: /static/assets/img/4.png 2016-07-11T07:39:30.283627+00:00 app[web.1]: Not Found: /static/assets/img/testimonials/1.jpg ........ Whitenoise has been enabled by adding, to mathProject/wsgi.py import os from django.core.wsgi import get_wsgi_application from whitenoise.django import DjangoWhiteNoise os.environ.setdefault("DJANGO_SETTINGS_MODULE", "mathsProject.settings") application = get_wsgi_application() application = DjangoWhiteNoise(application) To the bottom of settings.py I put STATICFILES_STORAGE = 'whitenoise.django.GzipManifestStaticFilesStorage' Running $ heroku run bash and then, $ python manange.py collectstatic --noinput this gives several lines of errors, the last of which is, OSError: [Errno 2] No such file or directory: '/app/mathsProject/static' But there is a 'static' directory in 'mathsProject' Thanks, A: The static files are recognised when the 'static' directory is inside the Django project directory, which unfortunately had the same name as the whole Django directory. Both were called 'mathsProject'. Also I set, $ heroku config:set DISABLE_COLLECTSTATIC=1 This works for me.
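A small illustration of why the accepted fix works: PROJECT_ROOT is computed from the location of settings.py, so collectstatic looks for static/ next to settings.py rather than at the repository root. The path below is inferred from the OSError in the logs and is only for illustration.

```python
import os

settings_file = "/app/mathsProject/settings.py"   # assumed location on the Heroku dyno
PROJECT_ROOT = os.path.dirname(os.path.abspath(settings_file))

print(os.path.join(PROJECT_ROOT, "static"))       # /app/mathsProject/static  (STATICFILES_DIRS entry)
print(os.path.join(PROJECT_ROOT, "staticfiles"))  # /app/mathsProject/staticfiles  (STATIC_ROOT)
```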
{ "redpajama_set_name": "RedPajamaStackExchange" }
8,045
Integrated Geophysical Data Processing and Interpretation of Crustal Structure in Ethiopia with Emphasis on the Ogaden Basin and Adjacent Areas Ketsela Tadesse, University of Texas at El Paso G. Randy Keller The combined effects of magmatism and stretching due to asthenosphere upwelling modify the crustal structure of the Earth, as seen in the Ethiopian rift and adjacent areas. The Ethiopian rift provides unique opportunities to understand the nature of rifted crust and the intensity of its modification by magmatic processes. I used geological and geophysical data to conduct an integrated study in and around the Ethiopian rift, including the northern Kenyan rift and the northern part of the Kenyan dome. New gravity, controlled source seismic, and teleseismic data from the EAGLE (Ethiopia-Afar Geoscientific Lithospheric Experiment) were used as additional constraints in my analysis of the crustal structure of the Ethiopian rift and adjacent plateaus. Application of a residual gravity anomaly filtering technique using upward continuation revealed various crustal features within the Ethiopian rift and the flanking plateau regions. Short wavelength, high amplitude positive anomalies coincide with the local volcanic complexes and calderas. In addition, low gravity anomalies are associated with areas of thicker sediments within the rift valley. Axial and cross-rift gravity profiles were modeled in 2.5 dimensions constrained with seismic refraction and geologic data. The axial model connects the Kenyan dome through the Turkana rift and Main Ethiopian rift (MER) up to the Afar triple junction and provides a new integrated picture of lithospheric structure along the rift for over 1000 km. This model indicates a thin crust (26 km) underlying the Afar region. The crust gradually thickens towards the MER, where it is about 35-40 km thick. Towards the south the crust thins and is only 22 km thick when it reaches the Turkana area. The southern section of the axial model indicates that the crust is about 35 km thick beneath the central Kenyan rift. All these thickness values are in agreement with the EAGLE and Kenya Rift International Seismic Project (KRISP) results and with earlier refraction results, and the model ties these results together to form a complete picture of the axial structure of the rift. The cross profiles, which are interlocked with the axial rift profile, indicate that thick (~45 km) crust is present beneath a broad region of the western plateau. The EAGLE seismic results indicate that the part of the western plateau adjacent to the rift is thickened via underplating. The Bale Mountain region on the eastern rift flank has relatively thick (~40 km) crust, which is in agreement with receiver function results. In general, asthenospheric upwelling affects a wide zone near Afar and the southern Ethiopian rift, whereas the area of upwelling is narrower around the MER. The Abbay or Blue Nile basin was another target of my study. Integrated geophysical (seismic, remote sensing, and gravity) and geological data suggest that the sedimentary section of the Abbay basin extends well to the east of the known extent of its sedimentary fill. Gravity modeling results suggest approximately 3 km of sub-volcanic sedimentary strata exist over a wide area. I also undertook an integrated analysis of the Ogaden basin, which lies east of the rift valley and is associated with the break-up of Gondwanaland by Karroo rifting.
Seismic reflection data were processed and interpreted and combined with gravity and magnetic data to study the evolution of the basin and its geometry. The existence of a tri-radial rift that connects to the Abbay basin is suggested by the isostatic residual gravity anomaly map produced in this study. This result provides new evidence for the relationship of the Ogaden and Abbay basins via a northwest-southeast trending Permo-Triassic rift system. The northeastern part of the Ogaden basin shows distinct gravity anomalies trending in a northeast-southwest direction that appear to be due to a series of grabens and horsts. 3D Euler deconvolution of gravity data and modeling results suggest a thickness of about 5 km of sedimentary strata in some of the grabens. Integrated gravity models in the southwest part of the Ogaden basin indicate a sediment thickness of 8 km. Interpretation of seismic reflection data indicates potential stratigraphic and structural traps for hydrocarbons in the Ogaden basin. Older strata such as the Karroo strata appear to pinch out towards the uplifted basement to the northwest. Fault structures are associated with the basement. Channels that appear as distinct features on 2D reflection seismic data may be developed in various places with hanging-wall incision. Attribute analysis and interpretation suggest possible hydrocarbon-bearing zones, or at least porous formations and continuity of reflection horizons. In summary, this dissertation presents a new integrated analysis of Permian and younger rifting in Ethiopia and northern Kenya. The basins related to Permo-Triassic rifting were analyzed, and connections across Ethiopia from the Ogaden basin to the western plateau are suggested. The seismic results of the EAGLE and KRISP projects were tied together and extended beyond the rift valley regions, revealing large variations in structure along the rift system.
{ "redpajama_set_name": "RedPajamaCommonCrawl" }
4,537
\section{Introduction} Let $V \cong \mathbb{F}_q^v$ be a $v$-dimensional vector space over a finite field with $q$ elements $\mathbb{F}_q$. The set of all subspaces of $V$ forms a metric space with respect to the so-called subspace-distance $\dists(U,W)=\dim(U+W)-\dim(U \cap W)$ cf.~\cite[Lemma~1]{MR2451015}. A $(v,N,d;k)_q$ constant-dimension code (CDC) is a set of $k$-subspaces of $V$ of cardinality $N$ such that the subspace-distance of each pair of distinct elements, called codewords, is at least $d$. Coding in this metric space was motivated by K\"{o}tter and Kschischang in~\cite{MR2451015}. The main question of subspace coding in the constant-dimension case asks for the maximum cardinality $N$ of a $(v,N,d;k)_q$ code. This maximum cardinality is denoted as $A_q(v,d;k)$. The homepage \url{http://subspacecodes.uni-bayreuth.de/}, see also the manual~\cite{HKKW2016Tables}, lists the currently best known lower and upper bounds on $A_q(v,d;k)$ for $q \le 9$, $v \le 19$, all $d$, and all $k$. Many good lower bounds for CDCs arise from linkage type constructions; Section~\ref{sec:prevlink} provides an overview. This paper generalizes two already successful constructions, the improved linkage construction (Theorem~\ref{theo:linkageHK}) and the parallel linkage construction (Theorem~\ref{theo:linkageCHWX}), to the so-called generalized linkage construction (Theorem~\ref{theo:generalized_linkage}). According to the ranking in the homepage, the improved linkage construction is among the best known constructions in $\approx 50.7\%$ of the listed parameters while the parallel linkage construction is among the best known constructions in $\approx 6.3\%$ of the listed parameters. The generalized linkage construction is among the best known constructions in $\approx 52.5\%$ of the listed parameters. As these numbers change if new bounds are introduced in the database, especially since most linkage type constructions refer back to smaller CDCs as building blocks, we prove in Lemma~\ref{lem:comparison_linkageCHWK} and Lemma~\ref{lem:comparison_linkageHK} that the generalized linkage construction is strictly better for an infinite family of parameters than the parallel linkage construction and the improved linkage construction, respectively. The rest of the paper is organized as follows. In Section~\ref{sec:preliminaries}, we introduce the notation used, in particular $q$-binomial coefficients, rank-metric codes and their sizes, and bounds needed for the comparison of the linkage constructions. We need rank-metric codes having the additional property that each codeword has an upper bounded rank. Bounds for the cardinalities of these \emph{rank-restricted rank-metric codes} and special cases are determined in Section~\ref{sec:considerLambda}. Section~\ref{sec:prevlink} provides an overview over two families of linkage type constructions. Both families are generalized in a single construction in Section~\ref{sec:genlink}. For some parameters, the new construction is strictly better, as shown in Section~\ref{sec:comparison}. \section{Preliminaries}\label{sec:preliminaries} Throughout the paper, we will use the following notation and facts about $q$-binomial coefficients. For prime powers $q \ge 2$ and $0 \le n$ we use the $q$-numbers $[n]_q = (q^n-1)/(q-1) = \sum_{i=0}^{n-1} q^i$, the $q$-factorials $[n]_q! = \prod_{i=1}^{n}[i]_q$, and the $q$-binomial coefficients $\gaussmnum{v}{k}{q} = \frac{[v]_q!}{[k]_q![v-k]_q!} = \prod_{i=0}^{k-1} \frac{q^{v}-q^{i}}{q^{k}-q^{i}}$ for $0 \le k \le v$. 
An empty sum is defined to be $0$ and an empty product is $1$, so that $[0]_q=0$ and $[0]_q!=1$ in particular. Note that $[n]_q <q^n$. The set of $k$-subspaces in $\mathbb{F}_q^v$ is denoted as $\gaussmset{\mathbb{F}_q^v}{k}$ and its cardinality is $\gaussmnum{v}{k}{q}$. The sizes of $q$-binomial coefficients can be estimated with \begin{lemma}[{\cite[Lemma~4]{MR2451015}, cf.~\cite[Lemma~8]{ubtepub4049}}]\label{lem:qbinestim} For all prime powers $q \ge 2$ and $0<k<v$, we have \begin{align*} 1 < \gaussmnum{v}{k}{q} / q^{k(v-k)} < \left(\prod_{i=1}^{\infty} \left(1-q^{-i} \right) \right)^{-1} < 3.47 . \end{align*} \end{lemma} We use the following well known connection between subspaces and full-rank matrices. The reduced row echelon form of the matrix $A$ is denoted as $\RREF(A)$. Then, the bijection between subspaces and their canonical basis in reduced row echelon form, written as rows of a matrix, is \begin{align*} \tau : \gaussmset{\mathbb{F}_q^v}{k} \rightarrow \left\{A \in \mathbb{F}_q^{k \times v} \mid \rk A = k \land A = \RREF(A) \right\}, \end{align*} in particular, $U$ is the row-span of $\tau(U)$ for any subspace $U$. We omit the dependency of $\tau$ on $q,k,v$ as the context determines them and we extend the codomain of $\tau(\cdot)$ by $\tau^{-1}(A) = \tau^{-1}(\RREF(A))$ for any matrix $A$ of full row-rank. For a matrix $A$ in reduced row echelon form, $\pivot(A)$ is the binary vector with $\pivot(A)_i=1$ iff column $i$ is a pivot column in the matrix $A$. We extend the domain of $\pivot(\cdot)$ by $\pivot(B)=\pivot(\RREF(B))$ for any matrix $B$ and $\pivot(U)=\pivot(\tau(U))$ for any subspace $U$. The horizontal concatenation of matrices or vectors $A,B$ of compatible sizes and ambient fields is denoted as $A \mid B$. In addition to the subspace-distance \begin{align*} &\dists(U,W) \\=&\dim(U+W)-\dim(U \cap W) \\=&\dim(U)+\dim(W)-2\dim(U \cap W) \\=&2\dim(U+W)-\dim(U)-\dim(W) \\=&2\rk\sm{\tau(U)\\\tau(W)}-\dim(U)-\dim(W) \stepcounter{equation}\tag{\theequation}\label{math:dsrank} \end{align*} for two subspaces $U,W$ of a common vector space, we will also need the Hamming-distance $\disth(u,w)=\#\{i \mid u_i \ne w_i\}$ of two vectors $u,w$ in a common vector space, in particular the weight of $u$ defined as $\weight(u)=\disth(u,0)$, and the rank-distance $\distr(A,B)=\rk(A-B)$ of two matrices $A,B$ of compatible sizes and ambient fields. It is well known that the Hamming-distance of pivot vectors of subspaces lower bounds their subspace-distance. \begin{lemma}[{\cite[Lemma~2]{MR2589964}}]\label{lem:dhds} Let $U$ and $W$ be two subspaces of a common vector space, then \begin{align*} \disth(\pivot(U),\pivot(W)) \le \dists(U,W). \end{align*} \end{lemma} We will apply \begin{align}\label{math:dheq} \disth(u \mid u',v \mid v') = \disth(u,v)+\disth(u',v') \end{align} for any $q$-ary vectors of compatible lengths and the lower bound \begin{align}\label{math:dhlb} |\weight(u)-\weight(v)| \le \disth(u,v) \end{align} for two binary vectors $u,v$ of equal length, as any pivot vector is a binary vector. We will also make use of \begin{align*} \rk(X) \le &\rk(X \mid Y) \le \rk(X) + \rk(Y) \stepcounter{equation}\tag{\theequation}\label{math:rkineq} \\ &\rk(X + Y) \le \rk(X) + \rk(Y) \stepcounter{equation}\tag{\theequation}\label{math:subaddmat} \\ \rk(X) + \rk(Y) -n \le &\rk(X \!\cdot\! 
Y) \le \min\{\rk(X),\rk(Y)\} \stepcounter{equation}\tag{\theequation}\label{math:sylvesterandtrivmat} \end{align*} for any matrices $X,Y$ of compatible sizes and ambient fields, such that $n$ is the number of columns of $X$. A rank-metric code (RMC) is a subset of $\mathbb{F}_q^{a \times b}$ of cardinality $N$ such that the rank-distance of each pair of codewords is at least $d$. An RMC is called linear, if it is a subspace of $\mathbb{F}_q^{a \times b}$. These parameters are abbreviated as $(a \times b,N,d)_q$. If in addition the rank of each codeword is at most $u$, we augment the notation to $(a \times b,N,d;u)_q$ and refer to it as rank-restricted RMC (RRMC). The maximum size of an $(a \times b,N,d;u)_q$ RMC is denoted as $\Lambda(q,a,b,d,u)$. For a $(k \times n,N,d)_q$ RMC $\mathcal{R}$, the lifted RMC of $\mathcal{R}$ is defined as $\{\tau^{-1}(I \mid R) : R \in \mathcal{R} \}$. It is a $(k+n,N,2d;k)_q$ CDC. Delsarte~\cite{MR514618} and Gabidulin~\cite{MR791529} determined the maximum cardinality \begin{align*} M(q,a,b,d) = \left\lceil q^{\max\{a,b\}(\min\{a,b\}-d+1)} \right\rceil \end{align*} of RMCs for all parameters $q,a,b,d$, and $\min\{a,b\} \le u$ and gave constructions to build bound-achieving RMC codes, so-called maximum rank-distance (MRD) codes. The theory of Delsarte in~\cite{MR514618} allows to determine the rank-distribution in a linear MRD code. In his words, a linear MRD code in $\mathbb{F}_q^{a \times b}$ and minimum rank-distance $d$ is equivalent to an $(\min\{a,b\},\max\{a,b\},\min\{a,b\}-d+1,q)$-Singleton system and can be seen as a $(d-1)$-codesign of cardinality $q^{\max\{a,b\}(\min\{a,b\}-d+1)}$, which is a set of bilinear forms $X \subseteq \left\{f : \mathbb{F}_q^{\min\{a,b\}} \times \mathbb{F}_q^{\max\{a,b\}} \to \mathbb{F}_q \mid f \text{ bilinear form} \right\}$ such that $\rk(f-g)>d-1$ for all $f \ne g \in X$. \begin{theorem}[{\cite[Theorem~5.6]{MR514618}, cf.~\cite[Corollary~26]{de2015rank}}]\label{theo:rankdistributionMRD} The number of matrices with rank $r$ ($d \le r \le \min\{a,b\}$) in a linear MRD code in $\mathbb{F}_q^{a \times b}$ and minimum rank-distance $d$ is given by \begin{align*} &D(q,a,b,d,r) \\=&\gaussmnum{\min\{a,b\}}{r}{q} \cdot \sum_{i=0}^{r-d} (-1)^i q^{\binom{i}{2}} \gaussmnum{r}{i}{q} \left( q^{\max\{a,b\}(r-d+1-i)}-1 \right). \end{align*} \end{theorem} We abbreviate \begin{align*} \Delta(q,a,b,d,u) = 1+\sum_{i=d}^{\min\{u,a,b\}} D(q,a,b,d,i) \end{align*} which is the size of the largest subset of a linear MRD code in $\mathbb{F}_q^{a \times b}$ and minimum rank-distance $d$ such that the rank of each included matrix is at most $u$. We deliberately allow matrices with zero rows or zero columns and count sets containing only one such matrix with cardinality one and denote the all-zero matrix with $0$ and the identity matrix with $I$. Theorem~\ref{theo:rankdistributionMRD} and $\Delta(q,a,b,d,u)$ provide only a construction for $(a \times b,N,d;u)_q$ RMC. In fact, we have \begin{align*} \Delta(q,a,b,d,u) \le \Lambda(q,a,b,d,u) \le M(q,a,b,d) . \stepcounter{equation}\tag{\theequation}\label{math:ineqDeltaLambdaM} \end{align*} The number of matrices of a given rank in a finite vector space is well known. \begin{theorem}[{\cite{MR1580299},\cite[Theorem~2]{MR1533848}}]\label{theo:NumberMatricesRankRFiniteField} The number of matrices in $\mathbb{F}_q^{a \times b}$ of rank $r$ is \begin{align*} \prod_{i=0}^{r-1} \frac{(q^{a}-q^{i})(q^{b}-q^{i})}{q^{r}-q^{i}} = q^{\binom{r}{2}}(q-1)^r[r]_q!\gaussmnum{a}{r}{q}\gaussmnum{b}{r}{q} . 
\end{align*} \end{theorem} Due to the following lemma, we can without loss of generality restrict the parameters of a CDC to $2 \le d/2 \le k \le v/2$, see~\cite[Page~33f.]{ubtepub4049} or~\cite[Page~4822]{MR3988525} for an extensive discussion. \begin{lemma}[{\cite[Page~3582, Equation~4]{MR2451015}}]\label{lem:AqvdkEqAqvdvmk} For all $q$ and $2 \le d/2 \le k \le v$, we have \begin{align*} A_q(v,d;k) = A_q(v,d;v-k). \end{align*} \end{lemma} We will use the following lower bound by Cossidente and Pavese to compare the generalized linkage construction (Theorem~\ref{theo:generalized_linkage}) to the parallel linkage construction (Theorem~\ref{theo:linkageCHWX_trivialimprovement}) in Lemma~\ref{lem:comparison_linkageCHWK} and to the improved linkage construction (Theorem~\ref{theo:linkageHK}) in Lemma~\ref{lem:comparison_linkageHK}. \begin{theorem}[{\cite[Theorem~4.3]{MR3759908}}]\label{theo:lbAq844} \begin{align*} A_q(8,4;4) \ge q^{12}+q^2(q^2+1)^2(q^2+q+1)+1. \end{align*} \end{theorem} For CDCs having $d=2k$, i.e., the minimum subspace-distance is as large as possible, and the dimension of the ambient vector space is a multiple of the dimension of the codewords, Beutelspacher showed that there are bound-achieving codes. This setting is often referred to as \emph{spread}. \begin{theorem}[{\cite{MR404010}}]\label{theo:spread} For all $q$, $1 \le k$, and $k \mid v$, we have \begin{align*} A_q(v,2k;k)=[v]_q/[k]_q . \end{align*} \end{theorem} An easy to use yet strong upper bound for CDCs is the so-called \emph{Anticode bound}. \begin{theorem}[{\cite[Theorem~5.2]{MR1984479},~\cite[Theorem~1]{MR2810308}}]\label{theo:anticode} For all $q$ and $2 \le d/2 \le k \le v$, we have \begin{align*} A_q(v,d;k) \le \gaussmnum{v}{k-d/2+1}{q} / \gaussmnum{k}{k-d/2+1}{q} . \end{align*} \end{theorem} \section{Bounds for $\Lambda$}\label{sec:considerLambda} Here, we adapt the proof of the upper bound of the size of an MRD code to obtain an upper bound on $\Lambda(q,a,b,d,u)$. This particular upper bound is the Singleton bound applied to the metric space $(\mathbb{F}_q^{a \times b},\distr)$ via the puncturing operation $g : \mathbb{F}_q^{a \times b} \to \mathbb{F}_q^{a \times (b-1)}$ mapping a matrix to its first $b-1$ columns, i.e., it cuts the last column off. \begin{theorem}\label{theo:upperboundLambda} Let $2 \le d \le \min\{a,b\}$ and $0 \le u$ be integers. Then, we have $\Lambda(q,a,b,d,u) \le \Lambda(q,a,b-1,d-1,u)$ and $\Lambda(q,a,b,d,u) \le$ \begin{align*} \sum_{r=0}^{\min\{u,\min\{a,b\}-d+1\}} \!\!\!\!\! q^{\binom{r}{2}}(q\!-\!1)^r[r]_q!\!\gaussmnum{\min\{a,b\}-d+1}{r}{q}\!\!\gaussmnum{\max\{a,b\}}{r}{q} \!. \end{align*} \end{theorem} \begin{proof} Let $b \le a$ without loss of generality, otherwise transpose. Note that, for matrices $A$ and $B$ of compatible size and ambient field, $\rk(A)-\rk(g(A)) \in \{0,1\}$ (cf. Inequality~\eqref{math:rkineq}), which implies $\distr(A,B)-\distr(g(A),g(B)) \in \{0,1\}$, hence $\distr(A,B)-1 \le \distr(g(A),g(B))$ and $\rk(g(A)) \le \rk(A)$. So the puncturing operation applied to all elements of an $(a \times b,N,d;u)_q$ RMC yields an $(a \times (b-1),N',d-1;u)_q$ RMC and this is an injective map if $2 \le d$, i.e., $N'=N$. Applying the puncturing operation $d-1$ times yields an $(a \times (b-d+1),N,1;u)_q$ RMC $\mathcal{R}$. 
We have \begin{align*} \mathcal{R} \subseteq \left\{ A \in \mathbb{F}_q^{a \times (b-d+1)} \mid \rk A \le u \right\} \end{align*} and consequently \begin{align*} \Lambda(q,a,b,d,u) \le \#\left\{ A \in \mathbb{F}_q^{a \times (b-d+1)} \mid \rk A \le u \right\} . \end{align*} Then, Theorem~\ref{theo:NumberMatricesRankRFiniteField} allows to determine the cardinality of the right hand side and to complete the proof. \end{proof} In addition to the recursion provided in Theorem~\ref{theo:upperboundLambda}, i.e., $\Lambda(q,a,b+1,d+1,u) \le \Lambda(q,a,b,d,u)$, a similar argument shows $\Lambda(q,a+1,b,d+1,u) \le \Lambda(q,a,b,d,u)$ and trivially, we also have $\Lambda(q,a,b,d,u-1) \le \Lambda(q,a,b,d,u)$. The bound in Theorem~\ref{theo:upperboundLambda} is equivalent to $\Lambda(q,a,b,d,u) \le M(q,a,b,d)$, cf. Inequality~\eqref{math:ineqDeltaLambdaM}, iff $\min\{a,b\} < d+u$ and stronger iff $d+u \le \min\{a,b\}$. $\Lambda(q,a,b,d,u)$ is precisely the clique number of a graph with vertex set $\left\{A \in \mathbb{F}_q^{a \times b} \mid \rk A \le u \right\}$ and two vertices $A$ and $B$ share an edge iff $\distr(A,B) \ge d$. The number of vertices can be computed by Theorem~\ref{theo:NumberMatricesRankRFiniteField}. Using \texttt{GAP}~\cite{GAP4} and \texttt{Cliquer}~\cite{niskanen2003cliquer} we compute \begin{align*} &\Lambda(2,2,2,2,1)=3, &&\Lambda(2,3,2,2,1)=3, \\ &\Lambda(2,3,3,2,1)=7, &&\Lambda(2,3,3,2,2)=50, \\ &\Lambda(2,4,4,2,1)=15, &&\Lambda(2,4,4,4,2)=5, \\ &\Lambda(3,2,2,2,1)=4, &&\Lambda(3,3,3,2,1)=13, \text{ and}\\ &\Lambda(3,4,2,2,1)=4. \end{align*} The bounds of Inequality~\eqref{math:ineqDeltaLambdaM} are \begin{align*} 1 \le &\Lambda(q,3,3,2,1) \le q^6, \\ q(q^4+q^3+q^2-q-1) \le &\Lambda(q,3,3,2,2) \le q^6, \text{ and} \\ q(q^7\!+\!q^6\!+\!2q^5\!+\!q^4\!-\!q^2\!-\!2q\!-\!1) \le &\Lambda(q,4,4,2,2) \le q^{12}. \end{align*} Theorem~\ref{theo:upperboundLambda} implies \begin{align*} &\Lambda(q,3,3,2,1) \le q(q^3+q^2-1), \\ &\Lambda(q,3,3,2,2) \le q^6, \text{ and} \\ &\Lambda(q,4,4,2,2) \le q^3(q^7+q^6+q^5-q^4-q^3-q^2+1). \end{align*} We can prove basic structure results for RRMCs. \begin{lemma}\label{lem:structureRRMC} Let $\mathcal{C}$ be an $(a \times b,N,d;u)_q$ RMC with $2 \le N$, then $d-u \le \rk(A) \le u$ for each matrix $A$ in $\mathcal{C}$, there is at most one matrix $M$ in $\mathcal{C}$ with $\rk(M) < d/2$, and $d \le \distr(X,Y) \le 2u$ for all $X \ne Y \in \mathcal{C}$. \end{lemma} \begin{proof} By Inequality~\eqref{math:subaddmat}, we have $d \le \rk(A-B) \le \rk(A)+\rk(-B) \le \rk(A)+u \Rightarrow d-u \le \rk(A)$ for $A \ne B \in \mathcal{C}$. Assume there are two distinct matrices $M$ and $M'$ in $\mathcal{C}$ with $\rk(M) < d/2$ and $\rk(M') < d/2$, then again by Inequality~\eqref{math:subaddmat}, we have $d \le \rk(M-M') \le \rk(M)+\rk(-M') <d/2+d/2$, a contradiction. Inequality~\eqref{math:subaddmat} shows $\rk(X-Y) \le \rk(X)+\rk(-Y) \le 2u$. \end{proof} The next lemma is needed in Theorem~\ref{theo:lambdaequalities} to complete the case of ``$d=2u$''. \begin{lemma}\label{lem:rkouterproductgeneral} Let $A_1,B_1 \in \mathbb{F}_q^{a \times u}$ and $A_2,B_2 \in \mathbb{F}_q^{u \times b}$. Then $\rk(A_1A_2-B_1B_2) = 2u$ iff $\rk\sm{A_1 & B_1}=\rk\sm{A_2 \\ B_2}=2u$. \end{lemma} \begin{proof} We have $A_1A_2-B_1B_2 = \sm{A_1 & B_1}\sm{A_2 \\ -B_2} = M$. 
Inequality~\eqref{math:subaddmat} shows $\rk(M) \le \rk(A_1A_2)+\rk(B_1B_2) \le u+u$ and Inequality~\eqref{math:sylvesterandtrivmat} implies \begin{align*} &\rk\sm{A_1 & B_1}+\rk\sm{A_2 \\ -B_2}-2u \\\le& \rk(M) \le \min\left\{\rk\sm{A_1 & B_1},\rk\sm{A_2 \\ -B_2}\right\}. \end{align*} If $\rk\sm{A_1 & B_1}=\rk\sm{A_2 \\ B_2}=2u$, then this shows $2u \le \rk(M)$. If $\rk\sm{A_1 & B_1}<2u$ or $\rk\sm{A_2 \\ B_2}<2u$, then this shows $\rk(M) < 2u$. \end{proof} We can settle the value of $\Lambda(q,a,b,d,u)$ for many parameters, in particular for all parameters with $u=1$ and for all parameters with $d=2u$. \begin{theorem}\label{theo:lambdaequalities} \begin{enumerate} \item If $2u < d$ or if $\min\{a,b\} < d$, then $\Lambda(q,a,b,d,u) = 1$. \item If $\min\{a,b\} \le u$, then $\Lambda(q,a,b,d,u) = M(q,a,b,d)$. \item $\Lambda(q,a,b,1,u) = \sum_{r=0}^{\min\{u,a,b\}} q^{\binom{r}{2}}(q-1)^r[r]_q!\gaussmnum{a}{r}{q}\gaussmnum{b}{r}{q}$. \item $\Lambda(q,a,b,2u,u) = A_q(\min\{a,b\},2u;u)$. \end{enumerate} \end{theorem} \begin{proof} \begin{enumerate} \item If $2u < d$, then Lemma~\ref{lem:structureRRMC} rules out an $(a \times b,N,d;u)_q$ RMC with $2 \le N$, and $\mathcal{C}=\{0\}$ is such an RMC of cardinality one, hence of maximum cardinality. If $\min\{a,b\} < d$, then $d \le \distr(A,B) = \rk(A-B) \le \min\{a,b\}$ is impossible for two distinct codewords, so again $\mathcal{C}=\{0\}$ is of maximum cardinality. \item If $\min\{a,b\} \le u$, then the rank restriction imposed by $u$ is vacuous, so any $(a \times b,N,d;u)_q$ RMC is an $(a \times b,N,d)_q$ RMC and vice versa. \item If $d=1$, then the unique maximum cardinality code consists of all matrices of rank at most $u$, so that Theorem~\ref{theo:NumberMatricesRankRFiniteField} completes the proof. \item The statement is true for $\min\{a,b\} < 2u$ by (1), so we assume $2u \le \min\{a,b\}$. Let $\mathcal{C}$ be an $(a \times b,N,2u;u)_q$ RMC of size at least two. Lemma~\ref{lem:structureRRMC} implies that any matrix in $\mathcal{C}$ has rank $u$ and $\distr(A,B)=2u$ for $A \ne B \in \mathcal{C}$. Any matrix of rank $u$ in $\mathcal{C}$ is a product $AB$ for $A \in \mathbb{F}_q^{a \times u}$ and $B \in \mathbb{F}_q^{u \times b}$, both of full rank $u$. Hence, by Lemma~\ref{lem:rkouterproductgeneral}, we have $\distr(A_1B_1,A_2B_2)=2u$ iff $\rk\sm{A_1 & A_2}=\rk\sm{B_1 \\ B_2}=2u$. Let $b \le a$ without loss of generality and choose $Y \subseteq \mathbb{F}_q^{u \times b}$ consisting of full rank matrices such that $\rk\sm{B_1 \\ B_2}=2u$ for $B_1 \ne B_2 \in Y$, i.e., $\tau^{-1}(Y)$ is a $(b,\#Y,2u;u)_q$ CDC, since $2u \le \dists(\tau^{-1}(B_1),\tau^{-1}(B_2)) = 2\left(\rk\sm{B_1\\B_2}-u\right) \Leftrightarrow 2u \le \rk\sm{B_1\\B_2}$ by Equation~\eqref{math:dsrank}. Hence, the maximum cardinality of $Y$ is $A_q(b,2u;u)$. Choose $X \subseteq \mathbb{F}_q^{a \times u}$ consisting of full rank matrices such that $\rk\sm{A_1 & A_2}=2u$ for $A_1 \ne A_2 \in X$; as before, $\tau^{-1}(X^T)$ is an $(a,\#X,2u;u)_q$ CDC, where $X^T=\{A^T \mid A \in X\}$. We choose $X$ of size $A_q(b,2u;u) \le A_q(a,2u;u)$. Each bijection $f: X \to Y$ then gives an $(a \times b,N,2u;u)_q$ RMC of maximum cardinality $A_q(b,2u;u)$, namely $\{xf(x) \mid x \in X\}$. \end{enumerate} \end{proof} Note that the proof of (4) in Theorem~\ref{theo:lambdaequalities} first shows that we have a constant-rank RMC in the case of $d=2u$, so that, e.g., \cite[Theorem~2]{MR2798987} could also complete the proof.
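The values of $\Lambda$ listed above were computed with \texttt{GAP}~\cite{GAP4} and \texttt{Cliquer}~\cite{niskanen2003cliquer}. As an independent illustration of the clique formulation, and not as a substitute for those computations, the following small Python sketch (all helper names are ad hoc) verifies the first listed value, $\Lambda(2,2,2,2,1)=3$, by exhaustive search; the result also agrees with~(4) of Theorem~\ref{theo:lambdaequalities} together with Theorem~\ref{theo:spread}, since $A_2(2,2;1)=2+1=3$.
\begin{verbatim}
# Exhaustive check of Lambda(2,2,2,2,1) = 3 via the clique formulation:
# vertices are the matrices in GF(2)^{2x2} of rank at most u = 1, and two
# vertices are adjacent iff their difference has rank at least d = 2.
from itertools import combinations, product

q, a, b, d, u = 2, 2, 2, 2, 1

def rank_gf2(rows):
    # Gaussian elimination over GF(2); rows is a sequence of 0/1 sequences.
    rows = [list(r) for r in rows]
    rk = 0
    for col in range(len(rows[0])):
        piv = next((i for i in range(rk, len(rows)) if rows[i][col]), None)
        if piv is None:
            continue
        rows[rk], rows[piv] = rows[piv], rows[rk]
        for i in range(len(rows)):
            if i != rk and rows[i][col]:
                rows[i] = [x ^ y for x, y in zip(rows[i], rows[rk])]
        rk += 1
    return rk

def diff(A, B):
    # Entrywise difference over GF(2), i.e., XOR.
    return tuple(tuple(x ^ y for x, y in zip(ra, rb)) for ra, rb in zip(A, B))

vertices = [M for M in product(product(range(q), repeat=b), repeat=a)
            if rank_gf2(M) <= u]

def max_clique_size():
    # Search for cliques from the largest conceivable size downwards.
    for size in range(len(vertices), 0, -1):
        for S in combinations(vertices, size):
            if all(rank_gf2(diff(X, Y)) >= d for X, Y in combinations(S, 2)):
                return size
    return 0

print(max_clique_size())  # prints 3, matching Lambda(2,2,2,2,1)
\end{verbatim}
For the larger parameter sets listed above, an off-the-shelf clique solver such as \texttt{Cliquer} is of course preferable to this naive search.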
Theorem~\ref{theo:lambdaequalities} shows that Inequality~\eqref{math:ineqDeltaLambdaM} and Theorem~\ref{theo:upperboundLambda} can be arbitrarily bad; in fact, we have \begin{align*} &\Delta(q,a,b,2,1) = 1 \in \Theta(1), \\ &\Lambda(q,a,b,2,1) = [\min\{a,b\}]_q \in \Theta(q^{\min\{a,b\}-1}), \\ &\Lambda(q,a,b,2,1) \le 1+(q-1)[\min\{a,b\}-1]_q[\max\{a,b\}]_q \\ &\quad\quad\quad\quad\quad\quad\,\,\,\in \Theta(q^{a+b-2}), \text{ and} \\ &M(q,a,b,2) = q^{\max\{a,b\}(\min\{a,b\}-1)} \in \Theta(q^{ab-\max\{a,b\}}), \end{align*} for $2 \le \min\{a,b\}$, using the Landau $\Theta$-notation and Lemma~\ref{lem:qbinestim}. \begin{corollary}\label{cor:lowerboundlambda} For $1 \le d$ and $0 \le i \le 2u-d$, we have \begin{align*} \Lambda(q,a,b,d,u) &\ge \Lambda(q,a+i,b+2u-d-i,2u,u) \\&= A_q(\min\{a+i,b+2u-d-i\},2u;u) \end{align*} which yields the strongest bound for \begin{align*} i=\max\{0,\min\{2u-d,\lfloor (b-a-d)/2 \rfloor +u\}\}. \end{align*} \end{corollary} \begin{proof} Let $\mathcal{C}$ be an $((a+i) \times (b+2u-d-i),N,2u;u)_q$ RMC. We apply the puncturing argument of Theorem~\ref{theo:upperboundLambda} $i$ times to the rows and $2u-d-i$ times to the columns of any matrix in $\mathcal{C}$ to obtain an $(a \times b,N,d;u)_q$ RMC $\mathcal{C}'$. Choosing $\mathcal{C}$ of maximum size shows the first inequality. Next, (4) in Theorem~\ref{theo:lambdaequalities} shows the equality. The optimal choice of $i$ follows from $A_q(v,d;k) \le A_q(v+1,d;k)$ and $a+i=b+2u-d-i \Leftrightarrow i=(b-a-d)/2+u$. \end{proof} Corollary~\ref{cor:lowerboundlambda} is sometimes stronger than Inequality~\eqref{math:ineqDeltaLambdaM}, e.g., $\Lambda(q,5,5,3,2) \ge 1$ by Inequality~\eqref{math:ineqDeltaLambdaM} but $\Lambda(q,5,5,3,2) \ge A_q(5,4;2) = q^3+1$ by Corollary~\ref{cor:lowerboundlambda} with $i=0$, where the equality follows from~\cite{MR404010}. We present another lower bound on $\Lambda(q,a,b,d,u)$ involving CDCs and therefore need the following lemma. \begin{lemma}\label{lem:generallowerboundofdrbyds} Let $A_1,B_1 \in \mathbb{F}_q^{a \times u}$ and $A_2,B_2 \in \mathbb{F}_q^{u \times b}$ with $\rk(A_1)=\rk(A_2)=\rk(B_1)=\rk(B_2)=u$. Then \begin{align*} &\dists(\tau^{-1}(A_1^T),\tau^{-1}(B_1^T))+\dists(\tau^{-1}(A_2),\tau^{-1}(B_2)) \\\le&2\distr(A_1A_2,B_1B_2). \end{align*} \end{lemma} \begin{proof} Using Equation~\eqref{math:dsrank}, we have \begin{align*} &d_1 =\dists(\tau^{-1}(A_1^T),\tau^{-1}(B_1^T)) =2\left(\rk\sm{A_1^T\\B_1^T}-u\right) \\=&2\left(\rk\sm{A_1 & B_1}-u\right) \Leftrightarrow \rk\sm{A_1 & B_1}=d_1/2+u \end{align*} and \begin{align*} &d_2 =\dists(\tau^{-1}(A_2),\tau^{-1}(B_2)) =2\left(\rk\sm{A_2\\B_2}-u\right) \\=&2\left(\rk\sm{A_2\\-B_2}-u\right) \Leftrightarrow \rk\sm{A_2\\-B_2}=d_2/2+u. \end{align*} Then Inequality~\eqref{math:sylvesterandtrivmat} and \begin{align*} &2\distr(A_1A_2,B_1B_2) =2\rk\left(\sm{A_1 & B_1}\sm{A_2 \\ -B_2}\right) \\\ge&2\left(\rk\left(\sm{A_1 & B_1}\right)+\rk\left(\sm{A_2 \\ -B_2}\right)-2u\right) \\=&2\left(d_1/2+u+d_2/2+u-2u\right) =d_1+d_2 \end{align*} complete the proof. \end{proof} \begin{theorem}\label{theo:LambdaLB} Let $2d \le d_1+d_2$. Then $\Lambda(q,a,b,d,u) \ge \min\{A_q(a,d_1;u),A_q(b,d_2;u)\}$. \end{theorem} \begin{proof} Choose $\mathcal{Y}$ as a $(b,N,d_1;u)_q$ CDC and $\mathcal{X}$ as an $(a,N,d_2;u)_q$ CDC with $2d \le d_1+d_2$. Let $Y \subseteq \mathbb{F}_q^{u \times b}$ be a set of full-rank matrices such that for each $U \in \mathcal{Y}$ there is exactly one $y \in Y$ with $\tau^{-1}(y)=U$.
Let $X \subseteq \mathbb{F}_q^{a \times u}$ be a set of full-rank matrices such that for each $W \in \mathcal{X}$ there is exactly one $x \in X$ with $\tau^{-1}(x^T)=W$. In particular, $\#Y=\#X=N$. Each bijection $f: X \to Y$ then gives an $(a \times b,N,d;u)_q$ RMC of cardinality $N$, namely $\mathcal{C}=\{xf(x) \mid x \in X\}$, because for $A_1A_2 \ne B_1B_2 \in \mathcal{C}$ such that $A_1,B_1 \in X$ and $A_2,B_2 \in Y$, Lemma~\ref{lem:generallowerboundofdrbyds} implies \begin{align*} &2d \le d_1+d_2 \le \dists(\tau^{-1}(A_1^T),\tau^{-1}(B_1^T)) \\+&\dists(\tau^{-1}(A_2),\tau^{-1}(B_2)) \le 2\distr(A_1A_2,B_1B_2) \end{align*} and each matrix $xf(x) \in \mathcal{C}$ has rank exactly $u$. \end{proof} Since the code constructed in the proof of Theorem~\ref{theo:LambdaLB} is in fact a constant-rank RMC, i.e., a more special object than an RRMC, \cite[Proposition~3]{MR2798987} could also be applied. Note that Theorem~\ref{theo:LambdaLB} implies the lower bound of the equality in~(4) of Theorem~\ref{theo:lambdaequalities}, i.e., if $d=2u=d_1=d_2$, then \begin{align*} &\Lambda(q,a,b,2u,u) \\\ge& \min\{A_q(a,2u;u),A_q(b,2u;u)\} = A_q(\min\{a,b\},2u;u). \end{align*} \section{Previous linkage constructions}\label{sec:prevlink} All constructions in this section except Theorem~\ref{theo:linkageK} can be proved via Lemma~\ref{lem:main}, which then yields them for a superset of the parameters covered by the original proofs. Note that we also allow $\{0\}$ as an RMC. The original linkage construction was independently discovered by Gluesing-Luerssen and Troha in~\cite{MR3543532} and by Silberstein and Horlemann-Trautmann in~\cite{MR3367813}. Special cases were already used by Gluesing-Luerssen, Morrison, and Troha in~\cite[Theorem~5.1]{MR3348437} for cyclic orbit codes and by Etzion and Vardy in~\cite[Theorem~11]{MR2810308} for spreads. \begin{theorem}[{\cite[Theorem~2.3]{MR3543532} and~\cite[Theorem~37 and Corollary~39]{MR3367813}}]\label{theo:linkageGLTSHT} Let $d/2,k,r,s$ be integers with $2 \le d/2 \le k$, $k \le r$, and $k \le s$. Let $\mathcal{A}$ be an $(r,\#\mathcal{A},d;k)_q$ CDC and $\mathcal{B}$ be an $(s,\#\mathcal{B},d;k)_q$ CDC. Let $\mathcal{M}$ be a $(k \times s,\#\mathcal{M},d/2)_q$ RMC. Then \begin{align*} &\{\tau^{-1}(\tau(A) \mid M) : A \in \mathcal{A}, M \in \mathcal{M}\} \\\cup&\{\tau^{-1}(0 \mid \tau(B)) : B \in \mathcal{B}\} \end{align*} is an $(r+s,\#\mathcal{A} \cdot \#\mathcal{M} + \#\mathcal{B},d;k)_q$ CDC. In particular, \begin{align*} A_q(r+s,d;k) \ge A_q(r,d;k) \cdot M(q,k,s,d/2) + A_q(s,d;k). \end{align*} \end{theorem} In~\cite{MR3705116}, Heinlein and Kurz combined Theorem~\ref{theo:linkageGLTSHT} with Lemma~\ref{lem:dhds} to obtain the following so-called \emph{improved linkage construction}. \begin{theorem}[{\cite[Theorem~18]{MR3705116}, cf.~\cite[Theorem~136]{ubtepub4049}}]\label{theo:linkageHK} Let $d/2,k,r,s,t$ be integers with $2 \le d/2 \le k \le (r+s)/2$, $k \le r$, $k \le s+t$, and $0 \le t \le k-d/2$. Let $\mathcal{A}$ be an $(r,\#\mathcal{A},d;k)_q$ CDC and $\mathcal{B}$ be an $(s+t,\#\mathcal{B},d;k)_q$ CDC. Let $\mathcal{M}$ be a $(k \times s,\#\mathcal{M},d/2)_q$ RMC. Then \begin{align*} &\{\tau^{-1}(\tau(A) \mid M) : A \in \mathcal{A}, M \in \mathcal{M}\} \\\cup&\{\tau^{-1}(0 \mid \tau(B)) : B \in \mathcal{B}\} \end{align*} is an $(r+s,\#\mathcal{A} \cdot \#\mathcal{M} + \#\mathcal{B},d;k)_q$ CDC. In particular, \begin{align*} &A_q(r+s,d;k) \\\ge& A_q(r,d;k) \cdot M(q,k,s,d/2) + A_q(s+k-d/2,d;k).
\end{align*} \end{theorem} Theorem~\ref{theo:linkageHK} was improved again by Kurz in~\cite{kurz2019note}, via an extension applicable to any subcode of the form $\{\tau^{-1}(\tau(A) \mid M) : A \in \mathcal{A}, M \in \mathcal{M}\}$. Using the notation in~\cite{kurz2019note}, for $0 \le w \le v$ let $B_q(v,w,d;k)$ be the maximum cardinality of a $(v,\#\mathcal{B},d;k)_q$ CDC $\mathcal{B}$ such that there is a $w$-subspace $W$ with $\dim(W \cap B) \ge d/2$ for each $B \in \mathcal{B}$. Unfortunately, the quantities $B_q(v,w,d;k)$ are not known in general, as they generalize the numbers $A_q(v,d;k)$, which are not well understood either, but~\cite{kurz2019note} contains a lower bound: \begin{theorem}[{\cite[Theorem~3.2, Proposition~4.1, and Theorem~4.2]{kurz2019note}}]\label{theo:linkageK} Let $d/2,k,r,s$ be integers with $2 \le d/2 \le k \le (r+s)/2$, $k \le r$, and $d/2 \le s$. \begin{enumerate} \item Let $\mathcal{A}$ be an $(r,\#\mathcal{A},d;k)_q$ CDC. Let $\mathcal{M}$ be a $(k \times s,\#\mathcal{M},d/2)_q$ RMC. Then the $s$-subspace $W=\tau^{-1}(0 \mid I)$ intersects each codeword in $\{\tau^{-1}(\tau(A) \mid M) : A \in \mathcal{A}, M \in \mathcal{M}\}$ trivially. Let $\mathcal{B}$ be an $(r+s,\#\mathcal{B},d;k)_q$ CDC such that $\dim(W \cap B) \ge d/2$ for each $B \in \mathcal{B}$. Then \begin{align*} \{\tau^{-1}(\tau(A) \mid M) : A \in \mathcal{A}, M \in \mathcal{M}\} \cup \mathcal{B} \end{align*} is an $(r+s,\#\mathcal{A} \cdot \#\mathcal{M} + \#\mathcal{B},d;k)_q$ CDC. \item \begin{align*} &A_q(r+s,d;k) \\\ge& A_q(r,d;k) \cdot M(q,k,s,d/2) + B_q(r+s,s,d;k) \end{align*} \item For $4 \le k+1 \le w+2 \le v$: \begin{align*} B_q(v,w,2k-2;k) \ge A_q(w,2k-4;k-1) \end{align*} \item For $3 \le k \le s+1$: \begin{align*} &A_q(r+s,2k-2;k) \\\ge& A_q(r,2k\!-\!2;k) \!\cdot\! M(q,k,s,k\!-\!1) \!+\! A_q(s,2k\!-\!4;k\!-\!1) \end{align*} \end{enumerate} \end{theorem} Xu and Chen pursued a different direction in~\cite{MR3849557}, as they incorporate matrices with lower \emph{and upper} bounded ranks in a construction of CDCs. \begin{theorem}[{\cite[Theorem~3]{MR3849557}}]\label{theo:linkageXC} Let $d/2,k$ be integers with $2 \le d/2 \le k$. Let $\mathcal{M}$ be a $(k \times k,\#\mathcal{M},d/2)_q$ RMC and let $\mathcal{R}$ be a $(k \times k,\#\mathcal{R},d/2;k-d/2)_q$ RMC. Then \begin{align*} \{\tau^{-1}(I \mid M) : M \in \mathcal{M}\} \cup \{\tau^{-1}(R \mid I) : R \in \mathcal{R}\} \end{align*} is a $(2k,\#\mathcal{M} + \#\mathcal{R},d;k)_q$ CDC. In particular, \begin{align*} A_q(2k,d;k) \ge M(q,k,k,d/2) + \Lambda(q,k,k,d/2,k-d/2) \end{align*} \end{theorem} Finally, Theorem~\ref{theo:linkageXC} was improved by Chen, He, Weng, and Xu in~\cite{chen2019new} by allowing the dimensions of the ambient spaces to vary. This is the so-called \emph{parallel linkage construction}. \begin{theorem}[{\cite[Theorem~3.1]{chen2019new}}]\label{theo:linkageCHWX} Let $d/2,k,n$ be integers with $2 \le d/2 \le k$ and $0 \le n$. Let $\mathcal{A}$ be a $(k+n,\#\mathcal{A},d;k)_q$ CDC such that each $A \in \mathcal{A}$ is of the form $\tau(A)=(I \mid A')$, i.e., it is a lifted RMC, and let $\mathcal{B}$ be an $(n+k,\#\mathcal{B},d;k)_q$ CDC. Let $\mathcal{M}$ be a $(k \times k,\#\mathcal{M},d/2)_q$ RMC and $\mathcal{R}$ be a $(k \times k,\#\mathcal{R},d/2;k-d/2)_q$ RMC.
Then \begin{align*} &\{\tau^{-1}(\tau(A) \mid M) : A \in \mathcal{A}, M \in \mathcal{M}\} \\\cup&\{\tau^{-1}(R \mid \tau(B)) : R \in \mathcal{R}, B \in \mathcal{B}\} \end{align*} is an $(n+2k,\#\mathcal{A} \cdot \#\mathcal{M} + \#\mathcal{R} \cdot \#\mathcal{B},d;k)_q$ CDC. In particular, \begin{align*} &A_q(n+2k,d;k) \\\ge& M(q,k,n,d/2) \cdot M(q,k,k,d/2) \\+& A_q(n+k,d;k) \cdot \Lambda(q,k,k,d/2,k-d/2). \end{align*} \end{theorem} Of course, the concatenation of an RMC with an RMC is again an RMC. To be more precise, if $\mathcal{M}$ is an $(a \times b,\#\mathcal{M},d)_q$ RMC and $\mathcal{N}$ is an $(a \times c,\#\mathcal{N},d)_q$ RMC, then $\{(M \mid N) : M \in \mathcal{M}, N \in \mathcal{N}\}$ is an $(a \times (b+c),\#\mathcal{M}\cdot\#\mathcal{N},d)_q$ RMC, since Inequality~\eqref{math:rkineq} implies \begin{align*} &\rk((M \mid N) - (M' \mid N')) = \rk(M-M' \mid N-N') \\\ge& \max\{\rk(M-M'),\rk(N-N')\} \ge d \end{align*} for $M,M' \in \mathcal{M}$ and $N,N' \in \mathcal{N}$ with $(M \mid N) \ne (M' \mid N')$. Hence, we can improve Theorem~\ref{theo:linkageCHWX} to the following construction. \begin{theorem}\label{theo:linkageCHWX_trivialimprovement} Let $d/2,k,s$ be integers with $2 \le d/2 \le k$ and $k \le s$. Let $\mathcal{A}$ be a $(k+s,\#\mathcal{A},d;k)_q$ CDC such that each $A \in \mathcal{A}$ is of the form $\tau(A)=(I \mid A')$, i.e., it is a lifted RMC, and $\mathcal{B}$ be an $(s,\#\mathcal{B},d;k)_q$ CDC. Let $\mathcal{R}$ be a $(k \times k,\#\mathcal{R},d/2;k-d/2)_q$ RMC. Then \begin{align*} \mathcal{A} \cup \{\tau^{-1}(R \mid \tau(B)) : R \in \mathcal{R}, B \in \mathcal{B}\} \end{align*} is a $(k+s,\#\mathcal{A} + \#\mathcal{R} \cdot \#\mathcal{B},d;k)_q$ CDC. In particular, \begin{align*} &A_q(k+s,d;k) \\\ge& M(q,k,s,d/2) + A_q(s,d;k) \cdot \Lambda(q,k,k,d/2,k-d/2). \end{align*} \end{theorem} \begin{lemma} The bound in Theorem~\ref{theo:linkageCHWX_trivialimprovement} is equivalent to the bound in Theorem~\ref{theo:linkageCHWX} iff $k \le n$ or $n=0$ and stronger iff $0 < n < k$. \end{lemma} \begin{proof} Note that $s=n+k$ and $0 \le n \Leftrightarrow k \le s$. We have \begin{align*} & \text{bound in Theorem~\ref{theo:linkageCHWX_trivialimprovement} } \ge \text{ bound in Theorem~\ref{theo:linkageCHWX}} \\ \Leftrightarrow & M(q,k,s,d/2) \ge M(q,k,n,d/2) \cdot M(q,k,k,d/2) \\ \Leftrightarrow & q^{s(k-d/2+1)} \ge \left\lceil q^{\max\{n,k\}(\min\{n,k\}-d/2+1)} \right\rceil \cdot q^{k(k-d/2+1)} \\ \Leftrightarrow & q^{n(k-d/2+1)} \ge \left\lceil q^{\max\{n,k\}(\min\{n,k\}-d/2+1)} \right\rceil. \end{align*} If $k \le n$, the exponent $\max\{n,k\}(\min\{n,k\}-d/2+1) = n(k-d/2+1)$ is at least one and both sides of the inequality coincide. If $0 \le n < \min\{d/2,k\}$, the exponent $\max\{n,k\} (\min\{n,k\}-d/2+1) = k(n-d/2+1)$ is at most zero, so the right hand side of the inequality is one, while the left hand side is one iff $n=0$ and else greater than one. If $d/2 \le n < k$, the exponent $\max\{n,k\}(\min\{n,k\}-d/2+1) = k(n-d/2+1)$ is at least $k$, so we continue: \begin{align*} \Leftrightarrow & q^{n(k-d/2+1)} \ge q^{k(n-d/2+1)} \\ \Leftrightarrow & n(k-d/2+1) \ge k(n-d/2+1) \\ \Leftrightarrow & n(-d/2+1) \ge k(-d/2+1) \\ \Leftrightarrow & (n-k)(-d/2+1) \ge 0. \end{align*} Due to $n < k$ and $2 \le d/2$, the left hand side is at least one, proving the statement. 
\end{proof} If $\mathcal{A}$ in Theorem~\ref{theo:linkageCHWX_trivialimprovement}, $\mathcal{A}$ and $\mathcal{M}$ in Theorem~\ref{theo:linkageCHWX} or $\mathcal{M}$ in Theorem~\ref{theo:linkageXC} are chosen to be of maximum size, respectively, then they give rise to so-called lifted maximum rank-distance (LMRD) codes and any superset of this particular subcode is upper bounded by more elaborate bounds first proved by Etzion and Silberstein in~\cite[Theorems~10 and~11]{MR3015712} and improved by the author in~\cite[Theorem~1]{MR3988525}, cf.~\cite[Proposition~99]{ubtepub4049}. To overcome this difficulty, we do not restrict this part of the construction to lifted maximum rank-distance codes in Theorem~\ref{theo:generalized_linkage}. \section{The generalized linkage construction}\label{sec:genlink} \begin{lemma}\label{lem:main} Let $k,r,s$ be positive integers and $A,C \in \mathbb{F}_q^{k \times r}$ and $B,D \in \mathbb{F}_q^{k \times s}$ be matrices such that $\rk(A \mid B) = \rk(C \mid D) = k$. If \begin{enumerate} \item $\rk A = \rk C = k$ and $d \le \dists(\tau^{-1}(A),\tau^{-1}(C))$, \item $A = C$ and $\rk A = k$ and $d/2 \le \distr(B,D)$ or \item $d/2 \le |\rk A - \rk C|$, \end{enumerate} then \begin{align*} d \le& \dists(\tau^{-1}(A \mid B),\tau^{-1}(C \mid D)) \\=& \dists(\tau^{-1}(B \mid A),\tau^{-1}(D \mid C)) . \end{align*} \end{lemma} \begin{proof} For two subspaces $U$ and $W$ of dimension $k$ in a common vector space, we have with Equation~\eqref{math:dsrank} \begin{align*} d \le \dists(U,W) \Leftrightarrow \rk\sm{\tau(U)\\\tau(W)} \ge k+d/2. \end{align*} Since $\rk\sm{M&N\\O&P}=\rk\sm{N&M\\P&O}=\rk\sm{\RREF(N&M)\\\RREF(P&O)}$ for any matrices $M,N,O,P$ with compatible sizes and ambient fields, we get \begin{align*} \dists(\tau^{-1}(A \mid B),\tau^{-1}(C \mid D)) = \dists(\tau^{-1}(B \mid A),\tau^{-1}(D \mid C)). \end{align*} The statement in question is $\rk\sm{A&B\\C&D} \ge k+d/2$. \begin{enumerate} \item Using Inequality~\eqref{math:rkineq}, we obtain $\rk\sm{A&B\\C&D} \ge \rk\sm{A\\C}$ and $\dists(\tau^{-1}(A),\tau^{-1}(C)) \ge d$ is equivalent to $\rk\sm{\RREF(A)\\\RREF(C)} = \rk\sm{A\\C} \ge k+d/2$. \item Since $A=C$ of full rank, we have $\rk\sm{A&B\\C&D}=\rk\sm{A&B\\A&D}=\rk\sm{A&B\\0&D-B}=\rk\sm{I&0\\0&D-B}=k+\rk(D-B)$ and the definition of the rank-distance concludes this case. \item Here, we use Lemma~\ref{lem:dhds}, Equality~\eqref{math:dheq}, Inequality~\eqref{math:dhlb}, and let $[v]$ ($\{v\}$) denote the first $r$ (last $s$) entries in a vector $v$ so that $v=([v] \mid \{v\})$, $\weight([\pivot(A \mid B)]) = \rk A$, and $\weight([\pivot(C \mid D)]) = \rk C$. Hence, \begin{align*} &\dists(\tau^{-1}(A \mid B),\tau^{-1}(C \mid D)) \\\ge &\disth(\pivot(\tau^{-1}(A \mid B)),\pivot(\tau^{-1}(C \mid D))) \\= &\disth(\pivot(A \mid B),\pivot(C \mid D)) \\= &\disth([\pivot(A \mid B)],[\pivot(C \mid D)]) \\+&\disth(\{\pivot(A \mid B)\},\{\pivot(C \mid D)\}) \\\ge &|\weight([\pivot(A \mid B)])-\weight([\pivot(C \mid D)])| \\+&|\weight(\{\pivot(A \mid B)\})-\weight(\{\pivot(C \mid D)\})| \\= &|\rk A - \rk C|+|(k-\rk A) - (k-\rk C)| \\= &2|\rk A - \rk C| \end{align*} shows that $|\rk A - \rk C| \ge d/2$ implies the minimum distance. \end{enumerate} \end{proof} Lemma~\ref{lem:main} can of course be generalized to at least two blocks and this is used in Theorem~\ref{theo:generalized_linkage_multipleblocks}. 
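Before stating Lemma~\ref{lem:main_multipleblocks}, we record a small computational sanity check of Lemma~\ref{lem:main}. The following Python sketch (an illustration only; the helper names are ad hoc) assumes Equation~\eqref{math:dsrank}, i.e., that the subspace distance of the row spaces of two full-rank $k \times n$ matrices equals twice the rank of the stacked matrix minus $2k$, and verifies on random matrices over $\mathbb{F}_2$ the equality behind case~(2): if $A=C$ has full rank $k$, then $\dists(\tau^{-1}(A \mid B),\tau^{-1}(C \mid D)) = 2\distr(B,D)$.
\begin{verbatim}
# Spot-check of the equality used in case (2) of Lemma lem:main over GF(2):
# if A = C has full rank k, then 2*(rank([A B; A D]) - k) = 2*rank(B - D).
import random

k, r, s, trials = 2, 2, 3, 500

def rank_gf2(rows):
    # Gaussian elimination over GF(2); rows is a list of 0/1 lists.
    rows = [row[:] for row in rows]
    rk = 0
    for col in range(len(rows[0])):
        piv = next((i for i in range(rk, len(rows)) if rows[i][col]), None)
        if piv is None:
            continue
        rows[rk], rows[piv] = rows[piv], rows[rk]
        for i in range(len(rows)):
            if i != rk and rows[i][col]:
                rows[i] = [x ^ y for x, y in zip(rows[i], rows[rk])]
        rk += 1
    return rk

def rand_mat(m, n):
    return [[random.randint(0, 1) for _ in range(n)] for _ in range(m)]

random.seed(0)
checked = 0
while checked < trials:
    A = rand_mat(k, r)
    if rank_gf2(A) < k:              # case (2) requires A = C of full rank k
        continue
    B, D = rand_mat(k, s), rand_mat(k, s)
    stacked = [ra + rb for ra, rb in zip(A, B)] + [ra + rd for ra, rd in zip(A, D)]
    ds = 2 * (rank_gf2(stacked) - k)  # subspace distance via Equation dsrank
    dr = rank_gf2([[x ^ y for x, y in zip(rb, rd)] for rb, rd in zip(B, D)])
    assert ds == 2 * dr
    checked += 1
print("case (2) of Lemma lem:main confirmed on", trials, "random samples")
\end{verbatim}
Cases~(1) and~(3) can be checked analogously by sampling pairs of codewords with distinct first blocks.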
\begin{lemma}\label{lem:main_multipleblocks} Let $k,m,n_i$ be positive integers and $A_i,B_i \in \mathbb{F}_q^{k \times n_i}$ be matrices such that $\rk(A_1 \mid \ldots \mid A_m) = \rk(B_1 \mid \ldots \mid B_m) = k$ ($1 \le i \le m$). If \begin{enumerate} \item $\rk A_i = \rk B_i = k$ and $d \le \dists(\tau^{-1}(A_i),\tau^{-1}(B_i))$, \item $A_i = B_i$ and $\rk A_i = k$ and $d/2 \le \distr((A_1 \mid \ldots \mid A_{i-1} \mid A_{i+1} \mid \ldots \mid A_m),(B_1 \mid \ldots \mid B_{i-1} \mid B_{i+1} \mid \ldots \mid B_m))$ or \item $d/2 \le |\rk A_i - \rk B_i|$, \end{enumerate} for some $i \in \{1,\ldots,m\}$, then \begin{align*} d \le& \dists(\tau^{-1}(A_{1} \mid \ldots \mid A_{m}),\tau^{-1}(B_{1} \mid \ldots \mid B_{m})) . \end{align*} \end{lemma} \begin{proof} Since \begin{align*} \rk\sm{ A_{\pi(1)} \mid \ldots \mid A_{\pi(m)} \\ B_{\pi(1)} \mid \ldots \mid B_{\pi(m)} } = \rk\sm{ A_{1} \mid \ldots \mid A_{m} \\ B_{1} \mid \ldots \mid B_{m} } \end{align*} for any permutation $\pi$ on $\{1,\ldots,m\}$ we can by Equation~\eqref{math:dsrank} assume that $i=1$. Then all three statements follow by the corresponding three statements in Lemma~\ref{lem:main} using $r = n_1$, $s = n_2 + \ldots + n_m$, $A = A_1$, $B = (A_{2} \mid \ldots \mid A_{m})$, $C = B_1$, and $D = (B_{2} \mid \ldots \mid B_{m})$. \end{proof} We use Lemma~\ref{lem:main} to generalize Theorem~\ref{theo:linkageHK} and Theorem~\ref{theo:linkageCHWX_trivialimprovement} in a single construction. \begin{theorem}\label{theo:generalized_linkage} Let $d/2,k,r,s,t$ be integers with $2 \le d/2 \le k \le (r+s)/2$, $k \le r$, $k \le s+t$, and $0 \le t \le k-d/2$. Let $\mathcal{A}$ be an $(r,\#\mathcal{A},d;k)_q$ CDC and $\mathcal{B}$ be an $(s+t,\#\mathcal{B},d;k)_q$ CDC. Let $\mathcal{M}$ be a $(k \times s,\#\mathcal{M},d/2)_q$ RMC and $\mathcal{R}$ be a $(k \times (r-t),\#\mathcal{R},d/2;k-d/2-t)_q$ RMC. Then \begin{align*} &\{\tau^{-1}(\tau(A) \mid M) : A \in \mathcal{A}, M \in \mathcal{M}\} \\\cup&\{\tau^{-1}(R \mid \tau(B)) : R \in \mathcal{R}, B \in \mathcal{B}\} \end{align*} is an $(r+s,\#\mathcal{A} \cdot \#\mathcal{M} + \#\mathcal{R} \cdot \#\mathcal{B},d;k)_q$ CDC. In particular, \begin{align*} &A_q(r+s,d;k) \\\ge& A_q(r,d;k) \cdot M(q,k,s,d/2) \\+& A_q(s+t,d;k) \cdot \Lambda(q,k,r-t,d/2,k-d/2-t). \end{align*} \end{theorem} \begin{proof} The distinctness of codewords follows from the minimum distance. Let $A,A' \in \mathcal{A}$ and $M,M' \in \mathcal{M}$. If $A=A'$, then~(2) in Lemma~\ref{lem:main} shows $d \le \dists(\tau^{-1}(\tau(A) \mid M),\tau^{-1}(\tau(A') \mid M'))$, else, i.e., $A \ne A'$, then~(1) in Lemma~\ref{lem:main} shows the same statement. Let $R,R' \in \mathcal{R}$ and $B,B' \in \mathcal{B}$. If $B=B'$, then~(2) in Lemma~\ref{lem:main} shows $d \le \dists(\tau^{-1}(R \mid \tau(B)),\tau^{-1}(R' \mid \tau(B')))$, else, i.e., $B \ne B'$, then~(1) in Lemma~\ref{lem:main} shows the same statement. Let $A \in \mathcal{A}$, $M \in \mathcal{M}$, $R \in \mathcal{R}$, and $B \in \mathcal{B}$. By $[\tau(B)]$ ($\{\tau(B)\}$) we denote the first $t$ (last $s$) columns of $\tau(B)$, in particular $\tau(B) = ([\tau(B)] \mid \{\tau(B)\})$. Then, the condition in~(3) in Lemma~\ref{lem:main} is $d/2 \le |\rk(\tau(A))-\rk(R \mid [\tau(B)])| = |k-\rk(R \mid [\tau(B)])| = k-\rk(R \mid [\tau(B)]) \Leftrightarrow \rk(R \mid [\tau(B)]) \le k-d/2$. By the choice of $\mathcal{R}$, we have $\rk R \le k-d/2-t$ and $\rk [\tau(B)] \le t$, so that the statement follows with Inequality~\eqref{math:rkineq}. 
\end{proof} If $\mathcal{A}=\{\tau^{-1}(I)\}$, $r=k$, and $t=0$ are chosen in Theorem~\ref{theo:generalized_linkage}, we obtain Theorem~\ref{theo:linkageCHWX_trivialimprovement} as a special case, and if $\mathcal{R}=\{0\}$ and $t=k-d/2$ are chosen, we obtain Theorem~\ref{theo:linkageHK} as a special case. This also fits into the framework of Kurz~\cite{kurz2019note}, cf. Theorem~\ref{theo:linkageK}, providing an alternative proof of Theorem~\ref{theo:generalized_linkage}. \begin{lemma}\label{lem:Bimproved} Let $d/2,k,r,s,t$ be integers with $2 \le d/2 \le k$, $0 \le r$, $k \le s+t$, and $0 \le t \le \min\{k-d/2,r\}$. Let $\mathcal{B}$ be an $(s+t,\#\mathcal{B},d;k)_q$ CDC, $\mathcal{R}$ be a $(k \times (r-t),\#\mathcal{R},d/2;k-d/2-t)_q$ RMC, and $W=\tau^{-1}(0 \mid I)$ of dimension $s$ in $\mathbb{F}_q^{r+s}$. Then $\dim(W \cap \tau^{-1}(R \mid \tau(B)) ) \ge d/2$ for all $R \in \mathcal{R}$ and $B \in \mathcal{B}$. In particular, \begin{align*} &B_q(r+s,s,d;k) \\\ge& A_q(s+t,d;k) \cdot \Lambda(q,k,r-t,d/2,k-d/2-t). \end{align*} \end{lemma} \begin{proof} By $[\tau(B)]$ ($\{\tau(B)\}$) we denote the first $t$ (last $s$) columns of $\tau(B)$, in particular $\tau(B) = ([\tau(B)] \mid \{\tau(B)\})$. Then, using Inequality~\eqref{math:rkineq}, we have \begin{align*} &\dim(W \cap \tau^{-1}(R \mid \tau(B)) ) \\=& \dim(W) \!+\! \dim(\tau^{-1}(R \mid \tau(B))) \!-\! \dim(W \!+\! \tau^{-1}(R \mid \tau(B)) ) \\=& s + k - \dim(\tau^{-1}(0 \mid 0 \mid I) + \tau^{-1}(R \mid [\tau(B)] \mid \{\tau(B)\}) ) \\=& s + k - \rk\sm{ 0 & 0 & I \\ R & [\tau(B)] & \{\tau(B)\} } \\=& s + k - \rk\sm{ 0 & 0 & I \\ R & [\tau(B)] & 0 } = k - \rk\sm{ R & [\tau(B)] } \\\ge& k - \rk R - \rk[\tau(B)] \ge k - (k-d/2-t) - (t) = d/2 . \end{align*} \end{proof} Then~(1) and~(2) in Theorem~\ref{theo:linkageK} together with Lemma~\ref{lem:Bimproved} imply Theorem~\ref{theo:generalized_linkage}. Independently of this paper, He developed in~\cite{he2019construction} a variation of the generalized linkage construction (Theorem~\ref{theo:generalized_linkage}). The construction~\cite[Theorem~2]{he2019construction} arises as a special case of Theorem~\ref{theo:generalized_linkage} if $t=0$ and $\Lambda(q,k,r,d/2,k-d/2)$ is replaced by $\Delta(q,k,r,d/2,k-d/2)-1$. In particular, the lower bound provided by Theorem~\ref{theo:generalized_linkage} is strictly better than the lower bound provided by~\cite[Theorem~2]{he2019construction}. Furthermore, \cite[Corollary~1]{he2019construction} requires $k \ge d$, cf.~\cite[Section~4]{he2019construction}, in contrast to Theorem~\ref{theo:generalized_linkage}. In~\cite[Section~4]{he2019construction}, He asks for generalizations to $k \le d$, and our generalized linkage construction (Theorem~\ref{theo:generalized_linkage}) provides an answer. Note that there are infinite families of parameters showing that the consideration of $t>0$ is justified. For example, consider $v=7=r+s$, $d=4$, and $k=3$.
Then, the maximum cardinalities of the ingredients of the generalized linkage construction are well known and in the notation of Theorem~\ref{theo:generalized_linkage}: { \setlength{\tabcolsep}{5pt} \begin{tabular}{lll|llll|l} r&s&t&$\#\mathcal{A}$&$\#\mathcal{M}$&$\#\mathcal{R}$&$\#\mathcal{B}$&$A_q(7,4;3)\ge$\\ \hline 3&4&0&$1$&$q^8$&$[3]_q$&$1$&$q^8\!+\!q^2\!+\!q\!+\!1$\\ 3&4&1&$1$&$q^8$&$1$&$q^3\!+\!1$&$q^8\!+\!q^3\!+\!1$\\ 4&3&0&$1$&$q^6$&$[3]_q$&$1$&$q^6\!+\!q^2\!+\!q\!+\!1$\\ 4&3&1&$1$&$q^6$&$1$&$1$&$q^6\!+\!1$\\ 5&2&1&$q^3\!+\!1$&$q^3$&$1$&$1$&$q^6\!+\!q^3\!+\!1$\\ \end{tabular} } Here, we use $A_q(5,4;2)=q^3+1$ by~\cite{MR404010}, (1), and (4) of Theorem~\ref{theo:lambdaequalities}. In particular, the largest code constructed by the generalized linkage construction has cardinality $q^8+q^3+1$, uses $r=3$, $s=4$, and $t=1$, and its cardinality is strictly larger than the codes arising by choosing different parameters. Independently to this paper, Cossidente, Kurz, Marino, and Pavese developed in~\cite[Lemma~4.1]{coss2019combining} a generalization of \cite[Theorem~4.1]{chen2019new} having the property that neither one of Theorem~\ref{theo:generalized_linkage} and \cite[Lemma~4.1]{coss2019combining} is a special case of the other. In the following theorem, we generalize both constructions in a single construction. \begin{theorem}\label{theo:generalized_linkage_multipleblocks} Let $d/2,k,m,n_i,t_i$ be integers with $2 \le m$, $2 \le d/2 \le k \le (\sum_{i=1}^{m} n_i)/2$, $k \le n_i+t_i$, and $0 \le t_i \le \min\{k-d/2,n_{i-1}\}$ ($2 \le i$), $t_1=0$ ($1 \le i \le m$). Let \begin{itemize} \item $\tau^{-1}(\mathcal{C}_i)$ be $(n_i+t_i,C_i,d;k)_q$ CDCs, \item $\mathcal{M}_i$ be $(k \times n_i,M_i,d/2)_q$ RMCs, \item $\mathcal{R}_i$ be $(k \times n_i,R_i,d/2;k-d/2)_q$ RRMCs, \item $\mathcal{S}_i$ be $(k \times (n_i-t_{i+1}),S_i,d/2;k-d/2-t_{i+1})_q$ RRMCs ($i < m$), and \item $\mathcal{S}_{m}$ be a set of size $S_m=1$ consisting of an empty matrix \end{itemize} for all $i \in \{1,\ldots,m\}$, then \begin{align*} \bigcup_{i=1}^{m} \{ \tau^{-1}( r_1 \mid \ldots \mid r_{i-2} \mid s_{i-1} \mid c_i \mid m_{i+1} \mid \ldots \mid m_{m} ) \\ : r_j \in \mathcal{R}_j \quad (1 \le j \le i-2), s_{i-1} \in \mathcal{S}_{i-1}, c_i \in \mathcal{C}_i, \\ m_w \in \mathcal{M}_w \quad (i+1 \le w \le m) \} \end{align*} is a $(\sum_{i=1}^{m} n_i,N,d;k)_q$ CDC with \begin{align*} N = \sum_{i=1}^{m} C_i \cdot S_i \cdot \prod_{j=1}^{i-2} R_j \cdot \prod_{w=i+1}^{m} M_w . \end{align*} In particular, \begin{align*} &A_q(\textstyle{\sum_{i=1}^{m} n_i},d;k) \ge A_q(n_1,d;k) \\&\cdot \prod_{w=2}^{m} M(q,k,n_w,d/2)+\sum_{i=2}^{m} A_q(n_i+t_i,d;k) \\&\cdot \Lambda(q,k,n_{i-1}-t_{i},d/2,k-d/2-t_{i}) \cdot \\&\prod_{j=1}^{i-2} \Lambda(q,k,n_j,d/2,k-d/2) \cdot \prod_{w=i+1}^{m} M(q,k,n_w,d/2). \end{align*} \end{theorem} \begin{proof} The distinctness of codewords follows from the minimum distance. Let $\tau^{-1}( r_1 \mid \ldots \mid r_{i-2} \mid s_{i-1} \mid c_i \mid m_{i+1} \mid \ldots \mid m_{m} )$ and $\tau^{-1}( r_1' \mid \ldots \mid r_{i-2}' \mid s_{i-1}' \mid c_i' \mid m_{i+1}' \mid \ldots \mid m_{m}' )$ be two distinct codewords in the same subcode. If $c_i = c_i'$, then (1) in Lemma~\ref{lem:main_multipleblocks} implies the minimum distance using the minimum subspace distance of $\mathcal{C}_i$. Else, by distinctness, there is a $1 \le j \le i-2$ with $r_j \ne r_j'$ or $s_{i-1} \ne s_{i-1}'$ or there is a $i+1 \le w \le m$ with $m_w \ne m_w'$. We abbreviate all cases in $x \ne x'$ for $x \in \{r_j,s_{i-1},m_w\}$. 
Then, by the minimum rank distance and Inequality~\eqref{math:rkineq}, we have $d/2 \le \distr(x,x') \le \distr( (r_1 \mid \ldots \mid r_{i-2} \mid s_{i-1} \mid m_{i+1} \mid \ldots \mid m_{m}), (r_1' \mid \ldots \mid r_{i-2}' \mid s_{i-1}' \mid m_{i+1}' \mid \ldots \mid m_{m}') )$ so that (2) in Lemma~\ref{lem:main_multipleblocks} concludes this case. Let $\tau^{-1}( r_1 \mid \ldots \mid r_{i-2} \mid s_{i-1} \mid c_{i} \mid m_{i+1} \mid \ldots \mid m_{m} )$ and $\tau^{-1}( r_1' \mid \ldots \mid r_{i'-2}' \mid s_{i'-1}' \mid c_{i'}' \mid m_{i'+1}' \mid \ldots \mid m_{m}' )$ be two distinct codewords in different subcodes corresponding to $i$ and $i'$, we use without loss of generality $i < i'$. Define $x$ as $(s_{i'-1}' \mid [c_{i'}'])$ if $i+1 = i'$, where $[c_{i'}']$ is the matrix consisting of the leftmost $t_i$ columns of $c_{i'}'$, and as $r_i'$ if $i+2 \le i'$. In the first case, we have $\rk x \le \rk(s_{i'-1}') + \rk([c_{i'}']) \le (k-d/2-t_i) + (t_i)$ by Inequality~\eqref{math:rkineq}, so that $\rk x \le k-d/2$ in both cases. Then we have $|\rk c_i - \rk x| = \rk c_i - \rk x \ge k - (k-d/2) = d/2$ and (3) in Lemma~\ref{lem:main_multipleblocks} concludes this case. \end{proof} This relates to other constructions as follows. If we set $m=2$, we obtain Theorem~\ref{theo:generalized_linkage}. Using $t=0$, we get \cite[Lemma~4.1]{coss2019combining}. If $m=s+1$, $n_i=n$, $t_i=0$, $d=2(n-t)$, $k=n$, $\mathcal{C}_i = \{\tau^{-1}(I)\}$, we get \cite[Theorem~4.1]{chen2019new}. \section{New CDCs and better lower bounds}\label{sec:comparison} According to the numerical evidence of \url{http://subspacecodes.uni-bayreuth.de/}, the generalized linkage construction (Theorem~\ref{theo:generalized_linkage}) increases all known lower bounds on $A_q(v,4;4)$ for all listed parameters $q$ and $12 \le v$. In the setting of $q=2$, the previously best known lower bound $A_2(12,4;4) \ge 19\,664\,917$ is given by the improved linkage construction (Theorem~\ref{theo:linkageHK}), our new generalized linkage construction (Theorem~\ref{theo:generalized_linkage}) increases the bound to $A_2(12,4;4) \ge 19\,673\,821$, while the parallel linkage construction (Theorem~\ref{theo:linkageCHWX}) only creates codes of size $19\,297\,741$. Hence, we compare the sizes of CDCs with $v=12$ and $d=k=4$ constructed by these three constructions for all $q$. The size of the code constructed in Theorem~\ref{theo:generalized_linkage} using $r=8$ and $t=0$, so that $s=4$, is \begin{align*} &A_q(r,d;k) \cdot M(q,k,s,d/2) \\+&A_q(s+t,d;k) \cdot \Lambda(q,k,r-t,d/2,k-d/2-t) \\\ge&A_q(8,4;4) \cdot M(q,4,4,2) + A_q(4,4;4) \cdot \Delta(q,4,8,2,2) \\=&A_q(8,4;4) \cdot q^{4(4-2+1)} + 1 \cdot (1+\gaussmnum{4}{2}{q}(q^8-1)) \\=&A_q(8,4;4) \cdot q^{12} + 1+\gaussmnum{4}{2}{q}(q^8-1) \stepcounter{equation}\tag{\theequation}\label{math:eqGLq844} \\>&A_q(8,4;4) \cdot q^{12} +\gaussmnum{4}{2}{q}(q^8-1) \stepcounter{equation}\tag{\theequation}\label{math:eqm1GLq844} \\>&q^{12} \cdot q^{12} + q^4(q^8-1) \\=&q^{24}+q^{12}-q^4 \stepcounter{equation}\tag{\theequation}\label{math:lbGLq844} \end{align*} For the last inequality, we use Theorem~\ref{theo:lbAq844} and Lemma~\ref{lem:qbinestim}. We compare the generalized linkage construction to the parallel linkage construction. Unfortunately, the bound of Theorem~\ref{theo:upperboundLambda}, i.e., $\Lambda(q,4,4,2,2) \le q^3(q^7+q^6+q^5-q^4-q^3-q^2+1)$, is too weak to show this result in general. 
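Before proving the formal comparisons, we record a quick numerical sanity check of the chain of inequalities from \eqref{math:eqGLq844} to \eqref{math:lbGLq844}. The following Python sketch (the helper names are ad hoc) uses only the lower bound of Theorem~\ref{theo:lbAq844} for $A_q(8,4;4)$, so the printed values are conservative; in particular, for $q=2$ the printed value stays below the figure $19\,673\,821$ quoted above. As a cross-check, $\Delta(q,4,8,2,2)$ is recomputed via Theorem~\ref{theo:rankdistributionMRD} and compared with the closed form $1+\gaussmnum{4}{2}{q}(q^8-1)$ used in \eqref{math:eqGLq844}.
\begin{verbatim}
# Numerical evaluation of the chain (eqGLq844)-(lbGLq844) for small q, using
# only the Cossidente-Pavese lower bound of Theorem lbAq844 for A_q(8,4;4).
from math import comb

def gauss_binom(n, k, q):
    # Gaussian binomial coefficient [n choose k]_q.
    if k < 0 or k > n:
        return 0
    num = den = 1
    for i in range(k):
        num *= q ** (n - i) - 1
        den *= q ** (i + 1) - 1
    return num // den

def D(q, a, b, d, r):
    # Number of matrices of rank r in a linear MRD code, cf. Theorem
    # rankdistributionMRD.
    m, M = min(a, b), max(a, b)
    return gauss_binom(m, r, q) * sum(
        (-1) ** i * q ** comb(i, 2) * gauss_binom(r, i, q)
        * (q ** (M * (r - d + 1 - i)) - 1) for i in range(r - d + 1))

for q in (2, 3, 4, 5):
    A844 = q**12 + q**2 * (q**2 + 1)**2 * (q**2 + q + 1) + 1  # Theorem lbAq844
    delta = 1 + sum(D(q, 4, 8, 2, i) for i in range(2, 3))    # Delta(q,4,8,2,2)
    assert delta == 1 + gauss_binom(4, 2, q) * (q**8 - 1)     # closed form
    glc = A844 * q**12 + delta                                # Equation (eqGLq844)
    assert glc > q**24 + q**12 - q**4                         # Equation (lbGLq844)
    print(q, glc, q**24 + q**12 - q**4)
\end{verbatim}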
\begin{lemma}\label{lem:comparison_linkageCHWK} If $2 \le q$ is a prime power, $v=12$, and $d=k=4$, then Theorem~\ref{theo:generalized_linkage} constructs a larger code than Theorem~\ref{theo:linkageCHWX_trivialimprovement}, utilizing $\Delta$ instead of $\Lambda$ in the latter. \end{lemma} \begin{proof} The size of the code constructed in Theorem~\ref{theo:linkageCHWX_trivialimprovement} uses $s=8$ and is, with $\Lambda$ replaced by $\Delta$, \begin{align*} &M(q,k,s,d/2) + A_q(s,d;k) \cdot \Delta(q,k,k,d/2,k-d/2) \\=&M(q,4,8,2) + A_q(8,4;4) \cdot \Delta(q,4,4,2,2) \\=&q^{8(4-2+1)} + A_q(8,4;4) \cdot (1+\gaussmnum{4}{2}{q}(q^4-1)) \end{align*} Due to Equation~\eqref{math:eqGLq844}, we have \begin{align*} & q^{24} + A_q(8,4;4) (1+\gaussmnum{4}{2}{q}(q^4-1)) \\<& A_q(8,4;4) q^{12} + 1 + \gaussmnum{4}{2}{q}(q^8-1) \\\Leftrightarrow& q^{24} \!-\! 1\!-\!\gaussmnum{4}{2}{q}(q^8\!-\!1) < A_q(8,4;4) ( q^{12} \!-\! 1\!-\!\gaussmnum{4}{2}{q}(q^4\!-\!1) ) . \end{align*} Using the lower bound of $A_q(8,4;4)$ in Theorem~\ref{theo:lbAq844} we will prove \begin{align*} &\frac{q^{24} - 1 - \gaussmnum{4}{2}{q}(q^8-1)}{q^{12} - 1 - \gaussmnum{4}{2}{q}(q^4-1)} < q^{12}\!+\!q^2(q^2\!+\!1)^2(q^2\!+\!q\!+\!1)\!+\!1 \\\Leftrightarrow&\frac{q^{24} - 1 - (q^2+1)(q^2+q+1)(q^8-1)}{q^{12} - 1 - (q^2+1)(q^2+q+1)(q^4-1)} \\<&q^{12}+q^2(q^2+1)^2(q^2+q+1)+1 \\\Leftarrow&\frac{q^{24} - q^2 q^2(q^8-1)}{q^{12} - 1 - (q^2+1)(q^2+q+1)q^4} \\\le& q^{12}+q^2(q^2+1)^2(q^2+q+1) \end{align*} which is equivalent to the nonnegativity of \begin{align*} &(q^{12}+q^2(q^2+1)^2(q^2+q+1)) \\\cdot&(q^{12} \!-\! 1 \!-\! (q^2+1)(q^2+q+1)q^4) \!-\!(q^{24} \!-\! q^2 q^2(q^8\!-\!1)) \\=&q^{2}(q^{16}+q^{15}+q^{14}-q^{13}-5q^{12}-8q^{11}-13q^{10}-12q^{9} \\-&13q^{8}-8q^{7}-7q^{6}-3q^{5}-4q^{4}-2q^{3}-4q^{2}-q-1) \\\ge& q^2 \left(q^{16}+q^{15}+q^{14}-q^{13}-13\sum_{i=0}^{12} q^i \right) \\\ge& q^2(q^{16}+q^{15}+q^{14}-q^{13}-13q^{13}) \\=& q^{15}(q-2)(q^{2}+3q+7) . \end{align*} Since the last term is nonnegative for $2 \le q$, the statement follows. \end{proof} We compare the generalized linkage construction to the improved linkage construction. \begin{lemma}\label{lem:comparison_linkageHK} If $2 \le q$ is a prime power, $v=12$, and $d=k=4$, then Theorem~\ref{theo:generalized_linkage} constructs a larger code than Theorem~\ref{theo:linkageHK}. \end{lemma} \begin{proof} The size of the code constructed in Theorem~\ref{theo:linkageHK} using $t=2$, $s=12-r$, and $4 \le r \le 10$ is \begin{align*} &A_q(r,d;k) \cdot M(q,k,s,d/2) + A_q(s+k-d/2,d;k) \\=&A_q(r,4;4) \cdot M(q,4,12-r,2) + A_q(14-r,d;4) \\=&A_q(r,4;4) \cdot q^{\max\{12-r,4\}(\min\{12-r,4\}-1)} + A_q(14-r,4;4) \\=& \begin{cases} A_q(r,4;4) \cdot q^{3(12-r)} + A_q(14-r,4;4) & \text{if } 4 \le r \le 8 \\ A_q(r,4;4) \cdot q^{4(11-r)} + A_q(14-r,4;4) & \text{if } 9 \le r \le 10 \\ \end{cases} \end{align*} Using Lemma~\ref{lem:AqvdkEqAqvdvmk} and Theorem~\ref{theo:spread}, we have $A_q(4,4;4)=A_q(4,4;0)=1$, $A_q(5,4;4)=A_q(5,4;1)=1$, $A_q(6,4;4)=A_q(6,4;2)=[6]_q/[2]_q=q^4+q^2+1$, and $A_q(7,4;4)=A_q(7,4;3)$. For $4 \le x$, the Anticode bound in Theorem~\ref{theo:anticode} and Lemma~\ref{lem:qbinestim} imply $A_q(x,4;4) \le \gaussmnum{x}{4-2+1}{q}/\gaussmnum{4}{4-2+1}{q} < 4q^{3(x-3)}/q^3 = 4q^{3x-12}$. For $r=4$, Theorem~\ref{theo:linkageHK} yields a CDC of size $q^{24} + A_q(10,4;4)$. 
Then, comparing to Inequality~\eqref{math:eqm1GLq844}, i.e., \begin{align*} &A_q(8,4;4) \cdot q^{12} + \gaussmnum{4}{2}{q}(q^8-1) - q^{24} - A_q(10,4;4) \\\ge&(q^{12}+q^2(q^2+1)^2(q^2+q+1)+1) \cdot q^{12} \\+& (q^8-1)[4]_q[3]_q/[2]_q - q^{24} - [10]_q[9]_q[8]_q/([4]_q[3]_q[2]_q) \\=&q^{20}+q^{19}+2q^{18}+2q^{17}+2q^{16}-q^{14}-q^{13}-q^{12}-q^{11} \\-&q^{10}-q^{9}-2q^{8}-2q^{7}-3q^{6}-q^{5}-3q^{4}-2q^{3}-3q^{2}-q-2 \\>&q^{20}-3\sum_{i=0}^{14}q^i =q^{20}-3[15]_q >q^{20}-3q^{15} >0 , \end{align*} concludes this case. For $r=5$, Theorem~\ref{theo:linkageHK} yields a CDC of size $q^{21} + A_q(9,4;4) \le q^{21} + 4q^{15} \le q^{21} + q^{17} \le q^{22}$, so that Inequality~\eqref{math:lbGLq844} concludes this case. For $r=6$, Theorem~\ref{theo:linkageHK} yields a CDC of size $([6]_q/[2]_q) q^{18} + A_q(8,4;4) \le 2q^{22} + 4q^{12} \le q^{23} + q^{14} \le q^{24}$, so that Inequality~\eqref{math:lbGLq844} concludes this case. For $r=7$, Theorem~\ref{theo:linkageHK} yields a CDC of size $A_q(7,4;3) (q^{15}+1)$. Then, comparing to Inequality~\eqref{math:eqm1GLq844}, i.e., \begin{align*} &A_q(8,4;4) q^{12} + \gaussmnum{4}{2}{q}(q^8-1) - A_q(7,4;3) (q^{15}+1) \\\ge&q^{24} + (q^8-1)[4]_q[3]_q/[2]_q - (q^{15}+1)[7]_q[6]_q/([3]_q[2]_q) \\=&q^{24}-q^{23}-q^{21}-q^{20}-q^{19}-q^{18}-q^{17}-q^{15}+q^{12}+q^{11} \\+&2q^{10}+q^{9}-q^{6}-q^{5}-2q^{4}-2q^{3}-3q^{2}-q-2 \\>&q^{24}-q^{23}-q^{21}-3\sum_{i=0}^{20} q^i =q^{24}-q^{23}-q^{21}-3[21]_q \\>&q^{24}-q^{23}-4q^{21} =(q-2)(q^2+q+2)q^{21} \ge 0 , \end{align*} concludes this case. For $r=8$, Theorem~\ref{theo:linkageHK} yields a CDC of size $A_q(8,4;4) q^{12} + [6]_q/[2]_q$. Then, comparing to Inequality~\eqref{math:eqm1GLq844}, i.e., \begin{align*} &A_q(8,4;4) q^{12} + [6]_q/[2]_q \le A_q(8,4;4) \cdot q^{12} + \gaussmnum{4}{2}{q}(q^8-1) \\\Leftrightarrow&[6]_q/[2]_q \le (q^8-1)[4]_q[3]_q/[2]_q \\\Leftrightarrow&[6]_q \le (q^8-1)[4]_q[3]_q \\\Leftarrow&q^6-1 < q^8-1 , \end{align*} concludes this case. For $r=9$, Theorem~\ref{theo:linkageHK} yields a CDC of size $A_q(9,4;4) q^{8} + 1 \le 4q^{23} + 1$, so that Inequality~\eqref{math:lbGLq844} concludes this case for all $4 \le q$. Next, we use $(\prod_{i=1}^{\infty}(1-3^{-i}))^{-1} < 1.8$ in Lemma~\ref{lem:qbinestim}, so that $A_q(9,4;4) q^{8} + 1 \le 1.8q^{23} + 1$ and Inequality~\eqref{math:lbGLq844} concludes this case for $q=3$, too. Last, the Anticode bound is precisely $A_2(9,4;4) \le 52\,535$, so that $A_2(9,4;4) 2^{8} + 1 \le 13\,448\,961 < 16\,781\,296 = 2^{24} + 2^{12} -2^4$, concluding the case $r=9$ for all $q$. For $r=10$, Theorem~\ref{theo:linkageHK} yields a CDC of size $A_q(10,4;4) q^{4} + 1 \le 4q^{22} + 1 \le q^{24} + 1$, so that Inequality~\eqref{math:lbGLq844} concludes this case. \end{proof}
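The polynomial manipulations in the proofs of Lemma~\ref{lem:comparison_linkageCHWK} and Lemma~\ref{lem:comparison_linkageHK} are elementary but tedious and can be machine-checked. The following Python sketch assumes that the third-party computer algebra package \texttt{sympy} is available; it expands the decisive difference from the proof of Lemma~\ref{lem:comparison_linkageCHWK} and confirms that it is positive for a list of small prime powers, in line with the nonnegativity established there.
\begin{verbatim}
# Machine check of the difference considered in the proof of Lemma
# comparison_linkageCHWK; requires the third-party package sympy.
import sympy as sp

q = sp.symbols('q', positive=True)

difference = (q**12 + q**2 * (q**2 + 1)**2 * (q**2 + q + 1)) \
    * (q**12 - 1 - (q**2 + 1) * (q**2 + q + 1) * q**4) \
    - (q**24 - q**2 * q**2 * (q**8 - 1))

expanded = sp.expand(difference)
print(expanded)   # compare with the expansion displayed in the proof

for val in (2, 3, 4, 5, 7, 8, 9, 11, 13, 16):
    assert expanded.subs(q, val) > 0
print("difference positive for all tested prime powers q >= 2")
\end{verbatim}
The estimates in the proof of Lemma~\ref{lem:comparison_linkageHK} can be verified in the same way.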
{ "redpajama_set_name": "RedPajamaArXiv" }
2,935
{"url":"https:\/\/zbmath.org\/?q=an:1126.62013","text":"# zbMATH \u2014 the first resource for mathematics\n\nHigher-order asymptotic normality of approximations to the modified signed likelihood ratio statistic for regular models. (English) Zbl\u00a01126.62013\nSummary: Approximations to the modified signed likelihood ratio statistic are asymptotically standard normal with error of order $$n^{ - 1}$$, where $$n$$ is the sample size. Proofs of this fact generally require that the sufficient statistic of the model can be written as $$(\\widehat{\\theta}, a)$$, where $$\\widehat{\\theta}$$ is the maximum likelihood estimator of the parameter $$\\theta$$ of the model and $$a$$ is an ancillary statistic. This condition is very difficult or impossible to verify for many models. However, calculation of the statistics themselves does not require this condition. The goal of this paper is to provide conditions under which these statistics are asymptotically normally distributed to order $$n^{ - 1}$$ without making any assumption about the sufficient statistic of the model.\n\n##### MSC:\n 62F05 Asymptotic properties of parametric tests 62B05 Sufficient statistics and fields\nFull Text:\n##### References:\n [1] Bahadur, R. R. (1971). Some Limit Theorems in Statistics . SIAM, Philadelphia. \u00b7 Zbl\u00a00257.62015 [2] Barndorff-Nielsen, O. E. (1986). Inference on full or partial parameters based on the standardized signed log likelihood ratio. Biometrika 73 307\u2013322. JSTOR: \u00b7 Zbl\u00a00605.62020 [3] Barndorff-Nielsen, O. E. (1991). Modified signed log likelihood ratio. Biometrika 78 557\u2013563. JSTOR: \u00b7 Zbl\u00a01192.62052 [4] Barndorff-Nielsen, O. E. and Cox, D. R. (1994). Inference and Asymptotics. Chapman and Hall, London. \u00b7 Zbl\u00a00826.62004 [5] Barndorff-Nielsen, O. E. and Wood, A. T. A. (1998). On large deviations and choice of ancillary for $$p^*$$ and $$r^*$$. Bernoulli 4 35\u201363. \u00b7 Zbl\u00a00894.62026 [6] Bhattacharya, R. N. and Ranga Rao, R. (1976). Normal Approximation and Asymptotic Expansions . Wiley, New York. \u00b7 Zbl\u00a00331.41023 [7] Billingsley, P. (1986). Probability and Measure , 2nd ed. Wiley, New York. \u00b7 Zbl\u00a00649.60001 [8] Casella, G. and Berger, R. L. (1990). Statistical Inference . Duxbury, Belmont, CA. \u00b7 Zbl\u00a00699.62001 [9] Cox, D. R. and Reid, N. (1987). Parameter orthogonality and approximate conditional inference (with discussion). J. Roy. Statist. Soc. Ser. B 49 1\u201339. JSTOR: \u00b7 Zbl\u00a00616.62006 [10] DiCiccio, T. J. and Martin, M. A. (1993). Simple modifications for signed roots of likelihood ratio statistics. J. Roy. Statist. Soc. Ser. B 55 305\u2013316. JSTOR: \u00b7 Zbl\u00a00794.62014 [11] Huzurbazar, V. S. (1950). Probability distributions and orthogonal parameters. Proc. Cambridge Philos. Soc. 46 281\u2013284. \u00b7 Zbl\u00a00036.09304 [12] Huzurbazar, V. S. (1956). Sufficient statistics and orthogonal parameters. Sankhy\u0101 Ser. A 17 217\u2013220. \u00b7 Zbl\u00a00072.35805 [13] Jensen, J. L. (1995). Saddlepoint Approximations . Oxford Univ. Press. \u00b7 Zbl\u00a01274.62008 [14] Kolassa, J. E. (1997). Series Approximation Methods in Statistics , 2nd ed. Lecture Notes in Statist. 88 . Springer, New York. \u00b7 Zbl\u00a00877.62013 [15] Lee, E. T. and Wang, J. W. (2003). Statistical Methods for Survival Data Analysis. Wiley, Hoboken, NJ. \u00b7 Zbl\u00a01026.62103 [16] Perlman, M. D. (1972). On the strong consistency of approximate maximum likelihood estimators. Proc. Sixth Berkeley Symp. Math. 
Statist. Probab. 1 263\u2013281. Univ. California Press. Berkeley. \u00b7 Zbl\u00a00232.62004 [17] Reid, N. (1996). Likelihood and higher-order approximations to tail areas: A review and annotated bibliography. Canad. J. Statist. 24 141\u2013166. JSTOR: \u00b7 Zbl\u00a00858.62011 [18] Severini, T. A. (1999). An empirical adjustment to the likelihood ratio statistic. Biometrika 86 235\u2013247. JSTOR: \u00b7 Zbl\u00a00943.62016 [19] Skovgaard, I. M. (1981). Transformation of an Edgeworth expansion by a sequence of smooth functions. Scand. J. Statist. 8 207\u2013217. \u00b7 Zbl\u00a00474.62022 [20] Skovgaard, I. M. (1981). Edgeworth expansions of the distributions of maximum likelihood estimators in the general (non-i.i.d.) case. Scand. J. Statist. 8 227\u2013236. \u00b7 Zbl\u00a00477.62009 [21] Skovgaard, I. M. (1986). On multivariate Edgeworth expansions. Internat. Statist. Rev. 54 169\u2013186. JSTOR: \u00b7 Zbl\u00a00615.62018 [22] Skovgaard, I. M. (1996). An explicit large-deviation approximation to one-parameter tests. Bernoulli 2 145\u2013165. \u00b7 Zbl\u00a01066.62508 [23] Skovgaard, I. M. (2001). Likelihood asymptotics. Scand. J. Statist. 28 3\u201332. \u00b7 Zbl\u00a00965.62014 [24] Wald, A. (1949). Note on the consistency of the maximum likelihood estimator. Ann. Math. Statist. 20 595\u2013601. \u00b7 Zbl\u00a00034.22902\nThis reference list is based on information provided by the publisher or from digital mathematics libraries. Its items are heuristically matched to zbMATH identifiers and may contain data conversion errors. It attempts to reflect the references listed in the original paper as accurately as possible without claiming the completeness or perfect precision of the matching.","date":"2021-09-20 19:40:57","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 0, \"mathjax_display_tex\": 1, \"mathjax_asciimath\": 0, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.806377649307251, \"perplexity\": 2658.2603106101624}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2021-39\/segments\/1631780057091.31\/warc\/CC-MAIN-20210920191528-20210920221528-00167.warc.gz\"}"}
null
null
{ "redpajama_set_name": "RedPajamaGithub" }
8,147
Ougney-Douvot is a village located in the Doubs department, in Franche-Comté. It had 172 inhabitants in 2008.

The housing stock in Ougney-Douvot was made up in 2011 of ten apartments and 109 houses, a fairly balanced market.

Near Ougney-Douvot are the towns of Bretigney-Notre-Dame (2 km away, 106 inhabitants), Grosbois (3 km, 246 inhabitants), Breconchaux (2 km, 80 inhabitants), Séchin (1 km, 125 inhabitants), Esnans (3 km, 45 inhabitants) and Vennans (3 km, 170 inhabitants), among others. In addition, Ougney-Douvot is only 21 km from Besançon.

If you are thinking of moving to Ougney-Douvot, you will easily be able to find a house for sale.
{ "redpajama_set_name": "RedPajamaGithub" }
4,110
Cage and Tudor as Process

Rob Casey

I mean to isolate here one of the signature concepts ascribed to the music of John Cage, not only in the writings of historians and musicologists, but one to which the composer repeatedly linked his music: the concept of process. It is a noun that permeates much discourse surrounding the creative arts and, perhaps paradoxically, it might be that due to its ubiquity, musicologists feel little compulsion to clarify what they wish it to mean. It is, in a sense, a musicological problem that hides in plain sight. Composers in the 20th century began to use the term process, not as a mere descriptor of creative action but as a defining characteristic of their music. Cage was foremost amongst his contemporaries in this regard. The fact that the concept may constitute a differentiating stylistic feature of 20th century music is of less interest to me than the promiscuous coupling routinely engineered by commentators between the term and any number of disparate musical practices. However, that is less interesting still than the fact that Cage's own use of the term offers an opportunity to discuss, in a systematic way, many of the musicological problems that Cage's music raises and which it is the task of musicologists to try and answer. I shall explore the concept of process within the context of some of these issues. My thesis is that 'process' is habitually linked with the practice of Cage without sufficient regard to its implications for our understanding of his music. Close analysis of the term will help elucidate some of the main features that characterise his work and may unearth tools that aid us in crafting answers to questions regarding authorship and ontology in the music of John Cage.

'If you begin, as I do, not with the notion of making objects, but the idea of making a process, and if that process is in fact, silent, which is to say the sounds are unwilled… then the silence takes on an entirely different significance. In other words, the music is evident constantly, whether there are sounds, or silences' (Cage, 1963).

Journal: Contemporary Music Review, Vol. 35, No. 6, pp. 670-685. E-pub ahead of print, 1 March 2017. https://doi.org/10.1080/07494467.2016.1282648
Keywords: John Cage, David Tudor, Process, Event Notation, Experimental, Graphic Text, Performance Practice
Accepted author manuscript: "Cage and Tudor as process", 227 KB
Cited by: 1 (sourced from Scopus)
Record: https://pure.ulster.ac.uk/en/publications/cage-and-tudor-as-process

Related publication: Casey, R. (2015). Developing a Phenomenological Approach to Music Notation. Organised Sound, 20(2), 160-170.

Citation: Casey, R. (2017). Cage and Tudor as Process. Contemporary Music Review, 35(6), 670-685. https://doi.org/10.1080/07494467.2016.1282648
{ "redpajama_set_name": "RedPajamaCommonCrawl" }
2,233
What are the Singular Values of a dynamic system and how are they calculated in the *sigma* function in Matlab?
(https://engineering.stackexchange.com/questions/14129/what-are-the-singular-values-of-a-dynamic-system-and-how-are-they-calculated-in/20995, retrieved 2021-02-26)

According to Matlab documentation:

sigma plots the singular values of the frequency response of a dynamic system.

bode creates a Bode plot of the frequency response of a dynamic system.

I am reasonably familiar with Bode plots and dynamic systems but I don't understand what the singular values of the system are or how they are calculated. Are they related to the magnitudes of the system (which can be output by bode)?

For reference, I have a discrete state-space system which I am attempting to model as a reduced-order system. One of the papers I have been using to accomplish this mentions using the sigma function to compare the two models. The paper is 'Dynamic Mode Decomposition with Control' by J. L. Proctor.

Any help with understanding singular values or this sigma function would be appreciated.

- IMO singular values and the Moore-Penrose pseudo-inverse should really be covered in Linear Algebra courses, but often they are not. The singular value decomposition of any (not necessarily square!) matrix $A$ is $A = U \Sigma V^T$, where $U$ and $V$ are orthogonal matrices and $\Sigma$ is a diagonal matrix of the singular values, which are always $\geq 0$. The columns of $U$ and $V$ are the eigenvectors of the square matrices $AA^T$ and $A^TA$, respectively. Numerical algorithms for calculating the SVD are similar to algorithms for finding eigenvalues and vectors. See linear algebra textbooks for more detail. – alephzero Mar 8 '17 at 13:22
- Thanks for answering. Do you know how this relates to the sigma function? As in, which matrix is being decomposed when it says 'singular values of the frequency response of a dynamic system.' – Joe Mar 8 '17 at 13:29

Answer:

For a MIMO system $y(s) = G(s)d(s)$, with $m$ inputs and $l$ outputs, consider a fixed frequency $\omega$ where $G(j\omega)$ is a constant $l \times m$ complex matrix. For the sake of simplicity the matrix $G(j\omega)$ is written as $G$.

In short, the singular value decomposition (SVD) states that any matrix $G$ may be decomposed into an input rotation $V$, a scaling matrix $\Sigma$ and an output rotation $U$:

$$G = U \Sigma V^H$$

where $V^H$ denotes the conjugate transpose, or Hermitian transpose, of the matrix $V$. Furthermore,

- $U$ is an $l \times l$ unitary matrix of output singular vectors, $u_i$;
- $V$ is an $m \times m$ unitary matrix of input singular vectors, $v_i$;
- $\Sigma$ is an $l \times m$ diagonal matrix with $k = \min\lbrace l,m\rbrace$ non-negative singular values, $\sigma_i$.

The gain of the matrix $G$ in the $i$'th direction (from input $v_i$ to output $u_i$) is given by the singular value $\sigma_i$.

Hence, for a SISO system, the singular value $\sigma$ describes the amplification from the input to the output at frequency $\omega$. This information can also be obtained from the Bode magnitude response.

That is why the MATLAB documentation of the function sigma states: "The singular value response of a SISO system is identical to its Bode magnitude response."
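A small numerical sketch of that last point, using NumPy/SciPy rather than MATLAB's own sigma (the state-space matrices below are arbitrary illustration values, not taken from the question):

import numpy as np
from scipy import signal

# An arbitrary stable SISO state-space system, purely for illustration.
A = [[-1.0, 0.0], [1.0, -2.0]]
B = [[1.0], [0.0]]
C = [[0.0, 1.0]]
D = [[0.0]]
sys = signal.StateSpace(A, B, C, D)

w = np.logspace(-2, 2, 200)          # frequency grid in rad/s
_, H = signal.freqresp(sys, w)       # H[k] = G(j*w[k]) for this SISO system

# sigma-style curve: singular values of the frequency-response matrix at each frequency.
# For a SISO system G(jw) is 1x1, so its only singular value is |G(jw)|, i.e. the Bode magnitude.
sv = np.array([np.linalg.svd(np.atleast_2d(Hk), compute_uv=False)[0] for Hk in H])

assert np.allclose(sv, np.abs(H))    # singular value equals Bode magnitude for a SISO system

For a MIMO model one would instead build the full $l \times m$ matrix $G(j\omega_k)$ at each frequency and keep all $k=\min(l,m)$ singular values, which is what a sigma plot shows.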
null
null
\section{Introduction} Core-collapse supernovae (SNe) are the violent deaths of stars more massive than $\approx~7.5 {\rm M}_{\odot}$. They occur when nuclear burning reactions or electron degeneracy-pressure can no longer support the core against gravitational collapse. Either a neutron star or black hole is formed from the collapsing iron core and the outer layers of the star explode in a violent display. The nature of this display depends strongly on the final state of the progenitor star and the circumstellar medium; because there are many paths of stellar evolution there are many types of SNe. SNe are classified according to their observed spectra and lightcurves. The first differentiation is made by the absence or presence of hydrogen in a spectrum. If hydrogen emission lines exist then a SN is type II and type I otherwise. For type II SNe there are four subgroups: IIP, when there is a plateau in the lightcurve lasting a few months. These are the most common type of SN. Type IIL have a linear decay to their lightcurve, IIn have narrow hydrogen emission lines in their spectrum and IIb initially appear to be type II until, after a short time, the hydrogen lines disappear and the SN becomes type Ib. The hydrogen-free type I SNe have three separate subtypes. Type Ia are thought to be thermonuclear explosions of Chandrasekhar-mass white dwarfs and are not considered further here; type Ib have helium lines in emission while type Ic have no helium lines. Single-star models predict that type II SNe will be the result of stars of between about $7$ and $27M_{\odot}$ \citep{H03,ETsne,arend} as these retain their hydrogen envelopes. Stars more massive than this lose all their hydrogen via stellar winds and therefore give rise to type Ib/c SNe. The only way to confirm this mapping is to observe the progenitors of SNe. This is achieved by searching telescope archives to discover pre-explosion images. While the progenitors of SNe 1987A and 1993J were detected, both were in nearby galaxies and both were rare and unusual SNe. It was not until 1999 that the HST archive covered enough galaxies at sufficient depth and resolution that it was possible to start looking for progenitors of SNe in a large number of galaxies at distances up to 20 megaparsecs. The first detection for the most common, type IIP, SN was for 2003gd \citep{2004Sci...303..499S}. This confirmed that their progenitors were red supergiants. With eight years of observations there are currently 32 SNe with useful pre-explosion images; 18 of these are type IIP (6 detections), with the remainder being types IIb and Ib/c. There are no detections of type Ib/c progenitors and, with the growing number of non-detections, there is growing evidence that our standard view of mass loss during the late stages of evolution may be incorrect. In this proceedings we briefly highlight some of the main conclusions on the mass range inferred for type IIP progenitors. We then discuss two interesting cases of type Ib/c SNe, 2002ap and 2006jc, that provide very stringent limits on the evolution of the most massive stars. \section{The population of type IIPs.} With the sample of 18 type IIP detections and non-detections it is possible to begin to characterise the population of the progenitors \citep{compilation}. The main result of their study is that by using a maximum likelihood analysis it is possible to infer the minimum and maximum initial masses for type IIP SN progenitors.
The initial mass range is from $7.5$ to $16.5M_{\odot}$; however, the error bars are large and the range could be as wide as $6$ to $18{\rm M}_{\odot}$. The initial mass depends strongly on the mixing and mass-loss scheme of the stellar models used to estimate an initial mass from a final luminosity. To remove this systematic it is better to work out the range of final helium core masses, which is approximately $1.9$ to $6M_{\odot}$ in the STARS stellar models \citep{ETsne}. The minimum mass can be used to constrain models of convection in stellar models. Most models with helium cores at the lower end of this range experience second dredge-up and become AGB stars. In fact the stellar models used in this work do experience second dredge-up and we use the pre-dredge-up models. Therefore something is required to prevent these stars from becoming AGB stars. It is possible for the most massive AGB stars to undergo SN; however, their observational signature in pre-explosion images would be quite different to the red supergiants observed to date as they are cooler and therefore more luminous at infra-red wavelengths \citep{EMS07}. The maximum mass is due to one of two factors: either stars above this limit have lost most (or all) of their hydrogen and produce another type of SN, or the cores are massive enough to form a black hole and this also leads to another type of SN. In reality it is probably a combination of these factors. However, the black hole explanation could be favoured as the inferred helium core mass is similar to that which is expected to produce a black hole rather than a neutron star at core-collapse \citep{H03,ETsne}. \section{But what about other types?} With only one detection for the other SN types there is little that can be said. If the upper limits that have been obtained on the progenitors' luminosity are compared to the luminosity of observed Wolf-Rayet stars, the suspected progenitors, it is apparent that the pre-explosion images in all but one case are not deep enough to have revealed the progenitors. For a detection it is a waiting game, waiting for a type Ib/c SN to occur nearby with deep pre-explosion images. The suspected culprits for type Ib/c progenitors are Wolf-Rayet (WR) stars. These are evolved massive stars that have completed core hydrogen burning, have lost (or are in the process of losing) their hydrogen envelopes and are naked helium stars. These stars are also the preferred progenitors of long Gamma-ray bursts \citep{grbreview}. Despite this there are two interesting pre-explosion images for type Ib/c SNe. SN 2002ap is interesting because the limit is so low that we were only just unable to detect the progenitor and are able to rule out single stars as the progenitor. SN 2006jc is interesting because a luminous outburst transient was detected two years prior to the SN, and this may have interesting consequences for our understanding of stellar-wind mass-loss. \subsection{2002ap} \citet{crockett} used previously unused deep pre-explosion images to reexamine the progenitor of SN 2002ap. The limit is the most stringent to date on any type Ib/c progenitor. They were able to rule out all standard single-star models and suggest that the most likely progenitor was a binary. Figure \ref{02ap1} shows the luminosity limit derived from the pre-explosion images on a theoretical Hertzsprung-Russell diagram. The grey stellar track is of a $17{\rm M}_{\odot}$ star that has its hydrogen envelope stripped by interaction via Roche-Lobe Overflow in a binary.
The final stellar model has a final mass of $5{\rm M}_{\odot}$, which agrees with the mass inferred from modelling the SN lightcurve by \citet{Mazzali}. Also shown on the figure are the tracks of possible companion stars; the mass of any companion star can also be limited by the pre- and post-explosion images. The mass of the companion star must have been less than $\approx 14M_{\odot}$. Any mass transfer must have been quite inefficient, otherwise the secondary would have accreted a large amount of mass and be visible in the pre-explosion image. However, there is a problem with this model: there is a large amount of helium in the progenitor model. While the amount of helium required for a type Ib SN is uncertain, over $0.5M_{\odot}$ should produce the signature of helium lines in the SN spectrum. The binary model has around $1M_{\odot}$ of helium in the envelope. Therefore there is some uncertainty in whether this is a reasonable progenitor model. There are solutions: one is that the star was more massive and underwent more severe stripping during the binary interaction. Another is that we underestimate the strength of stellar winds of Wolf-Rayet stars. Increasing the mass-loss rates of a star by a factor of 2 - 3 would reduce the final mass of helium and therefore produce the required type Ic SN. A source of increased mass-loss rates could be rapid rotation [REF]. An alternative is that the mass-loss rates of WR stars are underestimated. This would be possible if the mass-loss rates are enhanced for only a short time of evolution. A possible mechanism has been suggested by \citet{petrovic} who discuss how helium star envelopes can become inflated. Inflated helium envelopes are caused by a bump in the opacity which causes the envelopes to become extended and the density profile of the envelope to invert so that density increases with radius from the core. \citet{petrovic} suggest that this inflation may not be physical and note that by increasing the mass-loss rates it is possible to remove the inflated envelope and density inversion. Figure \ref{02ap2} shows a standard stellar model with the mass-loss rates of \citet{NL00} and a model which replaces these with the mass-loss rates of \citet{petrovic} when the helium envelope becomes inverted. The figure shows that during the nitrogen-rich WN evolution much more mass is lost than in the standard case. Furthermore, most of the lost material is helium. The final model retains only $0.25M_{\odot}$ and the mass again agrees with the analysis of \citet{Mazzali}. With one non-detection it is not possible to decide if all WR mass-loss rates need to be increased, but if the number of non-detections increases then the situation may begin to become more serious and the mass-loss rates will have to be closely examined. The only other scenario is that WR stars produce large amounts of dust a few years before core-collapse. This would reduce their apparent luminosity in pre-explosion images. \begin{figure} \includegraphics[height=.3\textheight]{fig02ap1.ps} \caption{Theoretical Hertzsprung-Russell diagram. The dashed line is the luminosity limit from the non-detection of any object in the pre-explosion image, the grey box represents the uncertainty in the limit for WR stars \citep{crockett}. The solid grey line is the evolution for a $17M_{\odot}$ star which has its hydrogen envelope removed in a binary interaction. The solid black line is the evolution of a $11.9M_{\odot}$ binary companion and the dotted line is the evolution of a $15.3M_{\odot}$ binary companion.
At the time of the primary SN the latter companion would have been observed while the former would have remained undetected.} \label{02ap1} \end{figure} \begin{figure} \includegraphics[height=.3\textheight]{fig02ap2.ps} \caption{Theoretical Hertzsprung-Russell diagram of the evolution of initially $50M_{\odot}$ stars. The dashed line is the luminosity limit from the non-detection of a WR star in the pre-explosion image, the grey box represents the uncertainty. The solid grey line is for a standard WR model \citep{EV06} and the solid black line a model using the mass-loss rates of \citet{petrovic}.} \label{02ap2} \end{figure} \subsection{2006jc} \citet{pastorello} and \citet{folye} describe a most unusual SN. SN 2006jc was discovered on 9th October 2006 in the galaxy UGC4904. \citet{pastorello} found that it was spatially coincident with a bright optical transient that occurred in 2004. The SN itself was classified as a type Ib-n due to narrow helium lines in the spectra. The current interpretation is that the narrow helium lines are due to helium-rich material ejected by the star in a dramatic mass-loss episode that was observed as the optical transient. Then two years after this episode the star exploded as a type Ic SN and therefore the progenitor star must have been stripped of helium, otherwise broad helium emission lines would have been observed in combination with the narrow lines. The problem with this interpretation is that if the progenitor was a single star then our understanding of Wolf-Rayet stars and their winds must be revised. The only single stars known to produce similar bright optical transients are LBV stars \citep{lbv1,lbv2}. However these stars tend to retain some hydrogen which would have been observed in the SN spectrum. It is conceivable that there are transition objects between LBVs and WR stars that would lead to the observed evolution for the progenitor of SN2006jc. These objects may be rare, because it requires just the right amount of mass loss to remove all the helium just before core-collapse, although such objects would be more common at lower metallicities. An alternative to the LBV-like WR star is that the transient and mass loss were due to a pair-production instability causing a dramatic mass-loss episode a few years before a normal core-collapse SN \citep{langerinprep}. The models of \citet{heger02} show that massive stars with little mass loss will experience such outbursts prior to core-collapse. The upper metallicity limit is uncertain. It is possible to estimate from stellar models that it is below SMC metallicity. There is finally a possibility with an observed analogue: the progenitor may have been a binary with a WR star and an evolved LBV star. The outburst was produced by the LBV star while the WR star exploded to produce the type Ic SN. There is one similar system in the SMC that has been observed to undergo outbursts and at some point in the future may lead to a similar SN \citep{smclbvwrbin}. These systems are not as rare as might be first thought. If we assume LBV evolution happens after core-hydrogen burning is complete then any star which has a secondary companion that has completed core-hydrogen burning could be a possible LBV-WR system. Figure \ref{06jc} shows how the hydrogen burning lifetime compares to the total lifetime for stars with different initial masses. It is possible to see that the more massive stars can have a wider range of secondary masses for this kind of evolution.
If we assume the mass ratio of binaries ($q=M_{\rm secondary}/M_{\rm primary}$) has a flat distribution and is independent of primary mass then 60\% of binaries with a $200M_{\odot}$ primary might be LBV-WR systems, while this reduces to only 20\% for a primary initially $50M_{\odot}$. The only way to distinguish between these three plausible models will be the rate of these events and the metallicities of the host galaxies. The binary scenario will be only weakly metallicity dependent, the pair-production outburst will be concentrated at low metallicities, while the single-star LBV/WR scenario has an unknown metallicity dependence, but if it is related to inflation of the WR star then it will be concentrated at higher metallicities. The total rate of type Ib-n SN is $< 4$\% of all type Ib/c SN. The rate of GRBs as a fraction of all Ib/c SNe is between 0.1 and 1\% [REF]. It is possible therefore that some GRBs may occur with 2006jc-like SNe. If this is the case then the circumburst medium inferred from the optical afterglow would be a constant density medium due to the changes in mass-loss rate and wind speed. There are a number of GRBs where this has been observed and is another possible solution when the CBM is a constant density rather than that expected of a free-wind density profile \citep{vanmarle,eldridge}. \begin{figure} \includegraphics[height=.3\textheight]{fig06jc1.ps} \caption{The lifetimes for massive single stars. The solid line is the total lifetime versus initial mass. The dashed line is the time of the end of core hydrogen burning and the dotted-dashed line is the time of the end of core helium burning. The three shaded regions show the range of secondary masses which have completed core hydrogen burning and are LBV candidates by the time the primary (the high mass edge of the region) experiences a SN.} \label{06jc} \end{figure} \section{Conclusions} While the progenitors of type IIP SNe are becoming well understood there is still great uncertainty over the progenitors of other SN types. It is becoming apparent that our understanding of WR mass-loss may be incorrect and one possible reason that we do not observe more WR stars is that they lose more mass than currently thought and that some of this mass loss may be in luminous outbursts such as the one that preceded SN 2006jc. Or they produce copious amounts of dust in the last few years before core-collapse. \begin{theacknowledgments} JJE would like to thank Stephen Smartt, Andrea Pastorello, Seppo Matilla, Mark Crockett and Dave Young for many discussions and making his time at Queen's University Belfast so enjoyable. \end{theacknowledgments} \bibliographystyle{mn2e}
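As a compact statement of the LBV-WR fraction estimate above (the notation here is ours and is only a sketch of the argument, not taken from the cited stellar models): for a primary of initial mass $M_1$ with total lifetime $\tau_{\rm tot}(M_1)$, a secondary has completed core hydrogen burning by the time of the primary's SN if its initial mass exceeds $M_{\rm TO}(\tau_{\rm tot}(M_1))$, the initial mass whose core-hydrogen-burning lifetime equals $\tau_{\rm tot}(M_1)$. For a flat distribution of $q=M_{\rm secondary}/M_{\rm primary}$ on $(0,1]$ the fraction of possible LBV-WR systems is then $$f_{\rm LBV-WR}(M_1) \approx 1 - \frac{M_{\rm TO}(\tau_{\rm tot}(M_1))}{M_1},$$ which, reading $M_{\rm TO}$ off the dashed curve of Figure \ref{06jc}, gives the quoted $\sim$60\% for a $200M_{\odot}$ primary and $\sim$20\% for a $50M_{\odot}$ primary.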
{ "redpajama_set_name": "RedPajamaArXiv" }
9,665
package com.slickqa.client.impl;

import com.fasterxml.jackson.databind.ObjectMapper;
import com.slickqa.client.SlickClient;
import com.slickqa.client.apiparts.FilesApi;
import com.slickqa.client.apiparts.FilesQueryApi;
import com.slickqa.client.errors.SlickCommunicationError;
import com.slickqa.client.errors.SlickError;
import com.slickqa.client.model.StoredFile;
import org.apache.tika.Tika;

import javax.ws.rs.client.Entity;
import javax.ws.rs.client.WebTarget;
import javax.ws.rs.core.MediaType;
import javax.ws.rs.core.Response;
import java.io.IOException;
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;
import java.text.MessageFormat;
import java.util.Arrays;
import java.util.Date;

/**
 * Created by jcorbett on 1/17/15.
 */
public class FilesApiPart extends ApiPart<StoredFile> implements FilesQueryApi, FilesApi {
    private Tika tika;

    public FilesApiPart(ParentApiPart parent) {
        super(StoredFile.class, parent);
        tika = new Tika();
    }

    public FilesApiPart(ParentApiPart parent, ObjectMapper mapper) {
        super(StoredFile.class, parent, mapper);
        tika = new Tika();
    }

    @Override
    public StoredFile createAndUpload(Path localPath) throws SlickError {
        if (localPath == null) {
            throw new SlickError("Invalid use of createAndUpload, local path must not be null.");
        }
        if (!Files.isReadable(localPath)) {
            // MessageFormat (not String.format) understands the {0} placeholder
            throw new SlickError(MessageFormat.format("File at path {0} is not readable", localPath));
        }
        InputStream filestream;
        try {
            filestream = Files.newInputStream(localPath, StandardOpenOption.READ);
        } catch (IOException e) {
            throw new SlickError(MessageFormat.format("Problem occurred opening file {0} for reading.", localPath), e);
        }
        // Guess the mime type from the file contents, falling back to a generic binary type.
        String mimetype = "application/octet-stream";
        try {
            mimetype = tika.detect(localPath.toFile());
        } catch (IOException e) {
            //TODO: log exception
        }
        return createAndUpload(localPath.getFileName().toString(), mimetype, filestream);
    }

    @Override
    public StoredFile createAndUpload(String filename, String mimetype, InputStream stream) throws SlickError {
        StoredFile storedFile = new StoredFile();
        storedFile.setFilename(filename);
        storedFile.setMimetype(mimetype);
        storedFile.setUploadDate(new Date());
        storedFile.setLength(0L);
        storedFile = create(storedFile);
        if (storedFile.getChunkSize() == null || storedFile.getChunkSize() <= 0) {
            storedFile.setChunkSize(262144); // 256k size chunks by default.  The server *should* always set this
        }
        try {
            byte[] buffer = new byte[storedFile.getChunkSize()];
            int lastRead;
            do {
                lastRead = stream.read(buffer, 0, storedFile.getChunkSize());
                if (lastRead <= 0) {
                    break; // end of stream: nothing (more) to upload
                }
                // Only send the bytes that were actually read, not the whole (possibly stale) buffer.
                byte[] chunk = lastRead == buffer.length ? buffer : Arrays.copyOf(buffer, lastRead);
                // we have to go through the slick client because we have to have SlickClientImpl
                storedFile = getSlickClient().file(storedFile.getId()).addChunk(chunk);
            } while (lastRead == storedFile.getChunkSize());
        } catch (IOException e) {
            throw new SlickError(MessageFormat.format("Error reading from {0}", filename), e);
        }
        return storedFile;
    }

    @Override
    public StoredFile addChunk(byte[] data) throws SlickError {
        // we can't use makeRequest because it does JSON, and we need binary here, so this is copied code
        WebTarget target = getWebTargetForRequest().path("addchunk");
        Response lastResponse = null;
        Exception lastException = null;
        for (int i = 0; i < 3; i++) {
            SlickClient.OpenConnectionCount.incrementAndGet();
            lastResponse = target.request().method("POST", Entity.entity(data, MediaType.APPLICATION_OCTET_STREAM));
            SlickClient.OpenConnectionCount.decrementAndGet();
            if (lastResponse.getStatus() == 200) {
                try {
                    return mapper.readValue(lastResponse.readEntity(String.class), type);
                } catch (IOException e) {
                    lastException = e;
                }
            }
            try {
                if (i < 2) {
                    Thread.sleep((i + 1) * 1000); // simple back-off before retrying
                }
            } catch (Exception e) {
                // ignore interruption of the retry delay
            }
        }
        if (lastException != null)
            throw new SlickCommunicationError(target.getUri().toString(), lastResponse, lastException);
        else
            throw new SlickCommunicationError(target.getUri().toString(), lastResponse);
    }
}
{ "redpajama_set_name": "RedPajamaGithub" }
9,574
\subsection{Summary of the New Interpretation} The following results have been established in this Section: \begin{itemize} \item concepts of measurement and modified measurement methods have been introduced, \item a concept of labelled population has been developed, \item it has been shown that a labelled population with the modified measurement method can be considered in terms of a Joint Belief Distribution in the sense of DS-Theory, \item the process of "relabeling" of a labelled population has been defined and shown to be describable as a Belief Distribution, \item it has been shown that the relationship between the Belief Distributions of the resulting relabeled population, the basic population and the relabeling process can be expressed in terms of the Dempster-Rule-of-Independent-Evidence-Combination. \end{itemize} This last result can be considered as of particular practical importance. The interpretation schemata of DS Theory made by other authors suffered from one basic shortcoming: if we interpret population data as well as evidence in terms of their DS schemes, and then combine the evidence with population data (understood as a Dempster type of conditioning), then the resulting belief function cannot be interpreted in terms of the population data scheme, with subsequent updating of evidence making things worse until even the weakest relation between the belief function and the (selected sub)population is lost. In this paper we achieve a break-through: data have the same interpretation scheme after any number of evidential updates and hence the belief function can be verified against the data at any moment of DS evidential reasoning. The above definition and properties of the generalized labeling process should be considered from a philosophical point of view. If we take one by one the objects of our domain, possibly labelled previously by an expert in the past, and assign a label independently of the actual value of the attribute of the object, then we cannot claim in any way that such a process may be attributed to the opinion of the expert. Opinions of two experts may be independent of one another, but they cannot be independent of the subject under consideration. This is the point of view with which most people would agree, and should the opinions of the experts not depend on the subject, then at least one of them may be considered as not an expert. This is exactly what we want to point at with our interpretation: the precise pinpointing of what kind of independence is assumed within the Dempster-Shafer theory is essential for its usability. Under our interpretation, the independence lies in trying to select a label to fit to an object independently of whatever properties this object has (including its previous labeling). The distribution of labels for fitting is exactly identical from object to object. The point where the dependence of an object's labeling on its properties comes into play is when the measurement method states that the label does not fit. Then the object is discarded. From a philosophical point of view it means exactly that we try to impose our philosophy of life onto the facts: cumbersome facts are neglected and ignored. We suspect that this is exactly the justification of the name "belief function".
It expresses not what we see but what we would like to see.\\ Our suspicion is strongly supported by the quite recent statement of Smets that "authors (of multiple interpretations in terms of upper lower probability models, inner and outer measures, random sets, probabilities of provability, probabilities of necessity etc.) usually do not explain or justify the dynamic component, that is, how updating (conditioning) is to be handled (except in some cases by defining conditioning as a special case of combination). So I (that is Smets) feel that these partial comparisons are incomplete, especially as all these interpretations lead to different updating rules." Our interpretation explains both the static and dynamic component of the DST, and does not lead to any rule other than the Dempster Rule of Combination, hence it may be acceptable from the rigorous point of view of Smets. As, in the light of Smets' paper \cite{Smets:92}, we have presented the only correct probabilistic interpretation of the DS theory so far, we feel authorized to claim that our philosophical assessment of the DST is the correct one. \\ We have seen from the proofs of the theorems of this paper that our interpretation may be called {\em a} true one. The paper of Smets \cite{Smets:92} permits us to claim that we have found {\em the} true interpretation. \section{Belief from Data} As the DS-belief function introduced in this paper is defined in terms of frequentist measures, there exists a direct possibility of calculating the belief function from data. \\ It has to be assumed that we have a data set for which the measurements of type $M_l$ have been carried out for each singleton subset of the space of discourse $\Xi$. The results of these measurements may be available for example as a set-valued attribute associated with each object in such a way that the values actually appearing are those for which the singleton set tests were positive (i.e. TRUE). In this case, if for an object the attribute $X$ has the value $X=A$ with $A \subseteq \Xi$, then this object increases the count for the DS-Mass Function $m(A)$ (and for no other m).\\ Whenever any statistical quantity is estimated from data, there exists some risk (uncertainty) about unseen examples. If we assume some significance levels, we can complete the estimation by taking the lower bounds as actual estimates of the m's and shifting the remaining burden (summing up to 1) onto $m(\Xi)$, just taking for granted that doubtful cases may be considered as matching all the measurements. \section{Discussion} In the past, various interpretations have been sought for the Dempster-Shafer Bel-Functions. Two main streams of research were distinguished by Smets \cite{Smets:92}: probability related approaches and probability discarding approaches (the former disguised, the latter welcomed by Smets). Let us make some comparisons with our interpretation and its underlying philosophy. \subsection{Shafer and Smets} Shafer \cite{Shafer:90b} and Smets \cite{Smets:92} have made some strong statements in defense of the Dempster-Shafer theory against sharp criticism of this theory by its opponents as well as unfortunate users of the DST who wanted to attach it to the dirty reality (that is, objectively given databases). Smets \cite{Smets:92} and also initially Shafer \cite{Shafer:76} insisted on Bels not being connected to any empirical measure (frequency, probability etc.),
considering the domain of DST applications as the one where "we are ignorant of the existence of probabilities", and not one with "poorly known probabilities" (\cite{Smets:92}, p.324). The basic property of probability which, on this view, should be dropped in the DST axiomatization is the additivity of belief measures. Surely, it is easily possible to imagine situations where in real life additivity is not granted: imagine we had a cage with 3 pigs and we put into it 3 hungry lions two hours ago; how many animals are there now? ($3+3 <6$). Or ten years ago we left one young man and one young woman on an island in the middle of the Atlantic Ocean with food and weapons sufficing for 20 years. How many human beings are there now? ($1+1>2$). \\ The trouble is, however, that the objects stored in the databases of a computer behave usually (under normal operation) in an additive manner. Hence the DST is simply disqualified for any reasoning within human-collected data on the real world, if we accept the philosophy of Smets and Shafer. The question may be raised at this point, what else practically useful can be obtained from a computer reasoning on the basis of such a DST. If the DST models, as Smets and Shafer claim, human behaviour during evidential reasoning, then it would have to be demonstrated that humans indeed reason as the DST does. We take e.g. 1000 people who have never heard of the Dempster-Shafer theory, briefly explain the static component, provide them with two opinions of independent experts and expect them to answer what their final beliefs are. Should their answers correspond to the results of the DST (at least converge toward them), then the computer, if fed with our knowledge, would be capable of predicting our conclusions on a given subject. However, to my knowledge, no experiment like this has ever been carried out. Under these circumstances the computer reasoning with DST would tell us what we have to think and not what we think. But I don't suspect that anybody would be happy about a computer like this. Hence, from the point of view of computer implementation, the philosophy of Smets and Shafer is not acceptable. Compare also the Discussion in \cite{Halpern:92} on the subject. Both of them felt a bit uneasy about a total loss of reference to any scientific experiment checking practical applicability of the DST and suggested some probabilistic background for decision making (e.g. the pignistic probabilities of Smets), but I am afraid that by these interpretations they fall precisely into the same pitfalls they claimed to avoid by their highly abstract philosophy. As far as statistical properties of Shafer's \cite{Shafer:76} notion of evidence are concerned, sufficient criticism has been expressed by Halpern and Fagin (\cite{Halpern:92}, sections 4-5). Essentially it is pointed out there that "the belief that represents the joint observation is in general not equal to the combination of the belief functions representing the individual (independent) observations" (p.297). The other point raised there is that, though it is possible to capture properly in belief functions evidence in terms of probability of observations update functions (section 4 of \cite{Halpern:92}), it is not possible to do the same if we would like to capture evidence in terms of beliefs of observations update functions (section 5 of \cite{Halpern:92}). As far as Smets' probabilistic interpretations are concerned, let us "continue" the killer example of \cite{Smets:92} on pages 330-331. "There are three potential killers, A, B, C. Each can use a gun or a knife. I shall select one of them, but you will not know how I select the killer. The killer selects his weapon by a random process with p(gun)=0.2 and p(knife)=0.8. Each of A, B, C has his own personal random device, the random devices are unrelated. ...... Suppose you are a Bayesian and you must express your "belief" that the killer will use a gun. The BF (belief function) solution gives $Bel(gun)=0.2 \times 0.2 \times 0.2=0.008$. ..... Would you defend 0.2? But this applies only if I select a killer with a random device ...... But I never said I would use a random device; I might be a very hostile player and cheat whenever I can. ... . So you could interpret bel(x) as the probability that you are sure to win whatever Mother Nature (however hostile) will do." \\ Yes, I will try to continue the hostile Mother Nature game here. For completeness I understand that $Bel(knife)=0.8^3=0.512$ and $Bel(\{gun,knife\})=1$. But suppose there is another I, the chief of gangster science fiction physicians, making decisions independently of the chief I of the killers. The chief I of physicians knows of the planned murder and has three physicians X, Y, Z. Each can either rescue a killed man or let him die. I shall select one of them, but you will not know how I select the physician. The physician, in case of killing with a gun, selects his attitude by a random process with $p(rescue|gun)=0.2$ and $p(let\ die|gun)=0.8$, and he lets the person die otherwise. Each of X, Y, Z has his own personal random device, the random devices are unrelated. ...... Suppose you are a Bayesian and you must express your "belief" that the physician will rescue if the killer uses a gun. The BF (belief function) solution gives $Bel_1(rescue|gun)=0.2^3=0.008$, $Bel_1(let\ die|gun)=0.8^3=0.512$, $Bel_1(\{rescue,let\ die\}|gun)=1$. Also $Bel_2(let\ die|knife)=1$. As the scenarios for $Bel_1$ and $Bel_2$ are independent, let us combine them by the Dempster rule: $Bel_{12}=Bel_1 \oplus Bel_2$. We make use of Smets' claim that "the de re and de dicto interpretations lead to the same results" (\cite{Smets:92}, p. 333), that is $Bel(A|B)=Bel(\lnot B \lor A)$. Hence $$m_{12}(\{(gun,let\ die),(knife,let\ die),(knife,rescue)\})=0.480,$$ $$m_{12}(\{(gun,rescue),(knife,rescue)\})=0.008,$$ $$m_{12}(\{(knife,rescue),(gun,let\ die)\})=0.512.$$ Now let us combine $Bel_{12}$ with the original $Bel$. We obtain:\\ $$m\oplus m_{12}(\{(gun,let\ die)\})=0.008 \cdot 0.480+0.008 \cdot 0.512= 0.008 \cdot 0.992.$$ But these two unfriendly chiefs of gangster organizations can be extremely unfriendly and in fact your chance of winning a bet may be as bad as $0.008 \cdot 0.512$ for the event $(gun,let\ die)$. Hence the "model" proposed by Smets for understanding belief functions as "unfriendly Mother Nature" is simply wrong. If the Reader finds the combination of $Bel_2$ with the other Bels a little tricky, then for justification he should refer to the paper of Smets and have a closer look at all the other examples. \\ Now returning to the philosophy of "subjectivity" of Bel measures: even if a human being may possess his private view on a subject, it is only after we formalize the feeling of subjectiveness and hence ground it in the data that we can rely on any computer's "opinion". We hope we have found one such formalization in this paper.
The notion of labeling developed here substitutes one aspect of subjective human behaviour - if one has found one plausible explanation, one is too lazy to look for another one. So the process of labeling may express our personal attitudes, prejudices, sympathies etc. The interpretation drops deliberately the strive for maximal objectiveness aimed at by traditional statistical analysis. Hence we think this may be a promising path for further research going beyond the DS-Theory formalism Smets \cite{Smets:92} views the probability theory as a formal mathematical apparatus and hence puts it on the same footing as his view of the DST. However, in our opinion, he ignores totally one important thing: The abstract concept of probability has its real world counterpart of relative frequency which tends to behave approximately like the theoretical probability in sufficiently many experimental settings as to make the abstract concept of probability useful for practical life. And a man-in-the-street will expect of the DST to possess also such a counterpart or otherwise the DST will be considered as another version of the theory of counting devils on a pin-head.\\ Let us also have a look at interpretations disguised by Shafer and Smets (i.e. all the mentioned below):\\ \subsection{DST and Random Sets} The canonic random set interpretation \cite{Nguyen:78} is one with a statistical process over set instantiations. The rule of combination assumes then that two such statistically independent processes are run and we are interested in their intersections. This approach is not sound as empty intersection is excluded and this will render any two processes statistically dependent. We overcome this difficulty assuming in a straight forward manner that we are "walking" from population to population applying the Rule of Combination. Classical DS theory in fact assumes such a walk implicitly or it drops in fact the assumption that Bel() of the empty set is equal 0. In this sense the random set approaches may be considered as sound as ours. However, in many cases the applications of the model are insane. For example, to imitate the logical inference it is frequently assumed that we have a Bel-function describing the actual observed value of a predicate P(x), and a Bel-Function describing the implication "If P(x) then Q(x)" \cite{Ma:91}. It is assumed further that the evidence on the validity of both Bel's has been collected independently and one applies the DS-rule of combination to calculate the Bel of the predicate Q(x). One has then to assume that there is a focal m of the following expression: $m(\{(P(x) , Q(x)), (\lnot P(x) , Q(x)),(\lnot P(x) ,\lnot Q(x)) \})$ which actually means that with non-zero probability at the same time $P(x)$ and $\lnot P(x)$ hold for the same object as we will see in the following example: Let $Bel_1$ represent our belief in the implication, with focal points $$m_1(P(x) \rightarrow Q(x))=0.5, \ m_1(\lnot (P(x) \rightarrow Q(x)))=0.5, $$ Let further the independent opinion $Bel_2$ on P(x) be available in the form of focal points: $$m_2(P(x))=0.5, \ m_2(\lnot P(x))=0.5$$ Let $Bel_{12}=Bel_1 \oplus Bel_2$ represent the combined opinions of both experts. The focal points of $Bel_{12}$ are: $$ m_{12}(\{(P(x) , Q(x))\})=0.33, \ m_{12}(\{(P(x) ,\lnot Q(x))\})=0.33, $$ $$ m_{12}(\{(\lnot P(x) , Q(x)),(\lnot P(x) ,\lnot Q(x)) \})=0.33$$ $m_{12}(\{(P(x) , Q(x))\})=0.33$ makes us believe that there exist objects for which both P(x) and Q(x) holds. 
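As a quick way to check the combination just computed, here is a minimal sketch of Dempster's rule in Python (the helper function and the string labels for the four $P(x)$/$Q(x)$ combinations are ours, introduced purely for illustration):

from itertools import product

def dempster_combine(m1, m2):
    """Combine two mass functions (dict: frozenset -> mass) by Dempster's rule."""
    raw = {}
    conflict = 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            raw[inter] = raw.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb          # mass falling on the empty set
    if conflict >= 1.0:
        raise ValueError("total conflict: combination undefined")
    return {s: w / (1.0 - conflict) for s, w in raw.items()}

# Frame of discernment: the four combinations of truth values of P(x) and Q(x).
PQ, PnQ, nPQ, nPnQ = "P,Q", "P,~Q", "~P,Q", "~P,~Q"

# Expert 1: m1(P -> Q) = 0.5, m1(not (P -> Q)) = 0.5
m1 = {frozenset({PQ, nPQ, nPnQ}): 0.5, frozenset({PnQ}): 0.5}
# Expert 2: m2(P) = 0.5, m2(not P) = 0.5
m2 = {frozenset({PQ, PnQ}): 0.5, frozenset({nPQ, nPnQ}): 0.5}

m12 = dempster_combine(m1, m2)
for focal, mass in m12.items():
    print(sorted(focal), round(mass, 2))
# Each of {(P,Q)}, {(P,~Q)} and {(~P,Q),(~P,~Q)} receives mass 1/3 ~ 0.33,
# matching the focal points of Bel_12 given in the text.

Running it reproduces the three focal elements above with mass $1/3 \approx 0.33$ each; the mass $0.25$ that falls on the empty intersection is removed by the normalisation step of the rule.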
However, a sober (statistical) look at expert opinions suggests that all situations for which the implication $P(x) \rightarrow Q(x)$ holds must result from falsity of $P(x)$, hence whenever Q(x) holds then $\lnot P(x)$ holds. These two facts combined mean that P(x) and its negation have to hold simultaneously. This is actually an absurdity that is deliberately overlooked. The source of this misunderstanding is obvious: the lack of a proper definition of what is and what is not independent.
Our interpretation allows this situation to be remedied. We are not claiming that the predicate and its negation hold simultaneously. Instead we say that for one object we modify the measurement procedure (set a label) in such a way that, applied for calculation of $P(x)$, it yields TRUE, and at the same time for another object with the same original properties we make another modification of the measurement procedure (attach a label to it) so that measurement of $\lnot P(x)$ also yields TRUE, because possibly two different persons were enforcing their different beliefs onto different subsets of the data.\\
Our approach is also superior to the canonical random set approach in the following sense: the canonical approach requires knowledge of the complete random set realizations of two processes on an object to determine the combination of both processes. We, however, postpone the acquisition of knowledge of the precise instantiation of properties of the object by interleaving the concept of measurement and the concept of a labeling process. This has a close resemblance to practical processing whenever a diagnosis for a patient is made. If a physician finds a set of hypotheses explaining the symptoms of a patient, he will usually not try to carry out other testing procedures than those related to the plausible hypotheses. He clearly runs the risk that there exists a different set of hypotheses also explaining the patient's symptoms, and so a disease possibly present may not be detected in time, but usually the risk is sufficiently low to proceed in this way, and the cost savings may prove enormous. \\
\subsection{Upper and Lower Probabilities}
Still another approach was to handle Bel and Pl as lower and upper probabilities \cite{Dempster:67}. This approach is of limited use, as not every set of lower and upper probabilities leads to Bel/Pl functions \cite{Kyburg:87}, hence it establishes only a unidirectional relationship between probability theory and the DS-theory. Under our interpretation, the Bel/Pl function pair may be considered as a kind of interval approximation to some "intrinsic" probability distributions which, however, cannot be accessed by feasible measurements and are only of interest as a kind of qualitative explanation of the physical quantities really measured.\\
Therefore another approach was to handle them as lower/upper envelopes of some probability density function realization \cite{Kyburg:87}, \cite{Fagin:91B}. However, the DS rule of combination of independent evidence failed. \\
\subsection{Inner and Outer Measures}
Still another approach was to handle Bels/Pls in probabilistic structures rather than in probabilistic spaces \cite{Fagin:91}. Here, the DS-rule could be justified as one of the possible outcomes of independent combinations, but no stronger properties were available. This is due to the previously mentioned fact that exclusion of empty intersections actually renders most conceivable processes dependent. Please notice that under our interpretation no such ambiguity occurs.
This is because we not only drop empty intersecting objects but also relabel the remaining ones, so that any probability calculated afterwards does not refer to the original population.
It was also attempted to drop the DS-rule altogether in probabilistic structures, but then it was not possible to find a meaningful rule for multistage reasoning \cite{Halpern:92}. This is a very important negative outcome. As the Dempster-Shafer-Theory is sound in this respect and possesses many useful properties (as mentioned in the Introduction), one should seek an interpretation meeting the axiomatic system of the DS Theory rather than try to violate its fundamentals. Hence we consider our interpretation as a promising one, for which a decomposition of the joint distribution paralleling the results for probability distributions may be found based on the data.\\
\subsection{Rough Set Approach}
An interesting alternative interpretation of the Dempster-Shafer Theory was found within the framework of the rough set theory \cite{Skowron:93}, \cite{Grzymala:91}. Essentially the rough set theory searches for an approximation of the value of a decision attribute by some other (explaining) attributes. It usually happens that those attributes are capable only of providing a lower and an upper approximation to the value of the decision attribute (that is, the set of vectors of explaining attributes supporting only this value of the decision variable, and the set of vectors of explaining attributes supporting also this value of the decision variable, respectively - for details see the texts of Skowron \cite{Skowron:93} and Grzyma{\l}a-Busse \cite{Grzymala:91}). The Dempster Rule of combination is interpreted by Skowron \cite{Skowron:93b} as a combination of opinions of independent experts, who possibly look at different sets of explaining attributes and hence may propose different explanations.
The difference between our approach and the one based on rough sets lies first of all in the ideological background: we assume that the "decision attribute" is set-valued whereas the rough-set approach assumes it to be single-valued. This could have been overcome by some tricks which will not be explained in detail here. But the combination step is essential here: if we assume that the data sets for forming the knowledge of these two experts are exhaustive, then it can never occur that these opinions are contradictory. Yet the DST rule of combination uses the normalization factor for dealing with cases like this. Also, the opinions of experts may have only the form of a simple (that is, deterministic) support function. Hence the rough-set interpretation implies axioms not actually present in the DST: it is on the one hand restrictive, and on the other hand not fully conforming to the general DST. From our point of view the DST would change the values of decision variables rather than recover them from expert opinions.
Here we come again to the problem of viewing the independence of experts. The DST assumes some strange kind of independence within the data: the proportionality of the distribution of masses of sets of values among intersecting subsets weighted by their masses in the other expert opinion.
Particularly unfortunate for the rough set theory is the fact that, given a value of the decision variable, the respective indicating vectors of explaining variable values must be proportionally distributed among the experts not only for this decision attribute value, but also for all the other decision attribute values that ever belong to the same focal point. Hence applicability of the rough set approach is hard to justify by a simple ("usual", as Shafer wants) statistical test. On the other hand, the statistical independence required for Dempster rule application within our approach is easily checked.
To demonstrate the problem of the rough set theory with the combination of opinions of independent experts, let us consider an example of two experts having the combined explanatory attributes $E_1$ (for expert 1) and $E_2$ (for expert 2), both trying to guess the decision attribute $D$. Let us assume that $D$ takes one of two values: $d_1,d_2$, $E_1$ takes one of three values $e_{11}, e_{12}, e_{13}$, and $E_2$ takes one of three values $e_{21}, e_{22}, e_{23}$. Furthermore let us assume that the rough set analysis of an exhaustive set of possible cases shows that the value $e_{11}$ of the attribute $E_1$ indicates the value $d_1$ of the decision attribute $D$, $e_{12}$ indicates $d_2$, and $e_{13}$ indicates the set \{$d_1,d_2$\}. Also let us assume that the rough set analysis of an exhaustive set of possible cases shows that the value $e_{21}$ of the attribute $E_2$ indicates the value $d_1$ of the decision attribute $D$, $e_{22}$ indicates $d_2$, and $e_{23}$ indicates the set \{$d_1,d_2$\}.
From the point of view of Bayesian analysis four cases of causal influence may be distinguished (arrows indicate the direction of dependence):
$$E_1 \rightarrow D \rightarrow E_2$$
$$E_1 \leftarrow D \leftarrow E_2$$
$$E_1 \leftarrow D \rightarrow E_2$$
$$E_1 \rightarrow D \leftarrow E_2$$
From the point of view of Bayesian analysis, in the last case the attributes $E_1$ and $E_2$ have to be unconditionally independent; in the remaining cases $E_1$ and $E_2$ have to be independent conditioned on $D$.
Let us consider first unconditional independence of $E_1$ and $E_2$. Then we have that:
$$(\Prob{\omega}{P} E_1(\omega)=e_{11} \land E_2(\omega)=e_{22}) = (\Prob{\omega}{P} E_1(\omega)=e_{11} ) \cdot (\Prob{\omega}{P} E_2(\omega)=e_{22}) >0 $$
However, it is impossible that $(\Prob{\omega}{P} E_1(\omega)=e_{11} \land E_2(\omega)=e_{22}) > 0$, because we are dealing with experts who may possibly provide us with information that is not specific enough, but will never provide us with contradictory information. We conclude that unconditional independence of experts is impossible.\\
Let us turn to independence of $E_1$ and $E_2$ conditioned on $D$.
We introduce the following notation:\\
$$p_1 = \Prob{\omega}{P} D(\omega)=d_1$$
$$p_2 = \Prob{\omega}{P} D(\omega)=d_2$$
$$e_1'= \Prob{\omega}{ (D(\omega)=d_1)\land P} E_1(\omega)=e_{11}$$
$$e_3'= \Prob{\omega}{ (D(\omega)=d_1)\land P} E_1(\omega)=e_{13}$$
$$f_1'= \Prob{\omega}{ (D(\omega)=d_1)\land P} E_2(\omega)=e_{21}$$
$$f_3'= \Prob{\omega}{ (D(\omega)=d_1)\land P} E_2(\omega)=e_{23}$$
$$e_2"= \Prob{\omega}{ (D(\omega)=d_2)\land P} E_1(\omega)=e_{12}$$
$$e_3"= \Prob{\omega}{ (D(\omega)=d_2)\land P} E_1(\omega)=e_{13}$$
$$f_2"= \Prob{\omega}{ (D(\omega)=d_2)\land P} E_2(\omega)=e_{22}$$
$$f_3"= \Prob{\omega}{ (D(\omega)=d_2)\land P} E_2(\omega)=e_{23}$$
Let $Bel_1$ and $m_1$ be the belief function and the mass function representing the knowledge of the first expert, and let $Bel_2$ and $m_2$ be the belief function and the mass function representing the knowledge of the second expert. Let $Bel_{12}$ and $m_{12}$ be the belief function and the mass function representing the knowledge contained in the combined usage of the attributes $E_1,E_2$ if used for prediction of $D$ - on the grounds of the rough set theory. It can be easily checked that:\\
$$m_1(\{d_1\})=e_1' \cdot p_1,\ m_1(\{d_2\})=e_2" \cdot p_2,\ m_1(\{d_1,d_2\})=e_3' \cdot p_1 + e_3" \cdot p_2$$
$$m_2(\{d_1\})=f_1' \cdot p_1,\ m_2(\{d_2\})=f_2" \cdot p_2,\ m_2(\{d_1,d_2\})=f_3' \cdot p_1 + f_3" \cdot p_2$$
and if we assume the conditional independence of $E_1$ and $E_2$ conditioned on $D$, then we obtain
$$m_{12}(\{d_1\})=e_1' \cdot f_1' \cdot p_1 + e_1' \cdot f_3' \cdot p_1 + e_3' \cdot f_1' \cdot p_1 $$
$$m_{12}(\{d_2\})=e_2" \cdot f_2" \cdot p_2 + e_2" \cdot f_3" \cdot p_2 + e_3" \cdot f_2" \cdot p_2 $$
$$m_{12}(\{d_1,d_2\})=e_3' \cdot f_3' \cdot p_1 + e_3" \cdot f_3" \cdot p_2$$
However, the Dempster rule of combination would result in (c - normalization constant):\\
$$m_1\oplus m_2(\{d_1\})=c \cdot (e_1' \cdot f_1' \cdot p_1^2 + e_1' \cdot f_3' \cdot p_1^2 + e_1' \cdot f_3" \cdot p_1 \cdot p_2 + e_3' \cdot f_1' \cdot p_1^2 + e_3" \cdot f_1' \cdot p_1 \cdot p_2)$$
$$m_1\oplus m_2(\{d_2\})=c \cdot (e_2" \cdot f_2" \cdot p_2^2 + e_2" \cdot f_3' \cdot p_1 \cdot p_2 + e_2" \cdot f_3" \cdot p_2^2 + e_3' \cdot f_2" \cdot p_1 \cdot p_2 + e_3" \cdot f_2" \cdot p_2^2 )$$
$$m_1\oplus m_2(\{d_1,d_2\})=c \cdot (e_3' \cdot f_3' \cdot p_1^2 + e_3" \cdot f_3" \cdot p_2^2 + e_3' \cdot f_3" \cdot p_1 \cdot p_2 + e_3" \cdot f_3' \cdot p_1 \cdot p_2)$$
Obviously, $Bel_{12}$ and $Bel_1\oplus Bel_2$ are not identical in general. We conclude that conditional independence of experts is also impossible. Hence no usual statistical independence assumption is valid for the rough set interpretation of the DST. This fact points to where the difference between the rough set interpretation and our interpretation lies: in our interpretation, traditional statistical independence is incorporated into Dempster's scheme of combination (the labelling process).
By the way, the lack of correspondence between statistical independence and the Dempster rule of combination is characteristic not only of the rough set interpretation, but also of most of the other ones. The Reader should carefully read the clumsy statements of Shafer about DST and statistical independence in \cite{Shafer:90b}.
\subsection{General Remarks}
The Dempster-Shafer Theory has existed for over two decades already. Though it was claimed to reflect various aspects of human reasoning, it has not been widely used in expert systems until recently due to the high computational complexity.
Three years ago, however, an important paper of Shenoy and Shafer \cite{Shenoy:90} was published, along with papers of other authors similar in spirit, which meant a break-through for the application of both Bayesian and Dempster-Shafer theories in reasoning systems, because it demonstrated that if a joint (Bayesian or DS) belief distribution can be decomposed in the form of a belief network then it can be both represented in a compact manner and marginalized efficiently by local computations. This fact makes them suitable as alternative fundamentals for the representation of (uncertain) knowledge in expert system knowledge bases \cite{Henrion:90}. Reasoning in Bayesian belief networks had been the subject of intense research work earlier as well \cite{Shachter:90}, \cite{Shenoy:90}, \cite{Pearl:86}, \cite{Pearl:88}. There exist methods of imposing various logical constraints on the probability density function and of calculating marginals not only of single variables but of complicated logical expressions over elementary statements of the type X=x (x belonging to the domain of the variable X) \cite{Pearl:88}. There exist also methods for determining the decomposition of a joint probability distribution given by a sample into a Bayesian belief network \cite{Chow:68}, \cite{Rebane:89}, \cite{Acid:91}, \cite{Srinivas:90}. It is also known that formally probability distributions can be treated as special cases of Dempster-Shafer belief distributions (with singleton focal points) \cite{Halpern:92}.
However, for the application of DS Belief-Functions for the representation of uncertainty in expert system knowledge bases there exist several severe obstacles. The main one is the missing frequentist interpretation of the DS-Belief function, and hence neither a comparison of the deduction results with experimental data is possible, nor can any quantitative or even qualitative conclusions be drawn from results of deduction in Dempster-Shafer-theory based expert systems \cite{Ma:91}. Numerous attempts to find a frequentist interpretation have been reported (e.g. \cite{Fagin:91}, \cite{Fagin:91B}, \cite{Grzymala:91}, \cite{Halpern:92}, \cite{Kyburg:87}, \cite{Shafer:90b}, \cite{Skowron:93}). But, as Smets \cite{Smets:92} states, they failed either when trying to incorporate the Dempster rule or when explaining the nature of the probability interval approximation. The Dempster-Shafer Theory therefore experienced sharp criticism from several authors in the past \cite{Pearl:88}, \cite{Halpern:92}. It is suggested in those critical papers that the claim of the DST to represent uncertainty stemming from ignorance is not valid. Hence alternative rules of combination of evidence have been proposed. However, these rules fail to fulfill the Shenoy/Shafer axioms of local computation \cite{Shenoy:90} and hence are not tractable in practice. These failures suggested to us that one should nonetheless try to find a meaningful frequentist interpretation of the DST compatible with the Dempster rule of combination.
We have carefully studied several of these approaches and are convinced that the key to many of those failures (besides those mentioned by Halpern in \cite{Halpern:92}) was: (1) treating the Bel-Pl pair as an interval approximation and (2) viewing combination of evidence as a process of approaching a point estimation.
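As a small illustration of the remark that probability distributions are formally the special case of Dempster-Shafer belief distributions with singleton focal points, the following sketch (the names and numerical values are ours, chosen arbitrarily) checks that Bel and Pl then coincide, i.e. both reduce to the underlying probability measure.
\begin{verbatim}
from itertools import chain, combinations

# Mass function with singleton focal sets only: an ordinary probability distribution.
m = {frozenset({"a"}): 0.2, frozenset({"b"}): 0.3, frozenset({"c"}): 0.5}

def bel(A):  # Bel(A) = sum of masses of focal sets contained in A
    return sum(w for s, w in m.items() if s <= A)

def pl(A):   # Pl(A) = sum of masses of focal sets intersecting A
    return sum(w for s, w in m.items() if s & A)

universe = {"a", "b", "c"}
subsets = chain.from_iterable(combinations(universe, k) for k in range(1, 4))
for A in map(frozenset, subsets):
    assert abs(bel(A) - pl(A)) < 1e-12  # Bel = Pl = P(A) for every subset A
print("Bel and Pl coincide for singleton focal points")
\end{verbatim}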
In this paper we claim that the most reasonable treatment of Bel's and Pl's is to consider them to be POINT ESTIMATES of a probability distribution over set-valued attributes (rather than interval estimates of a probability distribution over single-valued attributes). Of course, we also claim that the Bel-Pl pair estimates some probability density function by an interval, but in our interpretation that "intrinsic" probability density function is of little interest for the user. The combination of evidence represents in our interpretation a manipulation of data by imposing our prejudices on them (rather than striving for extraction of true values).
Under these assumptions a frequentistically meaningful interpretation of the Bel's can be constructed, which remains consistent under combination of the joint distribution with "evidence", giving concrete quantitative meaning to the results of expert system reasoning. Within this interpretation we were able to prove the correctness of the Dempster-Shafer rule. This means that this frequentist interpretation is consistent with the DS-Theory to the largest extent ever achieved.
\section{Conclusions}
\begin{itemize}
\item According to Smets \cite{Smets:92} there has existed no proper frequentist interpretation of the Dempster-Shafer theory of evidence so far.
\item In this paper a novel frequentist interpretation of the Dempster-Shafer-Theory has been found, allowing for a close correspondence between Belief and Plausibility functions and the real data.
\item This interpretation fits completely into the framework of the Bel/Pl definitions and into the Dempster rule of combination of independent evidence, relating for the first time in DST history this rule to plain statistical independence and thus overcoming difficulties of many alternative interpretations of the Dempster-Shafer-Theory. Hence this interpretation dismisses the claim of Smets \cite{Smets:92} that such an interpretation cannot exist.
\item It is distinguished by the fact of postponing the moment of measuring object properties until after the combination of evidence, which may even lead to dropping some costly measurements altogether.
\item The interpretation allows for a subjective treatment of Bel's and Pl's as approximations to the unknown probability distribution of an intrinsic, but not accessible, attribute.
\item The introduced concept of a labeled population may to some extent represent subjectivity in viewing probabilities.
\item This interpretation questions the common usage of the DST as a means to represent and to reason with uncertainty stemming from ignorance. This view has already been shaken by the works of Pearl \cite{Pearl:88} and Halpern and Fagin \cite{Halpern:92}. What our interpretation states clearly is that the DST should be viewed as a way to express unwillingness to accept objective facts rather than as a means to express ignorance about them. Hence it should be called a theory of prejudices rather than a theory of evidence.
\end{itemize}
Finally, I feel obliged to apologize and to say that all critical remarks towards interpretations of the DST elaborated by other authors result from deviations of those interpretations from the formalism of the DST. I do not, however, consider a deviation from the DST a crime, because modifications of the DST may have, and possibly do have, a greater practical importance than the original theory.
The purpose of this paper was to shed a bit more light onto the intrinsic nature of pure DST and not to call for orthodox attitudes towards the DST.
\section*{Acknowledgements}
I am indebted to the anonymous referees who greatly contributed to enhancing the quality of presentation of this paper.
\newcommand{\ReadingsIn}{G. Shafer, J. Pearl eds: Readings in Uncertain Reasoning, (ISBN 1-55860-125-2, Morgan Kaufmann Publishers Inc., San Mateo, California, 1990)}
\section{Introduction}
The Dempster-Shafer theory of evidence has been found very attractive by many researchers as a way of modeling reasoning behaviour under uncertainty stemming from ignorance. It provides a framework for the representation of certainty of a logical formula without the necessity of expressing commitment to any of its consequences. E.g. we can express our 100\% belief in the fact that Tweedy's wife is either Mary or Jane and at the same time express our total ignorance as to which of them is actually his wife (zero belief attached to the statement "Mary is Tweedy's wife" and zero belief in "Jane is Tweedy's wife").
If a theory is to become of practical importance in expert system applications - as a foundation for knowledge representation and reasoning - at least the following conditions must be fulfilled:
\begin{itemize}
\item there must exist an efficient method for reasoning within this framework
\item there must exist a clear correspondence between the contents of the knowledge base and the real world
\item there must be a clear correspondence between the reasoning method and some real world process
\item there must exist a clear correspondence between the results of the reasoning process and the results of the real world process corresponding to the reasoning process
\end{itemize}
Only under such circumstances can we say that the expert system is helpful, as it allows us either to predict or to follow retrospectively real world processes.
Dempster initiated the theory of evidence in his paper \cite{Dempster:67} and other works, and Shafer developed this theory in his book \cite{Shafer:76} and other publications. Though it became obvious that the DST (Dempster-Shafer Theory) captures many intuitions behind human dealing with uncertainty (e.g. as mentioned above), it did not become a foundation for the implementation of expert systems with uncertainty due to its claimed high computational complexity \cite{Grzymala:91}. In recent years, however, a number of efficient methods for dealing with DS reasoning have been developed - see e.g. \cite{Shenoy:90} and citations therein. So the first of the above-mentioned conditions is met. Meeting the other conditions proved to be more complicated.
Smets \cite{Smets:92} and also initially Shafer \cite{Shafer:76} insisted on Bels (measures of uncertainty in the DST) not being connected to any empirical measure (frequency, probability etc.), considering the domain of DST applications as the one where "we are ignorant of the existence of probabilities", and warning that the DST is "not a model for poorly known probabilities" (\cite{Smets:92}, p.324). The question may be raised, however, of what practical use can be obtained from a computer reasoning on the basis of such a DST. It would have to be demonstrated that humans indeed reason as the DST prescribes. Then the computer, if fed with our knowledge, would be capable of predicting our conclusions on a given subject. However, to my knowledge, no experiment confirming that humans actually use the DST internally for reasoning under uncertainty has been carried out.
Under these circumstances the computer reasoning with the DST would tell us what we have to think and not what we think. Hence, from the point of view of computer implementation, the position of Smets and Shafer is not acceptable.
The other category of DST interpretations, described by Smets as approaches assuming the existence of an underlying probability distribution which is only approximated by the Bels (called by him PXMY models), is represented by the early works of Dempster \cite{Dempster:67}, papers of Kyburg \cite{Kyburg:87}, Fagin \cite{Fagin:91}, \cite{Fagin:91B}, Halpern \cite{Halpern:92}, Skowron \cite{Skowron:93}, Grzyma{\l}a-Busse \cite{Grzymala:91} and others. Both Smets \cite{Smets:92} and Shafer \cite{Shafer:90b} consider such approaches as inadequate, as most of them give rise to contradictions and counter-intuitive results. As Smets states, "Far too often, authors concentrate on the static component (how beliefs are allocated?) and discover many relations between TBM (transferable belief model of Smets) and ULP (upper lower probability) models, inner and outer measures (Fagin and Halpern \cite{Fagin:89}), random sets (Nguyen \cite{Nguyen:78}), probabilities of provability (Pearl \cite{Pearl:88}), probabilities of necessity (Ruspini \cite{Ruspini:86}) etc. But these authors usually do not explain or justify the dynamic component (how are beliefs updated?), that is, how updating (conditioning) is to be handled (except in some cases by defining conditioning as a special case of combination). So I (that is Smets) feel that these partial comparisons are incomplete, especially as all these interpretations lead to different updating rules." (\cite{Smets:92}, pp. 324-325). Smets attributes this failure to the very nature of attempts of assigning a probabilistic interpretation.
We disagree with Smets and will show in this paper that the creation of a probabilistic interpretation of the DST incorporating the Dempster rule of combination is actually possible. However, this new interpretation indicates the need for a drastic change in viewing the Dempster rule: it does not accommodate evidence, but prejudices. How this statement is to be understood will become clear later. Nonetheless our interpretation allows for the assignment of an experimentally verifiable numerical meaning to a DS knowledge base, assigns a numerical meaning to the reasoning process (the DS rule of combination), and yields agreement between the numerical empirical interpretation of the results of DS reasoning and the results of a real world process. This means that we have an interpretation fitting the formalism of the DS theory to the largest extent ever achieved.
Smets (\cite{Smets:92}, p.327) subdivided the DST into two categories: a closed world category (as if excluding the possibility of contradictions in the "evidence") and an open world category of the DST (as if allowing for this). Let us assume that two independent experts elicited their beliefs concerning the event A: both assigned beliefs of 0.7 to the event A, and beliefs of 0.3 to the event $\lnot$A. The open world DST will lead us to a combined belief in A of 0.5 and in $\lnot$A of 0.1. The closed world assumption on the other hand will assign a combined belief in A of 0.7 and in $\lnot$A of 0.3. I find it a dismaying property of a theory if collecting agreeing information from independent experts should lower my belief in the opinions of both experts.
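The open-world figures quoted above can be reproduced by a short, purely illustrative computation of the unnormalized (open-world) combination; the names are ours and the values are rounded in the text.
\begin{verbatim}
# Two independent experts, each with m(A) = 0.7 and m(not A) = 0.3.
# Open-world combination: conflicting products are left on the empty set
# instead of being renormalized away.
m_A     = 0.7 * 0.7                # both experts support A
m_not_A = 0.3 * 0.3                # both experts support not A
m_empty = 0.7 * 0.3 + 0.3 * 0.7    # conflicting mass, assigned to the empty set

print(m_A, m_not_A, m_empty)       # 0.49, 0.09, 0.42 -- roughly the 0.5 and 0.1 above
\end{verbatim}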
Hence only closed world theories are the subject of this paper.
We first recall the formal definition of the DS-Theory, then introduce some notation used throughout the rest of the paper. Subsequently we develop our interpretation of the joint belief distribution and of evidential updating. We conclude with a brief comparison of our interpretation with other attempts.\\
\section{Formal Definition of the Dempster-Shafer Theory of Evidence}
Let us make the remark that if an object is described by a set of discrete attributes $X_1,X_2,...,X_n$ taking values from their respective domains $\Xi_1,\Xi_2,...,\Xi_n$, then we can think of it as being described by a complex attribute X having vector values, that is, the domain $\Xi$ of X is equal to
$$\Xi=\{(x_1,x_2,...,x_n) | x_i \in \Xi_i , i=1,...,n\}$$
So unless specified otherwise let us assume that we are talking of objects described by a single attribute X taking its values from the domain $\Xi$. We say that $\Xi$, the domain of X, is our space of discourse spanned by the attribute X. We shall also briefly say instead that X is our space of discourse. For the purpose of this paper we define the Bel-function as follows (compare also \cite{Halpern:92}, \cite{Smets:92}, \cite{Shafer:90b}):
\begin{df} \label{BelDef} The Belief Function in the sense of the DS-Theory is defined as Bel:$2^\Xi \rightarrow [0,1]$ with $\Xi=\Xi_1 \times \Xi_2 \times ... \times \Xi_n$ being the space spanned by the attribute $X= X_1 \times X_2 \times \dots \times X_n$ with $$\forall_{A;A \subseteq \Xi} \quad Bel(A) = \sum_{B \subseteq A} m(B)$$ where m is a Mass Function in the sense of the DS-Theory (see \rDef{mDef} below). \end{df}
The function m is defined as
\begin{df} \label{mDef} The Mass Function in the sense of the DS-Theory is defined as m:$2^\Xi \rightarrow [0,1]$ with $$ m(\emptyset)=0$$ $$ \sum_{A \in 2^\Xi} m(A)=1 $$ $$ \forall_{ A \in 2^\Xi } \quad m(A) \geq 0 $$ \end{df}
\begin{df} Whenever $m(A) > 0$, we say that A is a focal point of the Bel-Function. \end{df}
Let us also introduce the Pl-Function (Plausibility) as:
\begin{df} The Plausibility Function in the sense of the DS-Theory is defined as $$\forall_{A; A \subseteq \Xi} \quad Pl(A) = 1-Bel(\Xi-A ) $$ \end{df}
Besides the above definitions, a characteristic feature of the DS-Theory is the so-called DS-rule of combination of independent evidence:
\begin{df} Let $Bel_{E_1}$ and $Bel_{E_2}$ represent independent information over the same space of discourse. Then: $$Bel_{E_1,E_2}=Bel_{E_1} \oplus Bel_{E_2}$$ defined as: $$m_{E_1,E_2}(A)=c \cdot \sum_{B,C; A= B \cap C} m_{E_1}(B) \cdot m_{E_2}(C)$$ (c - normalizing constant) represents the Combined Belief-Function of Two Independent Beliefs. \end{df}
\section{Notation}
F. Bacchus in his paper \cite{Bacchus:90} on the axiomatization of probability theory and first order logic shows that probability should be considered as a quantifier binding free variables in first order logic expressions, just like universal and existential quantifiers do. So if e.g. $\alpha(x)$ is an open expression with a free variable $x$ then $[\alpha(x)]_x$ means the probability of truth of the expression $\alpha(x)$. (The quantifier $[]_x$ binds the free variable $x$ and yields a numerical value ranging from 0 to 1 and meeting all the Kolmogoroff axioms). Within the expression $[\alpha(x)]_x$ the variable $x$ is bound. See \cite{Bacchus:90} for a justification of why other types of integration of probability theory and first order logic or propositional logic fail.
See that work also for a justification of the rejection of the traditional view of probability as a function over sets. While sharing Bacchus' view, we find his notation a bit cumbersome, so throughout this paper we change it to be similar to the universal and existential quantifiers. Furthermore, Morgan \cite{Morgan:91} insisted that probabilities always be considered in close connection with the population they refer to. Bacchus' expression $[\alpha(x)]_x$ we rewrite as:\\
$\Prob{x}{P}\alpha(x)$ - the probability of $\alpha(x)$ being true within the population P.
Here P (the population) is a unary predicate, with P(x)=TRUE indicating that the object x ($\in \Omega$, that is, an element of a universe of objects) belongs to the population under consideration. If P and P' are populations such that $\forall_x P'(x)\rightarrow P(x)$ (that is, membership in P' implies membership in P, or in other words: P' is a subpopulation of P), then we distinguish two cases:\\
case 1: $(\Prob{x}{P}P'(x))=0$ (that is, the probability of membership in P' with respect to P is equal to 0) - then (according to \cite{Morgan:91}) for any expression $\alpha(x)$ in the free variable x the following holds for the population P': $(\Prob{x}{P'}\alpha(x))=1$\\
case 2: $(\Prob{x}{P}P'(x))>0$ - then (according to \cite{Morgan:91}) for any expression $\alpha(x)$ in the free variable x the following holds for the population P': $$(\Prob{x}{P'}\alpha(x))= \frac {\Prob{x}{P}(\alpha(x) \land P'(x))} {\Prob{x}{P}P'(x)}$$
We also use the following (now traditional) mathematical symbols:\\
$\forall_{x}\alpha(x)$ - always $\alpha(x)$ (universal quantifier) \\
$\exists_{x}\alpha(x)$ - there exists an x such that $\alpha(x)$ (existential quantifier) \\
\begin{tabular}{lp{7cm}}
$\alpha \land \beta$ & - logical AND of expressions\\
$\bigwedge_{B} \alpha(B)$ & - logical AND over all instantiations of the expression $\alpha(B)$ in the free variable $B$\\
$\alpha \lor \beta$ & - logical OR of expressions\\
$\bigvee_{B} \alpha(B)$ & - logical OR over all instantiations of the expression $\alpha(B)$ in the free variable $B$\\
$\lnot$ & - logical negation\\
$P \cap Q$ & - intersection of two sets\\
$P \cup Q$ & - union of two sets\\
\end{tabular}
\section{A New Interpretation of Belief Functions}
The empirical meaning of the new interpretation of the DS Belief function will be explained by means of the following example:
\begin{Bsp} \label{COOTexample} Let us consider a daily-life example. Buying a bottle of hair shampoo is not a trivial task, from the side of both the consumer and the manufacturer. If the consumer arrives at the realization that shampoos may fall into one of four categories: high quality products (excellent for maintaining the cleanness and health of the consumer) (H), moderate quality products (keeping just all Polish industry standards) (M), suspicious products (violating some industry standards) (S) and products dangerous for health and life (containing bacteria or fungi or other microbes causing infectious or invasive diseases, containing carcinogenic or poisonous substances etc.) (D), then he has a hard time upon leaving his house for shopping. Clearly, precise chemical, biochemical and medical tests exist which may place the product precisely into one of those obviously exclusive categories. But the Citizen\footnote{The term "Citizen" was a fine socialist-time descriptor allowing one to avoid the cumbersome usage of words like "Mr.", "Mrs."
and "Miss"} Coot\footnote{This family name was coined as abbreviation for "Citizen Of Our Town"} usually neither has a private chemical laboratory nor enough money to make use of required services. Hence Citizen Coot coins a personal set of "quality" tests $M ^1$ mapping the pair (bottle of shampoo, quality) into the set \{TRUE, FALSE\} (the letter O - object - stands for bottle of shampoo, H, M, S, D indicate quality classes: high, moderate, suspicious, dangerous): \\ \begin{enumerate \item If the shampoo is heavily advertised on TV then it is of high quality ($M ^1(O,\{H\})=TRUE$) and otherwise not ($M ^1(O,\{H\})=FALSE$). \item If the name of the shampoo was never heard on TV, but the bottle looks fine (pretty colours, aesthetic shape of the bottle), then the shampoo must be of moderate quality ($M ^1(O,\{M\})=TRUE$) and otherwise not ($M ^1(O,\{M\})=FALSE$) \item If the packaging is not fine or the date of production is not readable on the bottle or the product is out of date, but the shampoo smells acceptably otherwise then it is suspicious ($M ^1(O,\{S\})=TRUE$) and otherwise not ($M ^1(O,\{S\})=FALSE$) \item If either the packaging is not fine or the date of production is not readable on the bottle or the product is out of date, and at the same time the shampoo smells awfully, then it is dangerous ($M ^1(O,\{D\})=TRUE$ and otherwise not ($M ^1(O,\{D\})=FALSE$) \end{enumerate} Notice that the criteria are partially rational: a not fine looking bottle may in fact indicate some decaying processing of the shampoo or at least that the product remains for a longer time on the shelf already. Bad smell is usually caused by development of some bacteria dangerous for human health Notice also that test for high and moderate quality are enthusiastic, while the other two are more cautious. Notice that the two latter tests are more difficult to carry out in a shop than the leading two (the shop assistant would hardly allow to open a bottle before buying). Also, there may be no time to check whether the shampoo was actually advertised on TV or not (as the son who carefully watches all the running advertisements stayed home and does his lessons). Hence some simplified tests may be quite helpful: \begin{itemize} \item $M ^1(O,\{S,D\})$: If the packaging is not fine or the product is out of date or the production date is not readable then the product is either suspicious or dangerous ($M ^1(O,\{S,D\})=TRUE$ and otherwise not ($M ^1(O,\{D,S\})=FALSE$). \item $M ^1(O,\{H,M\})$: If the packaging looks fine, then the product is either of high or moderate quality ($M ^1(O,\{M,H\})=TRUE$ and otherwise not ($M ^1(O,\{M,H\})=FALSE$). \end{itemize} Clearly these tests are far from being precise ones, but for the Citizen Coot no better tests will be ever available. What is more, they are not exclusive: if one visits a dubious shop at a later hour, one may buy a product meeting both $M ^1(O,\{H\})$ and $M ^1(O,\{D\})$ as defined above ! Let us assume we have two types of shops in our town: good ones (G) and bad ones (B). (Let $M ^2:\Omega \times 2^\{G,B\} \rightarrow \{TRUE, FALSE\}$ indicate for each shampoo in which shop type it was available. Further, let $M ^3:\Omega \times 2^{\{ H, M, S, D \} \times \{G,B\}} \rightarrow \{TRUE, FALSE\}$ indicate for each shampoo both its quality and the type of shop it was available from. Let clearly $M ^1(O,Quality) \land M ^2(O,Shop)=M ^3(O,Quality \times Shop)$.\\ The good shops are those with new furniture, well-clothed shop assistants. 
Bad ones are those with an always dirty floor, old furniture, or badly clothed shop assistants. Clearly, again, both shop categories may be considered (nearly) exclusive, as well-clothed shop assistants seldom fail to care for the floors. Let us assume we have obtained the statistics of shampoo sales in our town presented in Table \ref{statshamp}:\\
\begin{table} \label{statshamp} \caption{Sold shampoos statistics}
\begin{center}
\begin{tabular}{r|rrr|r}
Quality true for & Shop type B & G & B,G & Total\\
\hline
H & 20 & 100 & 70 & 190 \\
M & 80 & 100 & 110 & 290 \\
S & 50 & 5 & 15 & 70 \\
D & 10 & 1 & 3 & 14 \\
H,S & 15 & 10 & 14 & 39 \\
M,S & 30 & 20 & 25 & 75 \\
H,D & 8 & 2 & 3 & 13 \\
M,D & 15 & 7 & 10 & 32 \\
\hline
total & 228 & 245 & 250 & 723 \\
\end{tabular}
\end{center}
\end{table}
Rows and columns are marked with those singleton tests which were passed (e.g. in the upper left corner there are 20 shampoo bottles sold in an undoubtedly bad shop and having exclusively high quality, that is, for all those bottles (O) $M ^1(O,\{H\})=TRUE$, $M ^1(O,\{M\})=FALSE$, $M ^1(O,\{S\})=FALSE$, $M ^1(O,\{D\})=FALSE$, and $M ^2(O,\{B\})=TRUE$, $M ^2(O,\{G\})=FALSE$.) The measurement of $M ^1(O,\{H\})$ would yield TRUE for 190+39+13=242 bottles and FALSE for the remaining 481 bottles, the measurement of $M ^1(O,\{D\})$ would yield TRUE for 14+13+32=59 bottles, and FALSE for the remaining 664 bottles. The measurement of $M ^1(O,\{S,D\})$ would turn out TRUE in 70+14+39+75+13+32=243 cases and FALSE in the remaining 480 cases.
\end{Bsp}
In general let us assume that we know that objects of a population can be described by an intrinsic attribute X taking exclusively one of the n discrete values from its domain $\Xi=\{v_1,v_2,...,v_n\}$. Let us assume furthermore that to obtain knowledge of the actual value taken by an object we must apply a measurement method (a system of tests) $M$.
\begin{df} \label{MDef} Let $X$ be a set-valued attribute taking as its values non-empty subsets of a finite domain $\Xi$. By a measurement method of the value of the attribute $X$ we understand a function: $$M: \Omega \times 2^\Xi \rightarrow \{TRUE,FALSE\}$$ where $\Omega$ is the set of objects (or population of objects), such that
\begin{itemize}
\item $ \forall_{\omega; \omega \in \Omega} \quad M(\omega,\Xi)=TRUE$ (X takes at least one of the values from $\Xi$)
\item $ \forall_{\omega; \omega \in \Omega} \quad M(\omega,\emptyset)=FALSE$
\item whenever $M(\omega,A)=TRUE$ for $\omega \in \Omega$, $A \subseteq \Xi$ then for any $B$ such that $A \subset B$ $M(\omega,B)=TRUE$ holds,
\item whenever $M(\omega,A)=TRUE$ for $\omega \in \Omega$, $A \subseteq \Xi$ and if $card(A)>1$ then there exists $B$, $B \subset A$ such that $M(\omega,B)=TRUE$ holds,
\item for every $\omega$ and every $A$ either $M(\omega,A)=TRUE$ or $M(\omega,A)=FALSE$ (but never both).
\end{itemize}
$M(\omega,A)$ tells us whether or not any of the elements of the set A belong to the actual value of the attribute $X$ for the object $\omega$. \end{df}
The measuring function M(O,A), if it takes the value TRUE, states for an object O and a set A of values from the domain of X that X takes for this object (at least) one of the values in A. It makes sense to talk of such a measuring function assigning truth values to sets of values of an attribute if it is possibly cheaper to measure M(O,A) than to measure M(O,B) whenever $B \subset A$, and we are interested in avoiding measuring M(O,B) whenever possible, that is, whenever measuring M(O,A) suffices.
For example, measuring the pH-value with a pH-meter may turn out to be more expensive than doing so with litmus paper, with the advantage of higher precision. The above definition assumes that this measurement method is superset- and subset-consistent, that is: whenever $M(object,A)=TRUE$ then $$\forall_{B; A \subset B} \quad M(object,B)=TRUE$$ holds, and if $card(A)>1$ then $$\exists_{B; B \subset A} \quad M(object,B)=TRUE$$ holds. The superset consistency means that if a test for a larger set of values indicates FALSE then it is not necessary to test its subsets, as they will not contribute to our knowledge of the value of X (cost savings). The subset consistency means that if the M-test for a given value set gives TRUE then in the end at least one of its singleton subsets would yield TRUE for the respective M-test. It is clearly a matter of convention: we assume that we can always provide the answer YES or NO, and whenever we are in doubt we still answer YES. Such a convention is not an unusual one: in various legal systems "anything that is not forbidden by law is permitted"; in default logics, if a default statement cannot be proven wrong, it is assumed correct. In any case, this means that from the universe of all possible objects, a concrete measurement method selects a population for which its assumptions are satisfied. E.g. if we have a measurement method for measuring pH-values, we surely consider an aqueous sodium solution as a member of our universal population, but never a car as such (because then the pH-value has no meaning at all). Furthermore, let us consider this measurement method a stable one, that is, whenever the same object is presented, the results are the same.
However, let us assume that the measurement method is not completely reliable: it measures only quantities related to the quantity X and not X itself. So it is conceivable that e.g. $M(object,\{v_1\})=TRUE$ and at the same time $M(object,\{v_2\})=TRUE$ though both values of X are deemed to be exclusive. For practical reasons, however, it may not bother us at all, as the true value of X may either not be accessible at all (e.g. the true event of the suspect killing or not killing a person belongs to the past and can never be recalled as such), may be too expensive to access (e.g. the most reliable method of checking whether a match can ignite or not is to ignite it, but thereafter it would be useless, so we check only its color, dryness etc.), or it may be prohibitive to access for other reasons, e.g. social ones (sex may be treated with extremely high precision as an exclusive attribute taking the values male, female, but we would be reluctant to check the primary features before deciding to call someone Mr, Miss or Mrs). Besides this, it may prove too expensive to check all the elementary hypotheses (e.g. in medical diagnosis), so that after stating $M(object,\{v_1\})=TRUE$ we do not bother about other alternatives, that is, about the degree of imprecision of the relationship between the measured quantities and the real values of X. We assume that the approximations of X achieved by the measurement method are in most cases sufficient for our decision making (whatever its nature), so we do not insist on closer knowledge of X itself. So though we wish $X$ to take singleton values only, we actually live with the fact that for our practical purposes $X$ is possibly set-valued.\\
Let us make at this point some remarks on practical relevance.
\begin{Bsp} If we are making statistical tests on equality or non-equality of two quantities (means, variances, distributions), we can purely logically say that the quantities are either equal or not equal, but never both. However, the available indirect measurement method (by sampling) may lead to the statement that there is neither evidence to reject equality nor to reject non-equality. So we say that in those cases both equality and inequality hold. We still enjoy statistical inference because in sufficiently many other cases statistics provides us with more precise results. \end{Bsp}
\begin{Bsp} Similarly, if we consider components of a chemical substance, the measurement methods for absence and presence of a component may be different from one another, depending on whether we should be more sensitive to its presence or to its absence, and hence in some cases applying both may lead to apparently contradictory results. \end{Bsp}
Let us furthermore assume that with each application of the measurement procedure some costs are connected, increasing roughly with the decreasing size of the tested set A, so that we are ready to accept the results of previous measurements in the form of a pre-labeling of the population. So:
\begin{df} A {\em label} $L$ of an object $\omega \in \Omega$ is a subset of the domain $\Xi$ of the attribute $X$. \\
A {\em labeling} under the measurement method $M$ is a function $l: \Omega \rightarrow 2^\Xi$ such that for any object $\omega \in \Omega$ either $l(\omega)=\emptyset$ or $M(\omega,l(\omega))=TRUE$.\\
Each {\em labelled object} (under the labeling $l$) consists of a pair $(O_j,L_j)$, $O_j$ - the j$^{th}$ object, $L_j=l(O_j)$ - its label.\\
By a {\em population under the labeling $l$} we understand the predicate $P:\Omega \rightarrow \{TRUE,FALSE\}$ of the form $P(\omega)=TRUE \ iff \ l(\omega) \neq \emptyset$ (or alternatively, the set of objects for which this predicate is true). \\
If for every object of the population the label is equal to $\Xi$ then we talk of an {\em unlabeled population} (under the labeling $l$), otherwise of a {\em pre-labelled} one. \end{df}
Let us assume that in practice we apply a modified measurement method $M_l$ being a function:
\begin{df} Let $l$ be a labeling under the measurement method $M$. Let us consider the population under this labeling. The modified measurement method $$M_l: \Omega \times 2^\Xi \rightarrow \{TRUE,FALSE\}$$ where $\Omega$ is the set of objects, is defined as $$M_l(\omega,A)= M(\omega,A \cap l(\omega) )$$ (Notice that $M_l(\omega,A)=FALSE$ whenever $A \cap l(\omega)= \emptyset$.) \end{df}
For a labeled object $(O_j,L_j)$ ($O_j$ - the proper object, $L_j$ - its label) and a set A of values from the domain of X, the modified measurement method tells us that $X$ takes one of the values in A if and only if it in fact takes a value from the intersection of A and $L_j$. Expressed differently, we discard a priori any value not in the label. Please pay attention also to the fact that, given a population P for which the measurement method $M$ is defined, the labeling $l$ (according to its definition) selects a subset of this population, possibly a proper subset, namely the population P' under this labeling: $P'(\omega)=P(\omega) \land M(\omega,l(\omega))$. Hence $M_l$ is possibly defined for a "smaller" population P' than $M$ is. \\
\begin{Bsp} In practice, we frequently have to do with a pre-labelled population.
The statistics of illnesses based on poly-clinical data are based on a population pre-labelled by financial status (whether or not people are ready to visit a physician with a less serious disease given their economic background), educational background (whether or not they properly estimate the seriousness of the disease, whether or not they pay attention to symptoms) etc. Similarly, in chemical analysis the knowledge of substrates pre-labels the tests on the composition of the product (irrelevant measurements are a priori discarded), etc. \end{Bsp}
\begin{Bsp} To continue the Citizen Coot example, we may believe that in good shops only moderate and high quality products are available, that is, we assign to every shampoo $\omega$ the label $l(\omega)=\emptyset$ (we discard it from our register) if $\omega$ denies our belief that there are no suspicious or dangerous products in a good shop, $l(\omega)=\{H,M\}$ if it is a moderate or high quality product in a good shop, and $l(\omega)=\Xi$ for all the other products. After this rejection of shampoos not fitting our beliefs we have to do with the (slightly smaller) sold-shampoos population from Table \ref{modstatshamp}: \\
\begin{table} \label{modstatshamp} \caption{Modified sold shampoos statistics}
\begin{center}
\begin{tabular}{r|rrr|r}
Quality true for & Shop type B & G & B,G & Total\\
\hline
H & 20 & 112 & 70 & 202 \\
M & 80 & 127 & 110 & 317 \\
S & 65 & 0 & 0 & 65 \\
D & 13 & 0 & 0 & 13 \\
H,S & 15 & 0 & 14 & 29 \\
M,S & 30 & 0 & 25 & 55 \\
H,D & 8 & 0 & 3 & 11 \\
M,D & 15 & 0 & 10 & 25 \\
\hline
total & 246 & 239 & 232 & 717 \\
\end{tabular}
\end{center}
\end{table}
Please notice the following changes: suspicious and dangerous products encountered in good shops were totally dropped from the statistics (their existence was not revealed to the public). Suspicious and dangerous products from shops with unclear classification (good/bad shops) were declared to come from bad shops. Products from good shops which obtained both the label high quality and dangerous were simply moved into the category of high quality products (the bad smell was just concealed), etc. This is frequently the sense in which our beliefs have an impact on our attitude towards real facts, and we will see below that the Dempster-Shafer Theory reflects such a view of beliefs. \end{Bsp}
Let us now define the following function:
\begin{df} $$Bel_P ^{M}(A)=\Prob{O}{P}(\lnot M(O,\Xi-A))$$ which is the probability that the test M, while being true for A, rejects every hypothesis of the form X=$v_i$ for every $v_i$ not in A, for the population P. We shall call this function "the belief exactly in the result of measurement". \end{df}
Let us also define the function:
\begin{df} $$Pl_P ^{M}(A)=\Prob{O}{P}( M(O,A))$$ which is the probability of the test M holding for A for the population P. Let us refer to this function as the "Plausibility of taking any value from the set A". \end{df}
Last but not least, let us define the function:
\begin{df} $$m_P ^{M}(A)=\Prob{O}{P}( \bigwedge_{B;B=\{v_i\}\subseteq A} M(O,B) \land \bigwedge_{B;B=\{v_i\}\subseteq \Xi-A} \lnot M(O,B) ) $$ which is the probability that all the tests for the singleton subsets of A are true and those outside of A are false, for the population P.
\end{df}
Let us illustrate the above concepts with the Citizen Coot example:
\begin{Bsp} \label{nonlabelEx} For the belief function for the sold-bottles population and the measurement function $M ^3$, if we identify probability with relative frequency, we have the focal points given in Table \ref{nonlabtable}. \end{Bsp}
\begin{table} \label{nonlabtable} \caption{Mass and Belief Function under Measurement Method $M ^3$}
\begin{center}
\begin{tabular}{|l|r|r|}
\hline
Set &$m_P ^{M ^3}$& $Bel_P ^{M ^3}$\\
\hline
\{(H,B) \} & 20/723 & 20/723 \\
\{(H,G) \} & 100/723 & 100/723 \\
\{(H,B),(H,G) \} & 70/723 & 190/723 \\
\{(M,B) \} & 80/723 & 80/723 \\
\{(M,G) \} & 100/723 & 100/723 \\
\{(M,B),(M,G) \} & 110/723 & 290/723 \\
\{(S,B) \} & 50/723 & 50/723 \\
\{(S,G) \} & 5/723 & 5/723 \\
\{(S,B),(S,G) \} & 15/723 & 70/723 \\
\{(D,B) \} & 10/723 & 10/723 \\
\{(D,G) \} & 1/723 & 1/723 \\
\{(D,B),(D,G) \} & 3/723 & 14/723 \\
\{(H,B),(S,B) \} & 15/723 & 85/723 \\
\{(H,G),(S,G) \} & 10/723 & 115/723 \\
\{(H,B),(S,B),(H,G),(S,G) \} & 14/723 & 299/723 \\
\{(M,B),(S,B) \} & 30/723 & 160/723 \\
\{(M,G),(S,G) \} & 20/723 & 125/723 \\
\{(M,B),(S,B),(M,G),(S,G) \} & 25/723 & 435/723 \\
\{(H,B),(D,B) \} & 8/723 & 38/723 \\
\{(H,G),(D,G) \} & 2/723 & 103/723 \\
\{(H,B),(D,B),(H,G),(D,G) \} & 3/723 & 217/723 \\
\{(M,B),(D,B) \} & 15/723 & 105/723 \\
\{(M,G),(D,G) \} & 7/723 & 108/723 \\
\{(M,B),(D,B),(M,G),(D,G) \} & 10/723 & 336/723 \\
\hline
\end{tabular}
\end{center}
\end{table}
It is easily seen that:
\input{dsttheorems.tex}
\input{dstende.tex}
\end{document}
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} It is now generally accepted that collisions and mergers between gas-rich galaxies often generate intense star-formation activity and associated strong infrared emission \citep[e.g.,][]{Joseph1985, Soifer1987, Soifer1991}. Ultra Luminous Infrared Galaxies, ULIRGs (L$_\mathrm{IR}$ $>$ 10$^{12}$ L$_{\odot}$), and LIRGs (10$^{12}$ $>$ L$_\mathrm{IR}$/L$_{\odot} >$ 10$^{11}$) frequently involve mergers or interactions of gas-rich galaxy pairs, with the likelihood of them being associated with a major merger increasing with infrared (IR) luminosity \citep{Armus1987, Sanders1988a, Sanders1988b, Sanders1996, Elbaz2002, Armus2009}. While it is clear that major mergers play an important role in generating high IR luminosities in the local universe, their role at higher redshift is still being explored. Shocks and turbulence potentially play a role in changing the conditions of the gas in collisional galaxies, not always leading to enhancements in star formation. In the local Hickson Compact Groups, \citet{Alatalo2015} found evidence that multiple collisions can quench or significantly suppress star formation in some systems where turbulence and shocks are present \citep[see also][]{Lisenfeld2017}. These galaxies had previously been found to contain large volumes of warm molecular hydrogen that emit their energy mainly in the mid-IR, and were believed to be shock-heated \citep{Cluver2013}. An extreme example is found in the Stephan's Quintet system. Here, a large filament of molecular gas is found in the intergalactic medium in which a large fraction of the gas is warm and in a shock-heated phase \citep{Appleton2006, Guillard2009, Cluver2010, Appleton2017}. Shocks, though hard to detect in LIRGs and ULIRGs because of the dominant effects of star formation on optical emission-line diagnostics, are being increasingly detected with the advent of spatially resolved optical integral field unit (IFU) spectroscopy \citep{Rich2011, Rich2014, Rich2015}. How large-scale shocks and turbulence affect the star formation in such galaxies, and how important this process is in higher-redshift systems, is currently unknown. An interesting example of an ongoing major merger that may be caught in a highly disturbed state is the Taffy galaxy pair UGC 12915/4 (hereafter Taffy-N and Taffy-S for simplicity). Despite having recently undergone a strong head-on collision, the Taffy system appears surprisingly normal in its IR properties, with a total L$_\mathrm{IR}$ = 4.5 $\times$ 10$^{10}$ L$_{\odot}$ summed over the whole system based on multi-wavelength {\it Spitzer} and {\it Herschel} SED photometric fitting \citep{Appleton2015}; see also \citet{Jarrett1999} and \citet{Sanders2003}. The reason that the system is so normal in the IR, despite its recent violent history, is not known. It may be that we are catching the Taffy system in a peculiar moment where most of its gas is so disturbed that it cannot yet generate significant star formation. If so, studying the conditions of the gas in between the galaxies (referred to as the bridge) may well yield interesting insight into how shocks and turbulence can inhibit star formation in violently colliding galaxies. The Taffy galaxies were named for the discovery of a bridge of radio continuum emission, stretching, like salt-water taffy (candy), between the galaxies \citep{Condon1993}.
Evidence suggests that the two galaxies collided 25-30 Myr ago, allowing their stellar components to pass through each other, but stripping $\sim$7 $\times$ 10$^9$ M$_{\odot}$ of molecular and atomic gas into a bridge between them \citep{Braine2003, Gao2003, Zhu2007}. There is more gas in the bridge than in the two galaxies combined. The bridge appears to be strongly disturbed (and probably turbulent), based on kinematically-broad CO line studies of the bridge, and strong mid-IR H$_2$ emission and [CII]157.7$\mu$m lines suggestive of shocks \citep{Peterson2012, Peterson2018}. Despite its high gas mass, the average star formation rate (SFR) in the entire bridge derived through SED fitting is quite low, $\sim$0.45 M$_{\odot}$ yr$^{-1}$, excluding the prominent extragalactic HII region seen south-west of UGC 12915, which was separately found to have a SFR of 0.24 M$_{\odot}$ yr$^{-1}$ \citep{Appleton2015}. Numerical models of such a head-on collision between two gas-rich galaxies \citep[e.g.,][]{Struck1997} and a detailed model of the Taffy system \citep{Vollmer2012} provide strong support for the idea that the gas left behind in the center of mass frame of the collision would be highly turbulent, and that some would be strongly shock heated. \citet{Appleton2015} detected faint extended soft X-ray emission, and several compact point X-ray sources in the bridge, the former being consistent with shock-heated gas that has not had time to completely cool since the collision occurred. Finally, \citet{Lisenfeld2010} concluded that the radio emission in the bridge could be explained in terms of cosmic rays accelerated in magnetic fields compressed in shocks. Although the Taffy galaxies have been studied quite extensively at longer wavelengths \citep{Condon1993, Jarrett1999}, very little work has been done at visible or near-IR wavelengths. \citet{Bushouse1987} presented early digital video camera observations which showed H$\alpha$ emission from the inner disks of both galaxies and emission from the extragalactic HII regions in the bridge. Pa$\alpha$ observations from the ground were also made by \citet{Komugi2012}. The galaxies show strong disturbances in their optical structure, including rings and loops, and the possible recent onset of star formation in the bridge, including at least one prominent extragalactic HII region and fainter clusters--some of which are seen in archival NICMOS observations from HST (Appleton et al.\ in preparation). This paper represents the first major study of the ionized gas phase in the Taffy system and bridge. We provide, for the first time, a detailed exploration of both the kinematics and excitation properties of the optical emission line gas in the Taffy system. The paper is organized as follows: \S\ref{sec:data_methods} describes the observations and methods used in the paper. We describe the fitting process used on the double line profiles and the gas kinematics through H$\alpha$ channel maps and velocity field moment maps in \S\ref{sec:double_profiles} and \S\ref{sec:kinematics}, respectively. The effects of dust extinction on the measured line fluxes are discussed in \S\ref{sec:dust}. We describe the results from our line diagnostic diagrams in \S\ref{sec:bpt_results}. In \S\ref{sec:frac_from_sf}, we discuss our results on the properties of the ionized gas and its excitation mechanisms through the use of emission-line ratio diagnostic diagrams and comparison with shock models.
In \S\ref{sec:frac_from_sf} and \S\ref{sec:sfr} we discuss our estimates of the ionized gas fraction from star formation and the SFR in the system. We discuss the evidence for a post-starburst population in the underlying starlight in \S\ref{sec:hbeta_ew_results}. In \S\ref{sec:conclusions} we present our conclusions. We assume a comoving distance to the galaxies of 62 Mpc based on a mean heliocentric velocity for the system of 4350 km~s$^{-1}$, and a Hubble constant of 70 km~s$^{-1}$ Mpc$^{-1}$. \section{Observations, Data Reduction, and Analysis Methods} \label{sec:data_methods} The IFU data presented in this work were obtained with the VIRUS-P Spectrograph at McDonald Observatory \citep{Hill2008,Blanc2010}. VIRUS-P is the Visible Integral-field Replicable Unit Spectrograph prototype (now called the George and Cynthia Mitchell Spectrograph, GCMS) mounted on the 2.7 m Harlan J.\ Smith telescope. The IFU has 246 fibers (each fiber has an angular diameter of $4\farcs16$ on the sky) with a $\frac{1}{3}$ filling factor. We used several cycles of a 3-point dither pattern to completely cover the 2.8 sq.\ arcminute field of view. We used the gratings VP2 and VP4 for our blue and red channel spectra, respectively. VP2 and VP4 have spectral resolutions of 1.6 and 1.5~\AA\, and covered ranges of 4700--5350~\AA\ and 6200--6850~\AA, respectively. This spectral resolution corresponds to a velocity resolution of $\sim$100~km~s$^{-1}$ and $\sim$70~km~s$^{-1}$ at the wavelengths of H$\beta$ in VP2 and H$\alpha$ in VP4, respectively. We made observations of the Taffy galaxies on 2012 Jan 31 and Feb 2 (blue spectrometer) and 2012 Feb 1 (red spectrometer), with a total exposure time per dither position of 2200s (blue) and 1200s (red). Conditions were photometric at the time of the observations, with moderate seeing of 1.8-2.5 arcsec (less than the diameter of a fiber). These data were processed using the VACCINE pipeline, which identified and traced each fiber on the CCD chip, and performed bias, flat-field and wavelength calibration (based on lamp spectra) on a fiber-by-fiber basis for the science frames. VACCINE is a Fortran-based reduction package developed for the HETDEX Pilot Survey \citep{Adams2011} and the VENGA project \citep{Blanc2010}. Cosmic-ray removal was then performed using the IDL routine LA-Cosmic \citep{vanDokkum2001}. \subsection{Flux Calibration and Cube Building} \label{sec:data_reduction} The flux calibration, pointing refinements, and final cube construction were performed in three steps following the methods described by \citet{Blanc2013}: i) a relative spectrophotometric calibration was derived and applied to all the fibers. The standard star Hz 15 (HIP 21776) was observed in a 6-point dither pattern (so that its light fell on several different fibers). An algorithm was used to solve for the position of the star on the fibers and determine the spectrophotometric transformation from native (ADU) units across the spectrum to units of erg s$^{-1}$ cm$^{-2}$ \AA$^{-1}$. The results were applied to all the fibers irrespective of throughput. This step resulted in a relative flux uncertainty across the band of $\sim$8$\%$; ii) astrometry and absolute flux calibration were performed using a bootstrapping method, in which a reconstructed image of the galaxy, derived by integrating the light from each fiber, was cross-correlated with calibrated images of the Taffy system from the SDSS \citep{York2000} in the g- and r-bands, suitably convolved to the resolution of the VIRUS-P fiber system.
This helped refine the astrometry, and the assembly of the final cube from the individual observations of each field. The cross-correlation also allowed the spectrum in each fiber to be absolutely scaled to the SDSS band in question. The details of this procedure are given in \citet{Blanc2013}. Tests performed in that paper show that the absolute spectrophotometric flux calibration has a typical accuracy of 15-30$\%$, after taking into account the uncertainties in SDSS calibration and the VIRUS-P relative spectrophotometric accuracy; iii) a final flux-calibrated 3-d spectral cube was created by combining all the various observational pointing frames into a single interpolated cube with resulting 2 x 2 arcsec$^2$ spaxels ($\sim$0.3 kpc arcsec$^{-1}$ based on the assumed distance of 62 Mpc). These processes were repeated for the red and blue channels, creating final flux-calibrated blue and red spectral cubes. \subsection{Spectral Mapping, continuum and emission-line fitting} \label{sec:data_processing} The processing and extraction of astrophysical information from the data cubes was done using a combination of IRAF/PyRAF, IDL and Python routines. Before beginning our analysis we smoothed the data cubes spatially, but \emph{not} in the spectral direction, using a Gaussian kernel with a standard deviation of 1.47 pixels. This was done to boost the signal-to-noise ratio in areas that we were interested in; particularly the Taffy bridge region which has relatively low signal-to-noise compared to the galaxies. This spatial smoothing effectively reduces the noise in spectra from individual spaxels by a factor of $\sim$2. We used these spatially smoothed cubes for all of the analysis done in this work. For fitting each individual spaxel in the IFU data we used the IDL software toolkit LZIFU (LaZy-IFU) \citep{Ho2016}. LZIFU automates fitting multiple emission lines superimposed on a continuum for multiple spaxels in each channel and provides 2D maps of continuum and line fluxes, velocities and velocity dispersion. It is capable of fitting emission lines that are superimposed on deep absorption features and also emission lines with multiple velocity components. Figures \ref{fig:lzifu_fit_hbeta_abs} and \ref{fig:lzifu_fit_vel_comp} show our fitting results for two individual spaxels that show these features in their spectra. Figure \ref{fig:lzifu_fit_hbeta_abs} shows a spaxel that has strong H$\beta$ absorption and H$\beta$ emission superimposed on the absorption trough. This spaxel lies very close to the center of Taffy-N. This absorption must be accounted for with the continuum fitting to get accurate emission line fluxes as well as an accurate line profile. The emission lines also show evidence for double line profiles in many positions across the system. As an example, Figure \ref{fig:lzifu_fit_vel_comp} shows a spaxel which lies close to the edge of the extragalactic HII region which clearly shows two separate velocity components. \begin{figure} \centering \includegraphics[width=0.48\textwidth]{Fig1_revised.pdf} \caption{Our LZIFU fitting results for a spaxel that has deep H$\beta$ absorption and H$\beta$ emission superimposed on the absorption trough. The top left and right panels show the blue and red data respectively, along with their model fits. The bottom panels show the corresponding residuals from the fitting. The gray line shows the raw data from the spaxel and the blue and red lines show the model fits to the respective channels. 
Note that the Figure does not show the full wavelength coverage of the data but instead is focused on showing the relevant absorption and emission features i.e. H$\beta$ and the [OIII]$\lambda\lambda$4959,5007 doublet in the blue channel and H$\alpha$ and its neighboring [NII]$\lambda\lambda$6548,6583 doublet lines in the red channel.} \label{fig:lzifu_fit_hbeta_abs} \end{figure} \begin{figure} \centering \includegraphics[width=0.48\textwidth]{Fig2_revised.pdf} \caption{Same as Figure \ref{fig:lzifu_fit_hbeta_abs} but now showing the LZIFU fitting results for another spaxel which displays distinct velocity components more clearly.} \label{fig:lzifu_fit_vel_comp} \end{figure} LZIFU works by first fitting the continuum with a custom implementation of the PPXF code \citep{Capellari2004} within LZIFU and then fitting the emission lines after subtracting the continuum model. The continuum fitting uses a set of model templates that are fit to a combined blue+red channel spectrum. LZIFU also accounts for systematic errors in the models and non-stellar contributions to the continuum data by fitting a multiplicative polynomial simultaneously with the continuum models. The emission lines along with residuals from sky line subtraction are masked during the continuum fitting process. We also specified the following lines to be fit (and masked during continuum fitting) - H$\beta$, the [OIII]$\lambda\lambda$4959,5007 doublet, [OI]$\lambda$6300, [OI]$\lambda$6364, H$\alpha$, the [NII]$\lambda\lambda$6548,6583 doublet, and the [SII]$\lambda\lambda$6716,6731 doublet. The extinction corrected emission line fluxes for all the lines detected in different regions of the Taffy system (as defined in Figure \ref{fig:Fig3}) are tabulated in Table \ref{tab:flux_table}. We ran LZIFU on the entire IFU data cube for the Taffy galaxies which is $58\times58$ and $59\times59$ spaxels with 2227 and 2350 spectral elements for the blue and red channels respectively. The process of fitting the full cube was not straightforward for several reasons relating to the peculiar kinematics of the Taffy system. The LZIFU software was designed to work best with a galaxy showing slowly-varying velocity centroids relatively close to the initial guess for the velocity of the system. In the Taffy system, the velocity range of the emission lines over the whole system was large, with the emission lines sometimes exhibiting complex behavior, in addition to occasionally being observed superimposed on deep Balmer absorption lines. Furthermore, in a large number of spaxels, we found multiple velocity components which did not always move together as a function of position. As a result, a single set of initial guesses for the various starting parameters did not work for the whole cube, but had to be adjusted spatially to achieve good fits, especially in specific regions showing double line profiles. After an iterative process of fitting with different starting parameters tuned to particular regions, we were able to get a consistent set of smoothly varying results across the whole system. We provide the most relevant LZIFU parameters and their starting guesses in Table \ref{tab:lzifu_params}. \section{Results} \subsection{Emission-line gas and H$\beta$ Absorption within the System} \label{sec:double_profiles} Previous observations of the Taffy system in the visible wavelength range had reported the detection of ionized gas within the galaxies and the extragalactic HII region \citep{Bushouse1986, Bushouse1987}. 
Because of the sensitivity of the VIRUS-P instrument to very faint diffuse emission, we report the presence of ionized gas throughout the Taffy system -- both within the galaxies, the so-called extragalactic HII region, and also \emph{in the bridge between the galaxies}. We detect emission from many lines, including H$\alpha$, H$\beta$, the [OIII]$\lambda\lambda$4959,5007 doublet, the [NII]$\lambda\lambda$6548,6583 doublet, and the [SII]$\lambda\lambda$6716,6731 doublet lines (see Figures \ref{fig:lzifu_fit_hbeta_abs} and \ref{fig:lzifu_fit_vel_comp}). We also detect strong emission from the atomic oxygen line [OI]$\lambda$6300 and sometimes the weaker [OI]$\lambda$6364 line in the galaxies and the extragalactic HII region in the bridge. Although emission lines dominate in many of the locations across the system, in some cases H$\beta$ emission is observed superimposed on a broad absorption trough indicative of a post-starburst population (as seen in Figure \ref{fig:lzifu_fit_hbeta_abs}). In Figure \ref{fig:Fig3} we present some extracted spectra in several places in the system to provide an overview of the complexity of the kinematics in this system. The spectra show expanded views of the [OIII]$\lambda$5007 and H$\alpha$ lines along the major axis of both galaxies and a sampling of the bridge. As with the previous HI investigations of Taffy by \citet{Condon1993, Braine2003}, the ionized gas spectra along the major axis of Taffy-N (N1-N5) show clear rotation from low to high velocities as one proceeds northwards, whereas in Taffy-S (S1-S5), the rotation is also obvious, but in the opposite sense. This confirms the suggestion that the galaxies were counter-rotating when they collided. Many of the line profiles are complex, and contain multiple components. Of special note are the broad lines in the bridge (especially B1 and B2) as well as complex multi-component structures in the north-west of both galaxies (N4, N5, S4 and S5). The nucleus of Taffy-S (S2) also shows very broad strong [OIII] emission, and weaker broad wings in H$\alpha$ (especially when a correction is made for H$\alpha$ absorption--see \S4.2). We also show polygons marking additional regions of interest on the SDSS image in Figure \ref{fig:Fig3}. These polygons show regions which are investigated in the emission-line diagnostic diagrams in \S\ref{sec:bpt_results}. In the emission-line diagnostic diagrams, we investigate the excitation mechanisms for the western regions of Taffy-N and the bridge separately because these regions exhibit peculiar kinematics distinct from the rest of Taffy-N and the bridge (see the detailed discussion in \S\ref{sec:vel_map} and \S\ref{sec:methods_vel_comp}). \begin{figure*} \centering \includegraphics[width=\textwidth]{Fig3_revised.pdf} \caption{Some examples of integrated VIRUS-P spectra from the Taffy system expanded to emphasize the kinematics. The galaxies UGC 12915/4 referred to as Taffy-N and Taffy-S respectively for clarity in the text are shown in this SDSS i-band image. Spectra are shown extracted from the regions of the black and white colored circles. The blue and red spectra correspond to the [OIII]5007 and H$\alpha$ lines. The flux axis is in units of $10^{-18}\, \mathrm{erg\, s^{-1}\, cm^{-2}\, \AA^{-1}}$. The polygon regions refer to the regions that are color coded, using the same colors, in the later discussion of the emission-line diagnostic diagrams (with the exception of the western part of Taffy-N which is shown as green pluses in the line diagnostic diagrams). 
We denote Taffy-N, the western part of Taffy-N, Taffy-S, the eastern bridge, and the western bridge by green, magenta, blue, red, and orange colored polygons respectively. The nucleus of Taffy-S is also denoted by a small blue polygon. Our justification for selecting these regions is discussed in the text. The region shown here as B1 is centered on a faint extragalactic HII region discussed in the text. The gray dashed line denotes the systemic recessional velocity for the Taffy pair at 4350 km/s.} \label{fig:Fig3} \end{figure*} Figure \ref{fig:halpha_sdss}a shows total H$\alpha$ emission contours overlaid on a SDSS i-band image. The two galaxies and the area near the extragalactic HII region can clearly be distinguished as regions with the brightest H$\alpha$ emission, with extended emission spread between the galaxies. For Taffy-S, the brightest H$\alpha$ lies on either side of the nucleus with fainter emission from the direction of the nucleus. It is noticeable that there is no obvious ionized gas associated with the faint southern part of the outer ring in Taffy-S, although there may be some associated with the northern part of the ring between the galaxies. Taffy-N has a very different distribution of ionized gas, with a strong concentration in the inner disk, and fainter emission extending along the north-west major axis where it appears to join with bridge material. \begin{figure*} \centering \includegraphics[width=0.97\textwidth]{Fig4ab_revised.pdf} \caption{(a) Integrated H$\alpha$ emission contours from the IFU data overlaid on an SDSS i-band image. The contour levels are 30, 50, 100, 200, 300, 400, 600, 800, 1000, and 1200, in units of $10^{-18}$ ${\rm erg\, s^{-1}\, cm^{-2}}$. The lowest contour level corresponds to an approximately 2$\sigma$ detection. The red rectangle shows the IFU coverage. The H$\alpha$ emission was summed over the two velocity components in spaxels where there were double profiles. (b) The measured H$\beta$ absorption-line equivalent width (EW) in Angstroms across the Taffy system based on the ppxf fitting of the absorption lines and continuum.} \label{fig:halpha_sdss} \end{figure*} \begin{figure} \centering \plotone{Fig5_revised.pdf} \caption{Map of visual extinction, A$_\mathrm{V}$, with superimposed CO (1-0) (black) contours from \citet{Gao2003}. The A$_\mathrm{V}$ map was created as per the calculation described in \S\ref{sec:dust}. The contour levels are 35, 40, 45, 50, 65, 75, 100, 120, 140, and 160, in units of [Jy km~s$^{-1}$ beam$^{-1}$].} \label{fig:av_map} \end{figure} Figure \ref{fig:halpha_sdss}b shows a map of the equivalent width of the H$\beta$ absorption across the system. In contrast to the lack of emission lines in the southern ring of Taffy-S, strong stellar absorption is seen there, along the extreme north-westerly edge of the galaxy, and in an extended region north-east of the faint stellar ring. In Taffy-N the strongest absorption is seen at the south-eastern end of the major axis in the region outside the main body of H$\alpha$ emission, although it extends at lower equivalent width as far as the center of the galaxy. We will discuss the implications of this H$\beta$ absorption in \S\ref{sec:hbeta_ew_results}. \subsection{Dust extinction from the Balmer decrement} \label{sec:dust} We estimate the extinction caused by dust by examining the line ratio of the Balmer lines, H$\alpha$ and H$\beta$, referred to as the Balmer decrement, and assume Case B recombination. 
The color excess $\mathrm{E(B-V)}$ is given by, \begin{equation} \label{eq:color_excess} \mathrm{E(B-V) = 1.97\, log_{10}\left(\frac{(H\alpha/H\beta)_{obs}}{2.86}\right)}. \end{equation} The extinction at wavelength $\lambda$ is related to the color excess by, \begin{equation} \label{eq:extinction} \mathrm{A_{\lambda} = k(\lambda)\, E(B-V)}. \end{equation} We assume a reddening curve $\mathrm{k(\lambda)}$ of the form given by \citet{Calzetti2000}. Adopting the same method for all the observed lines allows us to correct, spaxel by spaxel, the observed line fluxes for dust extinction to arrive at intrinsic line fluxes. \begin{deluxetable*}{ c | c c c c c c c c } \tablecaption{Extinction-corrected emission-line fluxes by region [10$^{-16}$ W m$^{-2}$]. \label{tab:flux_table}} \tablehead{ Region\tablenotemark{a} & A$_\mathrm{V}$ & H$\beta$ & $\mathrm{[OIII]\lambda5007}$ & $\mathrm{[OI]\lambda6300}$ & H$\alpha$ & $\mathrm{[NII]\lambda6583}$ & $\mathrm{[SII]\lambda6716}$ & $\mathrm{[SII]\lambda6731}$ } \startdata Taffy-N & 2.29 & 3.2 $\pm$ 0.23 & 1.98 $\pm$ 0.2 & 0.59 $\pm$ 0.16 & 9.12 $\pm$ 0.16 & 4.34 $\pm$ 0.14 & 2.24 $\pm$ 0.14 & 1.35 $\pm$ 0.16 \\ Taffy-S & 0.94 & 1.67 $\pm$ 0.13 & 0.67 $\pm$ 0.11 & 0.39 $\pm$ 0.11 & 4.78 $\pm$ 0.11 & 2.70 $\pm$ 0.1 & 1.38 $\pm$ 0.1 & 1.18 $\pm$ 0.12 \\ Bridge & 1.89 & 1.42 $\pm$ 0.19 & 1.48 $\pm$ 0.16 & 0.55 $\pm$ 0.14 & 4.05 $\pm$ 0.13 & 1.89 $\pm$ 0.11 & 1.25 $\pm$ 0.11 & 0.81 $\pm$ 0.13 \\ \enddata \tablenotetext{a}{Fluxes quoted here are from the combined Taffy-N and bridge regions, i.e., including the western parts defined in Figure \ref{fig:Fig3}.} \end{deluxetable*} A map of the dust extinction at visual wavelengths, $\mathrm{A_V}$, derived from the Balmer decrement is shown in Figure \ref{fig:av_map}; superimposed on the map are CO (1-0) contours from \citet{Gao2003}, based on data from the Berkeley-Illinois-Maryland Association (BIMA) interferometer. It can be seen that the dust extinction map follows the CO (1-0) surface density map reasonably well, with a peak at the center of Taffy-N (as expected since the H$_2$ surface density is very high there). A high value of extinction is also seen at the southern tip of Taffy-S. This extends beyond the CO column density contours, but we note that the VLA maps of HI in this region by \citet{Condon1993} show a peak column density there of 1.8 $\times$ 10$^{21}$ cm$^{-2}$, which would imply an $\mathrm{A_V}$ of $\sim$1 mag integrated over an 18 $\times$ 18 arcsec$^2$ beam \citep{Guver2009}. Interestingly, we observe a low value of $\mathrm{A_V}$ at the center of Taffy-S, and along the northern edge of the spiral arm of that galaxy. The latter result is consistent with a low H$_2$ column there, although the low $\mathrm{A_V}$ in the nucleus may imply different conditions in the gas excitation (deviations from the assumed Case B recombination--perhaps due to a low-luminosity AGN) or a significantly reduced dust-to-gas ratio there. Given that dust opacity measurements depend strongly on galaxy inclination \citep[see e.g.,][]{Driver2007, Unterborn2008}, we caution that our dust extinction estimates for both galaxies should be treated as lower limits and that the actual A$_{\rm V}$ values could be much higher. For example, \citet{Gao2003} estimate that A$_{\rm V}$ could be higher than 10 mag for both galaxies.
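For reference, the per-spaxel dereddening implied by Equations \ref{eq:color_excess} and \ref{eq:extinction} can be sketched in a few lines of code. The snippet below is a minimal illustration only, not our reduction code: the \citet{Calzetti2000} curve values at the Balmer and V-band wavelengths (k(H$\alpha$)\,$\approx$\,2.53, k(H$\beta$)\,$\approx$\,3.61, k(V)\,$\approx$\,4.05) are quoted as assumptions, and the input fluxes are arbitrary example numbers rather than measurements from our maps.
\begin{verbatim}
import numpy as np

# Calzetti (2000) curve sampled at Halpha, Hbeta and V (assumed values).
K_HA, K_HB, K_V = 2.53, 3.61, 4.05

def deredden(flux_obs, k_lambda, ha_obs, hb_obs):
    """Correct one observed line flux using the Balmer decrement and the
    E(B-V) and A_lambda relations given in the text (Case B: Ha/Hb = 2.86)."""
    ebv = max(1.97 * np.log10((ha_obs / hb_obs) / 2.86), 0.0)
    a_lambda = k_lambda * ebv              # A_lambda = k(lambda) E(B-V)
    return flux_obs * 10.0**(0.4 * a_lambda), K_V * ebv  # corrected flux, A_V

# Example with arbitrary fluxes (erg/s/cm^2), not values from our cube:
ha_corr, a_v = deredden(4.0e-16, K_HA, ha_obs=4.0e-16, hb_obs=1.0e-16)
\end{verbatim}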
Our low A$_{\rm V}$ values for the Taffy galaxies could be because measuring the dust extinction from the H$\alpha$ and H$\beta$ lines involved in the Balmer decrement effectively only probes the effects of dust superficially \citep[essentially a ``skin'' effect; e.g.,][]{Calzetti2001}. It also assumes a simplistic dust geometry -- a screen of dust between the observer and the Balmer line emitting regions. Such an assumption might not be true for the complicated kinematics within the post collision Taffy system. \section{The Kinematics of the Taffy System} \label{sec:kinematics} As Figure \ref{fig:Fig3} has shown, the spectra are quite complex in the system, and so we present the kinematic results in two ways. Firstly, we present the channel maps of one of the lines (in this case H$\alpha$) to provide a large-scale view of the gas distribution as a function of radial velocity channel. Secondly, we explore the spatial distribution of the gas associated with different kinematic components, especially those associated with multiple lines. \subsection{H$\alpha$ Channel Maps} \label{sec:vel_map} Figure \ref{fig:vel_channel_map} shows the H$\alpha$ emission channel maps integrated over channels of width 70~km~s$^{-1}$. It is well known that the two galaxies are counter-rotating \citep[e.g.\ ][]{Vollmer2012}, and the brightest ionized gas in the two systems reflects this. Ionized gas in Taffy-S is seen in the lowest velocity channel (3806-3876~km~s$^{-1}$) in the north-west on the inside edge of the faint stellar ring, and progresses in a south-easterly direction with increasing velocity eventually showing a major component of emission in the south-east disk which fades away around 4780~km~s$^{-1}$. Taffy-S also has some peculiar kinematics. For example, in the NW part of the disk, faint gas emission is seen over a wide range of velocities along the northern major axis even at the highest velocities. This would not be expected for gas in normal rotation. Taffy-N is even more peculiar. The main centroid of emission from the south-east disk appears at around 4016-4086~km~s$^{-1}$ and progresses steadily towards the north-west, showing the counter-rotation. In addition, there is a peculiar region of emission which appears at even lower velocities on the north-west extreme tip of Taffy-N, and cannot be part of the normal rotation of the galaxy. Indeed, that structure appears to be part of the bridge, since as velocities increase it becomes more extended and eventually connects to the north-western region of Taffy-S. In addition to this bridge feature, a second bridge component starts to appear between the galaxies at velocities of 4000~km~s$^{-1}$, and at higher radial velocities it becomes quite strong in the region of the extragalactic HII region. The emission bridges the two galaxies where it joins with emission that potentially is associated with the faint stellar ring in the north-eastern part of Taffy-S. The connection between the galaxies disappears at velocities in excess of 4650 km~s$^{-1}$. The faint stellar ring in the northern half of Taffy-S exhibits some peculiar emission. Features that can be associated with this ring can be seen most clearly appearing from velocities around 4000~km~s$^{-1}$. Moving to higher velocities shows several clumps that appear to follow the ring from NW to SE. These clumps also appear to be surrounded by emission that blends with the emission from the bridge. 
These features hint that this ring was strongly influenced by the collision and that it marks a discernible transition from material clearly associated with Taffy-S to material clearly associated with the bridge. A similar argument could also apply to Taffy-N, although such a transition is much harder to observe there because the galaxy is highly inclined. It is clear that the velocity structure of the gas in both galaxies and the bridge is very complex, and so we will now explore the gas in terms of its spectral profiles--which allows us to more easily separate normal regular rotation in the galaxies from peculiar motions. \begin{figure*} \centering \includegraphics[width=\textwidth]{Fig6_revised.pdf} \caption{Velocity channel map showing the H$\alpha$ emission over -540 to +510 $\mathrm{km\, s^{-1}}$ with respect to the (optically defined) systemic recessional velocity of $\sim4350\ \mathrm{km\, s^{-1}}$. The contours have been overlaid on an SDSS i-band image of the Taffy galaxies. The color of the contours, going from white to blue, indicates the intensity of the H$\alpha$ emission, from low to high. The contour levels are 1, 4, 10, 20, 40, 80, 105, and 135 in units of ${\rm erg\, s^{-1}\, cm^{-2} / (70\, km\, s^{-1})}$. Note that not every contour level is visible in each panel because the range of H$\alpha$ intensities changes from panel to panel. The velocity range shown for each panel (heliocentric velocity) is $70\ \mathrm{km\, s^{-1}}$. The LZIFU emission line cube was stitched as described in \S\ref{sec:methods_vel_comp}. The scale bar in the top left-hand panel is 1 arcmin (18 kpc for D = 62 Mpc) in length.} \label{fig:vel_channel_map} \end{figure*} \subsection{Mapping Two-component Line Profiles in the System} \label{sec:methods_vel_comp} As we have shown previously, there are regions in the full data cube where the emission-line spectra show more than one component. Similar double line-profiles were noticed in the HI spectra \citep{Condon1993}, and in spectra taken in the far-IR [\ion{C}{2}] and [\ion{O}{1}] lines with {\it Herschel} \citep{Peterson2018}. Some regions of the bridge also show two components in the CO 1-0 molecular gas observations of \citet{Gao2003}. Our observations (consistent with those of \citealt{Gao2003, Peterson2018}) show that the double-line profiles in the ionized gas are not just confined to the bridge, but are also seen projected against parts of the galaxy disks, especially Taffy-N. To explore the kinematics further, we performed line fitting in two distinct passes. Firstly, we ran LZIFU, forcing it to fit only one component across the whole system. This worked well in regions where the lines were single-valued, but produced poor results in regions where the lines were double-profiled. The output from LZIFU at this stage was a model data cube built from the model fits, as well as the best-fitting continuum cube. Also included were additional data products, including integrated line maps for each of the lines fitted. Secondly, we ran LZIFU again, but this time we required it to fit two components everywhere. In this case, the fitting worked well where two components were present, but we found that it performed poorly in places where the profiles were single. In this mode, since the software was fitting two separate line profiles, the output included two sets of data products (model line cubes, integrated line and velocity field maps), one set for each component--a low- and a high-velocity component.
We were able to interrogate the model results to determine, spaxel by spaxel, the mean, standard deviation, and amplitude of each of the single- and two-component fits. This allowed us to divide the results into three classes of kinematic behavior: \vspace{0.4cm} \begin{enumerate} \item{Spectra consistent with a single Gaussian component, V$_s$.} \item{Spectra consistent with two components, V$_1$ and V$_2$, at different velocities (V$_1$ represents the lower velocity component, and V$_2$ the higher).} \item{Spectra consistent with two components with nearly the same mean velocity, but exhibiting both a narrow V$_{1n}$ and broad V$_{2b}$ component.} \end{enumerate} \vspace{0.2cm} To qualify as two components, the two Gaussian line centers, V$_1$ and V$_2$, were required to differ by at least 35 km~s$^{-1}$, or one-half of the velocity resolution at H$\alpha$ wavelengths. In practice, the lines were generally further apart than this and clearly separated. Similarly, for a second component to be considered broad, V$_{2b}$ must have a FWHM of at least 1.5$\times$ that of V$_{1n}$. In a few cases, the LZIFU modeling fit two components where a single line profile would have been more appropriate. In this small number of cases, and more generally wherever there was doubt about the classification, we inspected the profiles by eye and adopted what we considered to be the most reasonable classification. Overall, the separation between single- and two-component spectra seemed reasonable, although we realize that there may be some areas where the distinction is somewhat subjective. Less subjective methods of deciding whether to fit single or multiple components to optical IFU data have been explored by \citet{Hampton2017} using Artificial Neural Networks. These methods, which currently rely on training sets built with ``expert'' astronomer guidance, are encouraging for the future, especially since such methods can classify velocity profiles faster than a human, and with statistically similar outcomes. As IFU data become more common, and as the amount of data increases with time, such methods may eventually be needed to replace human classifications. In our data, the classification of a two-component versus a single Gaussian component was in most cases relatively unambiguous. In order to explore the relationship between regions of emission where a single component is most appropriate and regions with two components, we have created composite moment maps (intensity, mean velocity, and velocity dispersion) by {\it combining} those positions consistent with a single profile V$_s$ with those consistent with one or other of the double-profile cases. The reason we combined the single-component data separately with each of the two velocity components was to look for continuity between the single-component gas and one or other of the double lines. For example, if the single-component data mapped smoothly into the velocity field of one of the two double components, this might suggest they are really part of one single dynamical system, whereas a sudden discontinuity would suggest no such regularity. These ``merged'' single and double profile moment maps are shown in Figure \ref{fig:vel_moments}.
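Before turning to these maps, we note that the three-way bookkeeping described above can be summarized schematically as follows. This is a simplified sketch rather than the actual LZIFU post-processing: the input quantities (centroids and FWHMs from the one- and two-component runs, and a flag for whether the two-component fit was acceptable) carry hypothetical names, but the thresholds follow the criteria given above (a 35 km~s$^{-1}$ minimum separation and a FWHM ratio of 1.5).
\begin{verbatim}
DV_MIN = 35.0      # minimum centroid separation [km/s]; half the Halpha resolution
BROAD_RATIO = 1.5  # minimum FWHM ratio for a "broad" second component

def classify_spaxel(v1, v2, fwhm1, fwhm2, two_comp_ok):
    """Assign one spaxel to Class-1, Class-2 or Class-3 (see enumeration above)."""
    if not two_comp_ok:
        return "Class-1"                          # single Gaussian component V_s
    if abs(v2 - v1) >= DV_MIN:
        return "Class-2"                          # two components, V_1 and V_2
    if max(fwhm1, fwhm2) >= BROAD_RATIO * min(fwhm1, fwhm2):
        return "Class-3"                          # narrow V_1n plus broad V_2b
    return "Class-1"                              # otherwise treat as single

# Ambiguous cases were still inspected by eye, as described in the text.
\end{verbatim}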
Figures \ref{fig:vel_moments}a, b, and c represent moment maps created by combining spaxels containing components V$_s$ with V$_1$ and V$_{1n}$, whereas Figures \ref{fig:vel_moments}d, e, and f represent the combination of spaxels containing V$_s$ with V$_2$ and V$_{2b}$. To make it clear where the different kinds of profiles fall in the maps, we indicate in the Figure the regions consistent with Class-2 above (two components at different velocities) by enclosing them within the red polygon. Those regions of the maps consistent with a single line, Class-1, are shown with a red background color. Finally, those regions where a narrow and a broad component were present, Class-3, are shown with a green background color. Given that the kinematics are quite complex, we start by identifying those regions which may show regular galactic rotation. The simplest kinematics to understand are those of Taffy-S in the {\it low-velocity component} of Figure \ref{fig:vel_moments}b. Here, the increasing iso-velocity contours, going from yellow to dark blue, progress regularly in the double-line region, merging smoothly with the V$_{1n}$ (green underlying color) and single-component V$_s$ (red underlying color) contours. The velocity dispersion in the {\it low-velocity component} in Taffy-S is also low across most of its disk. Concentrating only on the {\it low-velocity component} for Taffy-S, it is clear that the velocities and dispersions shown in Figures \ref{fig:vel_moments}b and c look like those of a somewhat warped, but regularly rotating, disk. In contrast, Taffy-S is much more peculiar in the {\it high-velocity component} of Figures \ref{fig:vel_moments}e and f, where the galaxy shows only a small amount of obvious rotation, as well as exhibiting a high velocity dispersion in a large part of the disk. It also has a band of spectra classified as broad-line (Class-3 type; green underlying color) in the nuclear regions. We now turn our attention to Taffy-N, which has more complex kinematics. This galaxy is quite edge-on and may be expected to show regular rotation along its major axis. Evidence of rotation is seen in the {\it high velocity component} of Taffy-N in the southern part of the disk centered on the dense dust lane and nucleus. Figure \ref{fig:vel_moments}e shows a clear rotation signature where the velocities (starting with the pale blue contours in the south-east) increase along the major axis (dark-blue contours in the north-west of the inner disk). Although this apparent regular rotation in the {\it high velocity component} is confined to the inner parts of the Taffy-N disk, the increasing trend in velocity shows a reversal towards the north-western extended disk. This might be interpreted as a turn-over in the rotation curve there. Next we consider those parts of the velocity field that cannot be considered normal, and are most likely caused by the strongly collisional nature of the Taffy pair. We have already pointed out in the discussion of the channel maps that the north-western part of Taffy-N has a peculiar low-velocity structure which is not part of the normal rotation. This can be seen in Figure \ref{fig:vel_moments}b ({\it low velocity component}), where much of the NW disk of Taffy-N shows very little rotation, and also shows high velocity dispersion (Figure \ref{fig:vel_moments}c). Here we see that the gas extends as a finger towards the south-west, where it forms a western bridge with Taffy-S.
A second, eastern bridge structure is seen associated with the extragalactic HII region, and is strongest in the {\it high-velocity component}. The two structures are graphically emphasized in Figure \ref{fig:summary_fig}. Much of the gas between the two galaxies is seen in the eastern high-velocity bridge component, and shows a velocity gradient which extends between the two galaxies along the direction of the radio-continuum bridge discovered by \citet{Condon1993}. Some regions of the high-velocity component bridge material have a high velocity dispersion--as was noted in the spectra in Figure \ref{fig:Fig3}. We will show that much of the bridge gas in the {\it high velocity component} has excitation properties consistent with shocked gas. Several regions also show Class-3 spectra--which means that one component is broad. These regions (colored green in all the panels of Figure \ref{fig:vel_moments}) are confined to positions along the minor axis of Taffy-S, and to a small region at the north-west tip of the same galaxy. We show an example of this kind of spectrum in Figure \ref{fig:gaussfit}. Here we show how LZIFU has fit two components to the H$\alpha$ profile from the nucleus of Taffy-S after correcting for weak Balmer absorption. This appears as a region of high velocity dispersion in Figures \ref{fig:vel_moments}c and f. The H$\alpha$ line profile shown is an average over a 4 $\times$ 4 spaxel region centered on the X-ray hot-spot in Taffy-S. The two components have FWHMs of 320 $\pm$ 70 km~s$^{-1}$ and 205 $\pm$ 70 km~s$^{-1}$, with a small offset in velocity between the two. \begin{figure*} \centering \includegraphics[width=0.95\textwidth]{Fig7_revised.pdf} \caption{Contour maps of the moments of the velocity field, overlaid on an SDSS i-band image of the Taffy system. Zero velocity corresponds to an (optically defined) recession velocity of 4350 km~s$^{-1}$. The top and bottom rows correspond to the low- and high-velocity components, as shown. From left to right, the panels show the integrated flux in H$\alpha$, radial velocity (with respect to systemic velocity), and velocity dispersion, respectively. The contour levels for the integrated line flux maps are: 2000, 3000, 6000, 12000, 15000, 20000, 25000, and 30000 in units of ${\rm erg\, s^{-1}\, cm^{-2}\, km\, s^{-1}}$. The contour levels for the velocity maps are: -350, -250, -200, -150, -100, 0, 100, 150, 200, 250, and 350 in units of ${\rm km\, s^{-1}}$. The contour levels for the velocity dispersion maps are: 50, 70, 90, 130, 160, 190, and 230 in units of ${\rm km\, s^{-1}}$. The red polygon demarcates the boundary where we see two line components in the profiles of the emission lines. The red spaxels outside the double-line boundary indicate spaxels where we see only a single velocity component. The green spaxels indicate spaxels where we see two components but with significantly different widths (i.e., Class-3; we define the three line profile classes in \S\ref{sec:methods_vel_comp}).} \label{fig:vel_moments} \end{figure*} \begin{figure} \centering \plotone{Fig8_revised.pdf} \caption{The emission between the galaxies can be decomposed into two kinematically different bridges of emission, seen here in a single channel map of the H$\alpha$ emission for velocities 4156 to 4226 km~s$^{-1}$, where parts of both bridges happen to appear at the same velocity. The two separate filaments are best defined by looking at the full range of channel maps shown in Figure \ref{fig:vel_channel_map}.
The eastern bridge structure extends from the southern part of Taffy-N down through the extragalactic HII region until it eventually merges with the south-eastern disk of Taffy-S. The western bridge extends from the north-west of Taffy-N into the bridge and eventually connects with the north-western tip of Taffy-S. The eastern bridge is more closely associated with the CO emission than the western bridge, although some clumpy regions are seen in CO even in the west (black contours are from \citealt{Gao2003}; see text for more details).} \label{fig:summary_fig} \end{figure} \begin{figure} \centering \plotone{Fig9.pdf} \caption{The H$\alpha$ nuclear spectrum of Taffy-S (UGC~12914). The raw spectrum is shown (grey solid line), as well as the spectrum after correction by LZIFU for H$\alpha$ absorption (solid black line). We show the decomposition into two profiles, one with a broad lower-velocity component (blue dashed line) and a narrower, slightly higher-velocity component (red dashed line)--see text. The spectrum emphasizes the importance of correcting for the continuum (driven mainly by the fit in the blue), since in this case the Balmer absorption masked a broader component in the emission line.} \label{fig:gaussfit} \end{figure} \section{Excitation of the Ionized Gas in the Taffy System} \label{sec:bpt_results} We next consider the possible excitation mechanisms for the ionized gas within the Taffy system by constructing emission line diagnostic diagrams based on LZIFU fitting of each spaxel in the data cube \citep[sometimes called BPT or VO diagrams;][]{Baldwin1981,Veilleux1987,Kewley2001}. We construct emission line diagnostic diagrams using the [OIII]$\lambda$5007/H$\beta$, [NII]$\lambda$6583/H$\alpha$, [OI]$\lambda$6300/H$\alpha$, and [SII]$\lambda\lambda$6716,6731/H$\alpha$ ratios. We have significant detections of the [NII] and [SII] lines in most spaxels that fall on the galaxies and bridge, whereas the [OI]$\lambda$6300\AA\ line is detected in fewer spaxels because it is not as strong. We require every emission line used in the diagrams to be detected at the 3$\sigma$ level. In order to look for differences in excitation properties between the low and high velocity components where the lines are double, we show line diagnostic diagrams for each of the components separately (top panels a and b) in each of Figures \ref{fig:nii_bpt}, \ref{fig:oi_bpt}, and \ref{fig:sii_bpt} for the [NII], [OI], and [SII] diagrams, respectively. We also show the diagnostic diagrams for the total emission (sum of the two components plus those fitted by a single line) as a third panel (c) in each of the same Figures. These diagrams use classifications from \citet{Kewley2006}. In all three line diagnostic diagrams, we plot spaxels associated with different spatial regions of the Taffy system. The symbols shown in the excitation diagrams are labelled in each Figure based on the regions defined in Figure \ref{fig:Fig3}. For example, green points and pluses represent regions in Taffy-N, while blue points represent Taffy-S (the nuclear region is distinguished as blue diamonds). The Taffy bridge is shown as red crosses (east bridge), and filled orange circles (west bridge). Because the west bridge contains fewer points than its eastern counterpart, we also present the integrated line ratio for the western bridge as a single larger open orange circle.
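To make the construction of these diagrams concrete, the snippet below sketches how a single spaxel's line ratios can be computed and placed in the [NII] diagram. It is an illustrative sketch only: the variable names are hypothetical, the 3$\sigma$ cut mirrors the detection requirement described above, and the demarcation-curve coefficients (the \citet{Kewley2001} maximum-starburst line and the Kauffmann et al. 2003 empirical star-forming line, which underlie the \citet{Kewley2006} scheme) are quoted from the literature as assumptions.
\begin{verbatim}
import numpy as np

def nii_bpt_class(nii, ha, oiii, hb, e_nii, e_ha, e_oiii, e_hb):
    """Place one spaxel in the [NII] diagnostic diagram (hypothetical inputs:
    extinction-corrected line fluxes and their 1-sigma errors)."""
    # 3-sigma detection requirement on every line used in the diagram
    for flux, err in ((nii, e_nii), (ha, e_ha), (oiii, e_oiii), (hb, e_hb)):
        if flux < 3.0 * err:
            return "undetected"

    x = np.log10(nii / ha)       # log([NII]6583 / Halpha)
    y = np.log10(oiii / hb)      # log([OIII]5007 / Hbeta)

    # Demarcation curves (coefficients assumed from the literature):
    kewley01 = 0.61 / (x - 0.47) + 1.19 if x < 0.47 else -np.inf
    kauffmann03 = 0.61 / (x - 0.05) + 1.30 if x < 0.05 else -np.inf

    if y <= kauffmann03:
        return "HII"
    if y <= kewley01:
        return "HII+AGN composite"
    return "AGN/LINER"
\end{verbatim}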
The line diagnostic diagrams show that the low and high velocity components often behave differently, indicating a difference in their respective excitation mechanisms. For the two galaxies (blue and green symbols), all three sets of line diagnostic diagrams show spaxels mainly distributed within the HII, HII+AGN-composite or LINER part of the diagnostic diagram, with little hint of any pure AGN component. Much recent work has shown that excitation by shocks can resemble excitation by an AGN. \citet{Rich2011, Rich2014, Rich2015} showed that composite line ratios, i.e., HII + AGN, can be due to HII + shocks. This is particularly true for merging galaxies; for example, \citet{Rich2014} showed that merging U/LIRGs can present ``composite'' optical spectra in the absence of any AGN contribution, with an increasing contribution from shocks as the merger progresses from early to late stages. We will argue below that much of the LINER emission, with the possible exception of the nucleus of Taffy-S, is likely the result of fast shocks exciting the ionized gas in the bridge and in significant parts of the galaxy disks. We will first concentrate on describing excitation in Taffy-N. We decided to split Taffy-N into two parts, one part covering the main disk of the galaxy and the other covering the western extension, which appears to connect with the western bridge. The emission from the main disk of Taffy-N (green points) is largely consistent with emission from HII regions. This is true in all of the diagnostic diagrams in the low, high, and summed components. The western extension of Taffy-N shows a mix of HII and LINER emission. This is especially evident in Figure \ref{fig:oi_bpt}, where the western extension of Taffy-N falls clearly in the LINER area in the low-velocity component, which we have previously noted from the channel maps may be associated with the western bridge. Next we consider Taffy-S (blue points and diamonds). There appears to be a strong mix of HII-region and LINER/composite excitation for Taffy-S in all diagnostic diagrams. Although the nucleus of Taffy-S does not show evidence for a powerful AGN in the gas excitation diagrams, this does not necessarily preclude a low-luminosity active nucleus being present--especially given the broader line widths discussed previously. Taffy-S's nucleus (blue diamonds on Figures \ref{fig:nii_bpt}, \ref{fig:oi_bpt}, \ref{fig:sii_bpt}) does show some evidence of being a LINER, especially in the low-velocity regime. We find that most of the spaxels located on the nucleus of Taffy-S in the low-velocity (V$_1$) line diagnostic diagrams are in the LINER area in the [OI] and [SII] diagnostic diagrams, and close to the AGN line in the [NII] diagram for both the low and high velocity components. However, as previously noted, the velocity difference between the two components is small ($<20$ km~s$^{-1}$), with the main difference being in the width of the lines, the V$_1$ component having a broader width than the V$_2$ component (see Figure \ref{fig:gaussfit}). This is consistent with Chandra X-ray observations \citep{Appleton2015}, which showed the possible existence of a low-luminosity AGN based on the X-ray hardness ratio. We divide the bridge into two parts (east and west) as discussed previously. The eastern bridge (red crosses) shows clear evidence of being HII-region excited in the low-velocity component.
The situation is quite different for the {\it high-velocity V$_2$ component} (`b' panels) in Figures \ref{fig:nii_bpt}b and \ref{fig:oi_bpt}b. Here, we observe sets of east-bridge spaxels that deviate strongly from the HII area locus. For example, at roughly constant [OIII]$\lambda$5007/H$\beta$ ratio, the east-bridge points are spread out along the [NII]/H$\alpha$ and [OI]/H$\alpha$ axes, extending strongly into the LINER area. The western bridge (orange filled circles) shows a mix of HII-region and composite/LINER behaviour in all diagrams. This is quite similar to what we find for the western extension of Taffy-N, indicating that they might be excited by the same processes. Because there are so few points from the western bridge, we also plot an average of the entire western bridge as an orange open circle, which is only shown in the `all'-components panel. This falls in the composite/LINER area for the [NII] and [OI] diagrams but in the HII area for the [SII] diagram. \subsection{Evidence for shocked gas in the Taffy system} \label{sec:shocks_discussion} We first explore the possibility that the gas is excited by shocks. We over-plot predicted line ratios from shocks on Figures \ref{fig:nii_bpt}, \ref{fig:oi_bpt}, and \ref{fig:sii_bpt}, taken from the MAPPINGS III library of models \citep{Allen2008}. The model line ratios are plotted for different shock velocities as solid colored lines. Based on the models of \citet{Vollmer2012}, we assume that much of the gas in the bridge and throughout the galaxies is close to solar metallicity, since the gas was either stripped from the galaxy disks or is being excited {\it in situ} within them. For the shock models we therefore assume solar metallicity. The other parameters of the models include the pre-shock gas densities in the range $\mathrm{0.1 < n\, [cm^{-3}] < 1000}$ (stepping by a factor of 10 each time), and an assumed constant magnetic field of B=5 $\mu$G (this is close to the equipartition magnetic field strength of 8 $\mu$G measured by \citealt{Condon1993} through radio continuum measurements). We have plotted only the line ratios for the shock itself, excluding the precursor component of the shock. For the moderate shock velocities that seem compatible with the excitation observed in the Taffy system, it may be reasonable to ignore the effect of a strong ionizing shock-precursor, as we shall discuss later. For the east-bridge points (red crosses) in the high-velocity component (`b' panels), it is clear that they fall relatively neatly between the solid lines for shock velocities of 175 and 200 km~s$^{-1}$ (green and orange lines). For the west-bridge points, in the cases where they deviate from the HII-region area (e.g., the low-velocity component in all diagrams), they are consistent with the same shock velocities. {\it Thus the high-velocity east bridge component and the low-velocity west bridge component seem consistent with shock excitation for all the line diagnostic diagrams.} The situation is mixed for the disks of the galaxies themselves. As Figures \ref{fig:nii_bpt}b, \ref{fig:oi_bpt}b and \ref{fig:sii_bpt}b show, there are some points within the disks of both galaxies which fall in the composite region of the diagnostic diagram. Some of these points would be consistent with a mixture of HII-region and shocked-gas excitation. The nucleus of Taffy-S is an ambiguous case, because it could be excited by shocks in a mild outflow, or the gas could be excited by UV emission from a weak low-luminosity AGN.
The spreading of the points along lines of constant [OIII]$\lambda$5007/H$\beta$ ratio in the high-velocity component in the bridge has a number of possible interpretations if we assume that shocks are involved. Firstly, the spread might imply that the shocks are occurring in an ensemble of gas clouds with different pre-shock densities. Such a picture is consistent with our previous observations of the Taffy bridge \citep{Peterson2012, Peterson2018}, in which we have observed gas in many different excited phases, from HI \citep{Condon1993} to warm molecular gas seen with the {\it Spitzer} IRS, along with the detection of [CI] and [CII] emission \citep{Peterson2018} and boosted values of the [CII]/FIR and [CII]/PAH ratios. The existence of a highly multi-phase (and multi-density) medium is also very consistent with the detection of soft X-ray emission from the bridge. Thus it might be expected that shocks moving through such a multi-phase gas would encounter a range of pre-shock densities--which would spread the points along lines of constant shock velocity--as observed especially in Figures \ref{fig:nii_bpt} and \ref{fig:oi_bpt}. An alternative explanation might be that some of the gas is of lower metallicity. As the models of \citet{Allen2008} show, reducing the metallicity of the shocked gas moves the points in the diagnostic diagrams to the left at roughly constant values of the [OIII]$\lambda$5007/H$\beta$ ratio. However, if the collisional models of \citet{Vollmer2012} are correct, the gas in the bridge should have come from many different places within the original pre-collisional disks, and metallicity variations as large as a factor of 100 within the bridge seem unlikely. We conclude that it is much more likely that we are observing shocks within the bridge and parts of the galaxy disks which encounter clumps of material at different densities. What is the effect of ignoring the possible influence of a hot shock precursor in the models? This is an effect where, in high-velocity shocks, the shocked gas is so strongly heated that its UV radiation ionizes large amounts of pre-shock gas upstream of the shock front. In Figure \ref{fig:Fig13} we show, for the [NII] line diagnostic diagram, the effect of including both the shock and the shock precursor \citep{Allen2008}. As can be seen by comparison with Figure \ref{fig:nii_bpt}, the behaviour when we include the shock precursor is, for velocities $<300$ km~s$^{-1}$, very similar to the case with no shock precursor, which fits our data well. At shock velocities $>300$ km~s$^{-1}$ the models including the shock precursor diverge significantly from these data. Similar behaviour is noted in the other diagnostic diagrams (not shown). This implies that the shock models between 100 and 300 km~s$^{-1}$ fit the bridge data well regardless of whether the precursor is included. We note, however, that in dense gas (e.g.\ the molecular gas known to also be present in the bridge) the velocity at which a shock precursor may become important will be much lower. Therefore, future modeling of the molecular shocks may have to take precursor activity into account. \begin{figure*} \centering \includegraphics[width=0.49\textwidth]{Fig10a.pdf} \includegraphics[width=0.49\textwidth]{Fig10b.pdf} \includegraphics[width=0.49\textwidth]{emissionline_diagnostic_legend.pdf} \includegraphics[width=0.49\textwidth]{Fig10c.pdf} \caption{[NII]$\lambda$6583\AA\ line diagnostic diagrams. (a): for the low velocity component, (b): for the high velocity component, and (c): total, i.e.
sum of both velocity components and also including line ratios from spaxels which show a single component. Each point here represents the line ratios from a single spaxel. The red crosses, green points, and blue points correspond to the eastern bridge, Taffy-N, and Taffy-S, respectively. The blue diamonds are spaxels that fall within the nuclear region of Taffy-S. The green pluses are spaxels from the western part of Taffy-N. The orange circles are spaxels from the western part of the bridge. The unfilled orange circle is the average of line ratios from all the spaxels within the western bridge. These colors are consistent with those used to denote the corresponding regions in Figures \ref{fig:Fig3} and \ref{fig:summary_fig}, with the exception of the western part of Taffy-N which is shown as a magenta polygon in Figure \ref{fig:Fig3}. The panels also show line ratios from the MAPPINGS III shock models \citep{Allen2008} overlaid on the measured line ratios i.e. colored solid lines. The parameters assumed in the models are Z=Z$_\odot$ and B=5\,$\mu$G. Along each shock velocity line the points are marked by increasing number density. The classifications are from \citet{Kewley2006}. The shaded gray area around the solid black line classifying the HII-region excited gas marks the area we used to put a lower limit on the fraction of gas excited by star formation (see \S\ref{sec:frac_from_sf}). The width of the shaded area (above the HII classification line) is twice the size of the average error bar in each panel.} \label{fig:nii_bpt} \end{figure*} \begin{figure*} \centering \includegraphics[width=0.49\textwidth]{Fig11a.pdf} \includegraphics[width=0.49\textwidth]{Fig11b.pdf} \includegraphics[width=0.49\textwidth]{emissionline_diagnostic_legend.pdf} \includegraphics[width=0.49\textwidth]{Fig11c.pdf} \caption{Same as Figure \ref{fig:nii_bpt} but using the [OI]$\lambda$6300\AA\ line. This Figure contains fewer points than the [NII] and [SII] line diagnostic diagrams due to the [OI] line being much weaker than the [NII] and [SII] lines and therefore being undetected in many spaxels.} \label{fig:oi_bpt} \end{figure*} \begin{figure*} \centering \includegraphics[width=0.49\textwidth]{Fig12a.pdf} \includegraphics[width=0.49\textwidth]{Fig12b.pdf} \includegraphics[width=0.49\textwidth]{emissionline_diagnostic_legend.pdf} \includegraphics[width=0.49\textwidth]{Fig12c.pdf} \caption{Same as Figure \ref{fig:nii_bpt} but using the sum of the [SII]$\lambda$$\lambda$6716,6731 lines.} \label{fig:sii_bpt} \end{figure*} \subsection{Alternatives to Shock Excitation: Diffuse Lyman Continuum emission leaking from HII Regions?} In a recent paper by \citet{Weilbacher2018}, it was noted that the Antennae galaxies (NGC 4038/39), like the Taffy system, also exhibit significant diffuse ionized gas emission. Although several mechanisms were put forward to explain the emission, including the possibility of shocks, the authors favor an interpretation that much of the diffuse gas is ionized by Lyman-continuum photons (hereafter Ly-C) "leaking" from large numbers of HII regions found primarily in the disks of both galaxies, as well as HII regions found within the "overlap region". Much (but not all) of the diffuse component was found close to massive star formation complexes in the system. 
Using multi-color HST imaging of the clusters, the authors were able to compare the luminosity of the H$\alpha$ emission closely associated with each cluster with theoretical models of the Ly-C flux from the clusters, to determine whether the clusters were ``leaking'' Ly-C photons into the surrounding gas. It was found that many of the HII regions had non-zero escape fractions of Ly-C UV radiation, especially in the center of NGC 4038 and also in the ``overlap'' region. Thus, for the Antennae system, the excess diffuse emission could be explained as gas excited by UV radiation escaping from the clusters. Could some, or all, of the extended ionized gas emission we see in the Taffy system come from similarly escaping UV radiation from HII regions in the disks of the Taffy galaxies, which might diffuse outwards and ionize large parts of the Taffy bridge? We estimate (Table \ref{tab:flux_table}) that the amount of H$\alpha$ emission coming from the bridge, after correcting for extinction, is similar to that from Taffy-S and roughly 50$\%$ of the emission from Taffy-N. In this case the majority of the escaping photons would have to come from the galaxies themselves, since the star formation rate in the Taffy bridge is very low. Unfortunately, unlike the case of the Antennae, we do not have multi-color, high-resolution images of the individual star clusters, which means that it is difficult to perform the same kind of test that \citet{Weilbacher2018} applied to all individual HII regions. As a result, we cannot completely rule out a significant contribution to the ionized medium in the Taffy coming from leaky HII regions. Nevertheless, many other lines of evidence already suggest that shocks must be present in the Taffy bridge, and so we prefer the shock explanation for the excitation of the high-velocity component, rather than leaky HII regions. Future observations will be needed to model the history of star formation in the clusters in the galaxies, which will allow us to estimate the fraction of UV emission which may escape into the surrounding gas. This is beyond the scope of the current paper. \section{Ionized Gas Fractions, Star Formation Rates, and Mass in the Ionized Component} \label{sec:frac_from_sf} Here we describe the method that we employed to estimate a lower limit to the fraction of ionized gas excited by star formation (as opposed to being shock excited), using the [NII] line diagnostic diagram, since it contains the largest number of points. We start by defining an effective HII-region excitation area. This is shown as a shaded gray region in the [NII] line diagnostic diagrams of Figure \ref{fig:nii_bpt}. We defined this area simply by ``padding'' the HII classification line \citep{Kewley2006} by twice the size of the average error in the y-direction (the y-error being the larger error). We sum up the H$\alpha$ flux in each spaxel that falls within this region. This flux is divided by the total H$\alpha$ flux to arrive at the lower limit for the fraction of ionized gas excited by star formation. This process is repeated independently for each velocity component. The fraction derived this way is a lower limit because the other spaxels, outside of the shaded area (i.e., in the HII+AGN area), will contain emission from gas excited by star formation, and we cannot accurately disentangle the contributions from shocks and star formation (see also \S\ref{sec:shocks_discussion}).
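A minimal sketch of this bookkeeping, applied to one velocity component, is given below. The inputs (per-spaxel line ratios, their mean y-error, and extinction-corrected H$\alpha$ fluxes) are hypothetical array names, and the star-forming demarcation curve is written with coefficients assumed from the literature, as in the shaded band of Figure \ref{fig:nii_bpt}.
\begin{verbatim}
import numpy as np

def sf_fraction_lower_limit(x, y, ha_flux, sigma_y_mean):
    """Lower limit to the fraction of Halpha excited by star formation.
    x = log([NII]6583/Halpha), y = log([OIII]5007/Hbeta) per spaxel;
    ha_flux = extinction-corrected Halpha flux per spaxel (hypothetical inputs)."""
    # Star-forming demarcation in the [NII] diagram (assumed coefficients),
    # padded upward by twice the mean y-error, as in the shaded band.
    hii_line = np.full_like(x, -np.inf)
    left = x < 0.05
    hii_line[left] = 0.61 / (x[left] - 0.05) + 1.30
    in_hii_band = y <= hii_line + 2.0 * sigma_y_mean
    return np.nansum(ha_flux[in_hii_band]) / np.nansum(ha_flux)

# Applied independently to the low- and high-velocity components.
\end{verbatim}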
The lower limits for the fraction of ionized gas excited by star formation that we derive are 64\% and 46\% for the lower and higher velocity components, respectively. For the purposes of the calculations of the star formation rate (in \S\ref{sec:sfr}) and the ionized gas mass (in \S\ref{sec:ion_mass}), we estimate the H$\alpha$ luminosity coming only from star formation as $\mathrm{L(H\alpha)_{SF} = 0.64\,L(H\alpha)_{low} + 0.46\,L(H\alpha)_{high}}$, where $\mathrm{L(H\alpha)_{low}}$ and $\mathrm{L(H\alpha)_{high}}$ are the extinction-corrected total H$\alpha$ luminosities in the low and high velocity components, respectively. This gives an extinction-corrected value of $\mathrm{L(H\alpha)_{SF} = (4.99 \pm 0.54) \times 10^{41}\, erg\, s^{-1}}$ for the lower limit to the H$\alpha$ luminosity resulting from star formation in the Taffy system. \subsection{Star formation rate estimate from H$\alpha$ luminosity} \label{sec:sfr} Using the following relation from \citet{Kennicutt1998}, we estimate the star formation rate (SFR) in the entire Taffy system and in the bridge region (using the entire region defined as the bridge in Figure \ref{fig:Fig3}), including H$\alpha$ emission from the extragalactic HII region. \begin{equation} \mathrm{\psi\, [M_\odot\, yr^{-1}] = 7.9 \times 10^{-42} \, L(H\alpha)\, [erg\, s^{-1}]} \end{equation} We obtain $\mathrm{L(H\alpha)_{SF} = (4.99 \pm 0.54) \times 10^{41}\, erg\, s^{-1}}$ and $\mathrm{L(H\alpha)_{SF;bridge} = (1.02 \pm 0.14) \times 10^{41}\, erg\, s^{-1}}$ for the extinction-corrected H$\alpha$ luminosity coming from star formation in the entire Taffy system and in the bridge, respectively, using the method described above. This translates to SFRs of 3.94 $\pm$ 1.0 M$_\odot$ yr$^{-1}$ and 0.81 $\pm$ 0.22 M$_\odot$ yr$^{-1}$ for the entire Taffy system and the bridge, respectively. These SFRs agree well with our previous estimates derived from UV--FIR SED fitting \citep{Appleton2015} of 3.65~$\pm$~0.03 M$_\odot$ yr$^{-1}$ and 0.69~$\pm$~0.06 M$_\odot$ yr$^{-1}$ for the total system and the bridge, respectively. Interestingly, recent observations with the Atacama Large Millimeter Array (ALMA) show dense filaments of molecular gas in the Taffy bridge with little star formation in them. These ALMA observations and the overall star formation properties will be discussed in a future paper (Appleton et al. in preparation). \subsection{Ionized gas mass} \label{sec:ion_mass} The mass of ionized gas in the Taffy system and in the bridge, assuming Case B recombination \citep{Macchetto1996,Kulkarni2014}, is given by \begin{equation} M_{ion} = 2.33 \times 10^{3} \left( \frac{L_{\mathrm{H}\alpha}}{10^{39}} \right) \left( \frac{10^{3}}{n_e} \right) \mathrm{M_{\odot}} \end{equation} where L$_{\mathrm{H}\alpha}$ is the extinction-corrected H$\alpha$ luminosity in units of erg s$^{-1}$ and n$_e$ is the electron density in cm$^{-3}$. For the bridge, from the lower limits to the ionized gas fractions from star formation (see above), we obtained $\mathrm{L(H\alpha)_{SF;bridge} = (1.02 \pm 0.14) \times 10^{41}\, erg\, s^{-1}}$. We also adopt n$_e$ = 200 cm$^{-3}$, based on the ratio of the [SII] lines. This then gives M$_{ion}$ = (1.19 $\pm$ 0.22) $\times$ 10$^6$ M$_{\odot}$. This calculation is uncertain because the H$\alpha$ emission originates from two different processes -- HII regions and, very likely, shocks. However, it does show that the ionized gas mass is an insignificant fraction ($\sim$0.2\%) of the total mass of gas in the bridge ($\sim$7 $\times$ 10$^9$ M$_{\odot}$, made up of a mix of HI and H$_2$).
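As a simple arithmetic check on the SFR and ionized-mass values quoted above, the following minimal Python sketch evaluates the \citet{Kennicutt1998} relation and the Case B mass formula with the adopted luminosities and electron density. Variable names are our own and uncertainties are omitted.
\begin{verbatim}
def sfr_kennicutt(L_halpha):
    # SFR in Msun/yr from the extinction-corrected Halpha luminosity (erg/s),
    # using the Kennicutt (1998) calibration quoted above.
    return 7.9e-42 * L_halpha

def ionized_gas_mass(L_halpha, n_e):
    # Ionized gas mass in Msun for Case B recombination, with L_halpha in
    # erg/s and the electron density n_e in cm^-3.
    return 2.33e3 * (L_halpha / 1e39) * (1e3 / n_e)

L_sf_total  = 4.99e41   # star-formation component, whole Taffy system (erg/s)
L_sf_bridge = 1.02e41   # star-formation component, bridge only (erg/s)
n_e         = 200.0     # electron density from the [SII] doublet (cm^-3)

print(sfr_kennicutt(L_sf_total))          # ~3.9 Msun/yr
print(sfr_kennicutt(L_sf_bridge))         # ~0.8 Msun/yr
print(ionized_gas_mass(L_sf_bridge, n_e)) # ~1.2e6 Msun
\end{verbatim}
The printed values reproduce, to within rounding, the SFRs and the bridge ionized gas mass quoted above.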
Such a small ionized gas fraction is in agreement with the very low fraction of ionized gas responsible for exciting the [\ion{C}{2}]157.7$\mu$m far-IR cooling line in the bridge \citep{Peterson2018}, determined from the upper limit on the detection of [\ion{N}{2}]206$\mu$m in the bridge. For the Taffy system as a whole, using the extinction-corrected H$\alpha$ luminosity coming from star formation, $\mathrm{L(H\alpha)_{SF} = (4.99 \pm 0.54) \times 10^{41}\, erg\, s^{-1}}$, we get M$_{ion}$ = (5.8 $\pm$ 1.0) $\times$ 10$^6$ M$_{\odot}$ for the mass of ionized gas in the entire Taffy system. Again, this is an insignificant fraction ($\sim$0.8\%) of the total gas mass in the Taffy system. \subsection{Post-starburst populations} \label{sec:hbeta_ew_results} We detect H$\beta$ absorption lines within many spaxels across the galaxies. The spectra of post-starburst galaxies are known to contain strong Balmer absorption lines because their stellar populations are dominated by A-type stars. Evidence of a post-starburst population is not uncommon in merging galaxies \citep[e.g.][]{Zabludoff1996, Yang2004, Yang2008}, but attempts to measure the age of the stellar population are difficult, especially when only H$\beta$ is observed \citep[see e.g.][]{Worthey1997}. Since our spectral coverage did not include other post-starburst indices, we can only provide preliminary results here. Further observations using the full UV--optical SED, better absorption line indices, and detailed modeling \citep[e.g.][]{French2016} will be needed to obtain a better estimate of the age of the population responsible for the H$\beta$ absorption. We measured the EW of the H$\beta$ absorption line for each spaxel that contained either the galaxies or the bridge. The EW was measured using \begin{equation} \mathrm{W(H\beta)~[\text{\AA}]} = \frac{\int_{line} f_\lambda d\lambda}{\left<f_{\lambda;cont}\right>} \end{equation} where the integral is done over the continuum-subtracted absorption line fit and $\left<f_{\lambda;cont}\right>$ is the average continuum value measured on either side of the H$\beta$ line. LZIFU provides as output the fits to the stellar continuum and the nebular emission lines separately. We use the continuum fit cube to refit a Gaussian absorption line to the region centered on the H$\beta$ absorption. The parameters from this fit give the area within the absorption line, and the average continuum is measured in bands of $\sim$10 spectral elements on both sides of the line. Figure \ref{fig:halpha_sdss}b shows the measured EW map for the Taffy system. Relatively deep H$\beta$ absorption (W(H$\beta$) $>$ 10\,\AA) occurs where there is little H$\alpha$ emission. Two regions of high W(H$\beta$) lie in the northern and southern parts of the faint stellar ring that surrounds Taffy-S. Another region, in the south-eastern part of Taffy-N, does extend into regions where there is some star formation and older stellar populations are probably present, with the deepest absorption lying outside of the main star formation disk. Values of 10 $<$ W(H$\beta$)~[\text{\AA}] $<$ 20 generally imply stellar evolutionary ages for the post-starburst populations of several hundred Myr, which would suggest that there is no connection between the post-starburst population and the current collision between the two galaxies \citep[e.g.][]{Worthey1997}.
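For reference, the W(H$\beta$) measurement described above reduces to the short Python sketch below, in which the area of the Gaussian refit to the continuum-subtracted absorption profile is divided by the mean continuum level in the flanking bands. Function and variable names are hypothetical and the details of the fit itself are omitted.
\begin{verbatim}
import numpy as np

def ew_hbeta(amp, sigma, cont_blue, cont_red):
    # Equivalent width of the Hbeta absorption line in Angstrom.
    #   amp, sigma : best-fit Gaussian amplitude (continuum-subtracted,
    #                negative for absorption) and width in Angstrom,
    #                refitted to the LZIFU stellar-continuum spectrum.
    #   cont_blue, cont_red : continuum flux densities in bands of ~10
    #                spectral elements on either side of the line.
    line_area = abs(amp) * sigma * np.sqrt(2.0 * np.pi)  # Gaussian integral
    mean_cont = np.mean(np.concatenate([cont_blue, cont_red]))
    return line_area / mean_cont
\end{verbatim}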
Dynamical arguments suggest that the collision between the two Taffy galaxies is quite recent \citep[approximately 25--30 Myr;][]{Vollmer2012}, so the fact that the outer ring of Taffy-S shows an old population would present an apparent problem \emph{if} the stellar population of the ring were created in the collision. However, one solution might be that the stellar ring is a density wave propagating through a much older pre-collisional population which had undergone star formation long in the past. This is also the conclusion reached by \citet{Jarrett1999}, who argue that the ring consists of stars from the old pre-collision disk population. The fact that both galaxies contain evidence of post-starburst activity might imply that the galaxies underwent a high-speed encounter in the more distant past which triggered star formation but did not lead to an immediate merger. If that is the case, then we are probably currently witnessing the second (and probably final) collision before full merger. Broader wavelength coverage in the blue, to detect more absorption lines and better characterize the age of the post-starburst population, will be needed to test this idea. \begin{figure*} \centering \includegraphics[width=0.49\textwidth]{Fig13a.pdf} \includegraphics[width=0.49\textwidth]{Fig13b.pdf} \caption{The effect of including only the shocks, compared to including shocks together with the shock precursor, from the models of \citet{Allen2008}, for the [NII] diagnostic diagram. The symbols (and their colors) and shock model parameters are the same as in the line diagnostic diagrams of Figures \ref{fig:nii_bpt}, \ref{fig:oi_bpt}, and \ref{fig:sii_bpt}. The included data points are the same as those on the ``All Velocity Components'' diagram of Figure \ref{fig:nii_bpt}.} \label{fig:Fig13} \end{figure*} \section{Conclusions} \label{sec:conclusions} Using visible IFU data for the Taffy system, from the VIRUS-P instrument on the 2.7\,m telescope at McDonald Observatory, we have obtained the results summarized below. \begin{itemize} \item{We detect widespread ionized gas within the disks of the Taffy galaxies and the bridge which exhibits very disturbed kinematics, including many regions with double line profiles and emission regions that do not follow regular rotation. Although both galaxies show velocity components that approximate gas rotation around their centers, both galaxies also show peculiar motions, often associated with gas which extends into the bridge between them. The gas associated with Taffy-N (UGC~12915) contains a major kinematic component that does not take part in regular rotation, but exhibits a high velocity dispersion and forms a narrow western bridge to Taffy-S (UGC~12914). Taffy-S, although showing the kinematics of a likely tidally-warped but regular (counter-)rotating disk, also contains a peculiar kinematic component that is associated with a second ionized gas bridge. This eastern bridge component, which extends from Taffy-N through the region containing the extragalactic HII region and eventually connects with Taffy-S, is more closely associated with the molecular bridge previously seen to extend between the galaxies. On the other hand, the western bridge, which is much fainter, appears to be kinematically linked with the western extension of Taffy-N (and the western part of Taffy-S) and shows a mix of HII-region and composite/LINER excitation, unlike the main disk of Taffy-N which is largely consistent with HII-region excitation.
} \item{An analysis of the excitation of the ionized gas through diagnostic line ratios shows that a significant fraction of the emission exhibits a mix of HII-region and LINER-type excitation, especially in the areas where we can clearly distinguish two velocity components. The LINER-type emission is especially dominant in the high velocity component in the east-bridge region, but also over significant portions of both galaxies. We observe emission line ratios in the high-velocity component of the east bridge, and in the low-velocity component of the west bridge, that are consistent with the gas being excited by shocks with velocities of $\sim$175--200 km~s$^{-1}$ and a range of pre-shock densities. Such evidence for shocks permeating clouds of varying density is consistent with previous observations of the bridge by {\it Spitzer} and {\it Herschel}, where strong mid- and far-IR cooling lines are detected from warm molecular and diffuse atomic gas heated by turbulence. While we cannot rule out a contribution to the diffuse ionized emission from Lyman-continuum photons leaking from young HII regions embedded in the disks of both galaxies, the weight of evidence from previous multi-wavelength observations of the bridge suggests that shocks are a more likely explanation for the LINER-type emission line ratios seen in the bridge and in parts of the disks of both galaxies. Given the violence of the collision, and previous numerical models of head-on collisions which suggest that the bridge is still in a highly disturbed state, such shocks are not unexpected. } \item{Strong Balmer absorption lines ($10 < \mathrm{W(H\beta)}~[\text{\AA}] < 15$) are observed in parts of the ring associated with Taffy-S (UGC~12914), as well as the south-east portion of the edge-on disk of Taffy-N (UGC~12915). The absorption lines are strongest in regions where the ionized and molecular gas distributions are weak, suggesting that parts of the Taffy system experienced a burst of star formation in the past, perhaps during a previous close passage of the two galaxies. If so, the current encounter may be a second, more dissipative collision that will likely lead to a merger in the near future. Further observations, with broader blue wavelength coverage than the current observations, will be necessary to better determine the age of the post-starburst populations in both galaxies. } \item{Although X-ray properties have provided only weak evidence in the past for the nucleus of Taffy-S (UGC~12914) containing a low-luminosity AGN \citep{Appleton2015}, we detect line widths in the nucleus as large as 320 km~s$^{-1}$. These line widths are typical of narrow-line Seyfert galaxies. The broad lines appear to extend over 6--10 arcsec along the minor axis. The excitation properties of the nuclear gas are consistent with LINER emission, but without higher spatial and velocity resolution data we cannot determine whether the Taffy-S nucleus hosts a weak, shocked, highly confined outflow from a nuclear starburst, or is excited by UV radiation from an LLAGN. } \item{We provide evidence, supporting much previous work, that the Taffy system has atypically low SFRs for a system that has recently undergone a major merger. We find SFRs of 3.94 $\pm$ 1.0 M$_\odot$ yr$^{-1}$ and 0.81 $\pm$ 0.22 M$_\odot$ yr$^{-1}$ in the entire Taffy system and the bridge (including the extragalactic HII region), respectively.
Low star formation rates in this recent post-collisional remnant may result from the highly disturbed nature of the gas in the galaxies and the bridge.} \end{itemize} \acknowledgments BAJ would like to thank the Visiting Graduate Fellowship Program at IPAC/Caltech for six months of support towards the work performed for this paper. BAJ is also grateful to Drs.\ Rogier Windhorst and Rolf Jansen for helpful discussions on work done in this paper. The authors thank the anonymous referee for their helpful review and suggestions. This research has made use of NASA's Astrophysics Data System. This research has made use of the Python programming language along with the Numpy, Scipy, and Matplotlib packages. This research has also made use of Astropy, a community-developed core Python package for Astronomy \citep{astropy2018}.
{ "redpajama_set_name": "RedPajamaArXiv" }
3,325
**Scottish Independence**

_Weighing Up the Economics_

**Gavin McCrone** has studied, written and lectured about the Scottish economy over a period of many years. He was a Fellow and Tutor in economics at Brasenose College, Oxford, in the 1960s. He then spent two decades as Chief Economic Adviser to successive Secretaries of State for Scotland. He was successively head of two Scottish Government Departments – the Industry Department for Scotland and the Scottish Development Department. He returned to his previous career in the 1990s as a professor of economics, first at Glasgow University and then at the Edinburgh University Business School. He is a Fellow of the Royal Society of Edinburgh and was a Vice President of the Society from 2002 to 2005.

This eBook edition published in 2013 by
Birlinn Limited
West Newington House
Newington Road
Edinburgh EH9 1QS
_www.birlinn.co.uk_

Copyright © Gavin McCrone 2013
Foreword © Magnus Linklater 2013

The moral right of Gavin McCrone to be identified as the author of this work has been asserted by him in accordance with the Copyright, Designs and Patents Act 1988.

All rights reserved. No part of this publication may be reproduced, stored or transmitted in any form without the express written permission of the publisher.

ISBN: 978-1-78027-159-0
eBook ISBN: 978-0-85790-668-7

British Library Cataloguing-in-Publication Data
A catalogue record for this book is available from the British Library

To my family, for whom Scotland's future is important

#### **Contents**

_Foreword by Magnus Linklater_
_Preface_
1 How Well Off Are We?
2 Devo-Max, Devo-Plus and the Status Quo
3 The Scope for an Independent Economic Policy
4 Scotland and Europe
5 Could an Independent Scotland Have Handled the Failure of the Banks?
6 Scotland's Energy Future
7 North Sea Oil – the Mishandling of an Opportunity
8 Welfare and Inequality
9 Conclusion
_Notes_
_Index_

#### **Foreword**

As the date for a referendum on Scottish independence grows closer, arguments for and against what would amount to the greatest constitutional change in Britain for more than 300 years have grown intense. A sense of something approaching national anxiety can be discerned as Scots of every persuasion seek answers to the fundamental questions that will govern the outcome before they go to the polls in September 2014. Would an independent Scotland be worse or better off? More pertinently, perhaps, would it have the capacity to flourish? Or would the ending of the Union expose the country to growing hardship at a time of economic uncertainty? Professor Gavin McCrone brings more than 40 years' experience to bear on these crucial issues. As an academic and, for many years, a senior civil servant, he has been at the heart of economic planning in Scotland – most significantly as Chief Economic Adviser to the Scottish Office from 1970 to 1992 – for much of his career. He approaches the subject from an objective standpoint, examining each aspect with a strong command of statistics and a dispassionate assessment of their merits. More importantly, perhaps, he comes unswayed by bias. On the one hand, he has advised successive United Kingdom governments on economic policy – a background synonymous with the constitutional status quo. On the other hand, as a senior civil servant in 1974, he compiled a report for ministers on whether North Sea oil revenues would allow an independent Scotland to manage financially. He concluded not only that they would but that they had the capacity to transform the country's fortunes.
His paper remained confidential at the time but, if it had been publicly available, the course of Scottish politics might have been very different. So neither side can afford to ignore Professor McCrone's analysis. He is, in journalistic terms, 'a reliable source'. More than that, he has the great merit of clarity. He examines each aspect of the independence debate with a combination of straightforward analysis and simply expressed conclusions, providing the bare minimum of statistics, set out in a helpfully comprehensible style. He addresses head on the questions that most trouble voters – whether floating or not. How wealthy a nation is Scotland? How dependent would it be on oil revenues? Would independence allow the country sufficient flexibility on taxation to bolster its economy? Could it afford to fund the welfare state on which it has grown to depend? Would it gain rapid entry to the EU and, if so, would it have to join the euro? What are the implications of adopting the pound as its currency? Could an independent Scotland have weathered the collapse of its once powerful banks? How viable is its energy policy? Above all, is North Sea oil the key that would unlock its potential or is it a diminishing and unreliable asset? In addition to these critical issues, Professor McCrone examines the case for other options facing the nation: differing forms of devolution such as the proposals contained in the current Scotland Act; the so-called Devo-Max plan for full-scale fiscal independence; and its less extreme version, Devo-Plus. He questions the assumptions behind each, making the important point that they would all, in different ways, impinge on other areas of the United Kingdom – not always to beneficial effect – and goes on to develop his own favoured alternative. The backcloth to these arguments is a decision which will confront every person of voting age living in Scotland. It is a more fundamental one than any they have voted on before. Unlike a general election, where the choice, however far-reaching, can be changed in five years' time, this one is irreversible. If Scotland does, indeed, secede from the United Kingdom to form an independent state, it cannot then decide to rejoin if the outcome is not to its liking. Equally, if the decision is to remain part of the United Kingdom, then that too is one that will endure for many years – 'at least for a generation', in the words of the First Minister, Alex Salmond; and, if any attempt were made to revisit it more frequently than that, it would, in all likelihood, be strongly resisted by the other countries of the UK because it would be destabilising for all of them. It is therefore important that the implications of this critical choice are fully understood by those who will make it. While many people will rest their decision on personal or emotional grounds, opting perhaps to stay within the United Kingdom because of family connections or historical legacy, others will feel equally strongly that it is precisely this history that urges them towards the re-establishment of an independent Scottish nation. Whatever the reasons influencing their vote, it is important that everyone who takes part in the referendum has a clear understanding of the implications for Scotland and the rest of the United Kingdom. They need answers to the questions that have arisen on the way towards the final decision and clear guidance on how best these can be answered. 
This book provides the road map that should be the essential companion for all those charged with deciding the future direction of their country. Magnus Linklater June 2013 #### **Preface** In 1707 when the members of the Scottish and English Parliaments passed the Act of Union, parliaments were very far from being representative of the people. But, in 2014, it will be for every person of voting age living in Scotland to decide on their country's constitutional future. There are many Scots living in other parts of the United Kingdom or abroad who have views on this matter and feel they should have been able to vote. But, apart from the major complications that would introduce, I agree with the view taken by the Scottish and UK governments that it is right for people living in Scotland to be the ones taking the decision. It is they who will live with the consequences of that decision, whatever it may be. I have no doubt that Scotland could prosper either as an independent country or if it chooses to remain part of the United Kingdom. But the consequences will be substantial whichever way the decision goes. So long as the issues are properly understood – or at least as well understood as the available information allows – there should be no complaint with the decision. This book weighs up the economic issues – it does not attempt to deal with other important issues, such as defence. It is in the belief that many economic aspects of the decision are not well understood, because people lack the information that they need, that I have written this book. I have never been a member of a political party and I am beholden to no person or group. I have tried to be as objective as possible in setting out the issues. I realise that this will not satisfy everyone. There will inevitably be those who will disagree with some of the judgements I make. But, if so, I hope that the arguments can be assessed on their merits rather than on the basis of preconceived ideas. In the following pages, there will be criticism of arguments put forward by the present Scottish government, but there will also be plenty of criticism of what Westminster governments have said and done. As I write, the independence debate is constantly developing. Scarcely a day goes by without either some change in government policy, a fresh set of statistics, the publication of a report on the subject or just comment in the press. Other books are being published or are planned. I should make it clear therefore that this book went to the publisher at the end of March 2013 and I have not been able to offer an opinion or comment on anything that was published after that date. I am grateful to Jeremy Peat, Professor David Bell and Sir David Edward, who have each read chapters, and to my son, Angus, who has read much of the book. All have offered valuable comments. Any errors or omissions that remain, however, are my responsibility alone. I am also grateful to my wife who has uncomplainingly tolerated the many hours I have spent absenting myself from other activities to be in my study writing this book. Gavin McCrone June 2013 #### 1 #### **How Well Off Are We?** Economic arguments have formed a large part of the SNP's case for independence, ever since the growth in support for their party in the late 1960s. For an independence movement this is unusual, although it appears now also to be a factor in Catalonia. 
Most commonly, when countries split to form independent states, it is because of differences in culture or serious grievances about the way they have been treated. Whatever the economic consequences, they take the view that they simply do not want any longer to be part of the larger state with which they have been associated. There have been numerous examples – the breakup of the Soviet Union, the collapse of Yugoslavia and even the independence of what then became the Irish Free State and is now the Irish Republic. In this latter case, although the economic condition of Ireland within the UK during much of the previous century and right up to the First World War certainly gave grounds for serious grievance, even there, as with the other countries, little if any detailed argument about the economic consequences of independence or the policies that a separate state might pursue took place. Indeed, what really seems to have brought the issue to a head in Ireland was not so much its economy as the savage and ill-judged reaction of the UK government to the 1916 uprising in Dublin, which resulted in many of the leaders being executed. Scotland has its own distinct culture and history. Moreover, during my lifetime I have witnessed the development of a growing awareness of Scotland's separate identity and the confidence that goes with that. Nevertheless, it is not so difficult to understand why the argument about the economy features as much as it does in the Scottish context. Scotland had its industrial revolution early and, during those years, the economy grew rapidly. But this early success left a legacy of problems that was to dominate the economy for much of the 20th century, as it did also in the north of England and South Wales, when the traditional industries of coal, steel, textiles and shipbuilding, together with associated engineering, went into decline. While, in the post-war decades, unemployment remained low by present day or pre-war standards, it was frequently twice the rate for the UK and net emigration was extremely high, amounting over the decades of the 1950s and 1960s to a total of 609,000. Approximately half of this was to the rest of the UK and half overseas. This was equivalent to 30 per cent more than the whole population of Edinburgh. There were serious problems of deprivation in some of the industrial areas, notably in the west of Scotland – a problem that persists to this day. Scotland was, of course, not the only part of the UK suffering these problems. But they gave rise to a feeling in Scotland that the country's economy was somehow not doing as well as it should and that the UK government in London was not doing enough. The UK government, through its regional development policy, especially during the 1960s and 1970s, when this policy was at its height with substantial funds devoted to it, attempted to deal with this problem. In addition to substantial grants available to encourage industrial investment in areas of high unemployment, the Highlands and Islands Development Board (HIDB) was set up in 1965 and the Scottish Development Agency (SDA) in 1975. Considerable success was achieved through the introduction of new industries, most notably, but by no means exclusively, electronics. Indeed, Scotland was the most successful part of the UK in attracting inward investment from overseas and, after Ireland, one of the most successful in Europe. 
But this did not eliminate the problem and the success with investment in the electronics industry received a severe setback after 2000, when the industry encountered a recession and much of the new investment went to countries with lower labour costs. What had been achieved was not always recognised and the details of successive regional development policies were largely lost on the general public. After the 1979 election, the new Conservative government's philosophy was against intervention and in favour of giving full rein to market forces. Assistance through regional policy was scaled down, though it still continues, as do SDA and HIDB. But both of these agencies were significantly modified in the early 1990s and renamed Scottish Enterprise and Highlands and Islands Enterprise. Their scope and remit were again changed by the SNP government after its election in 2007. The election of 1979 was followed by a period of severe economic difficulty in Scotland, as it was also in many parts of England, especially in the north. The tight monetary policies followed by the government resulted in the closure of many industrial firms, not only those in the older heavy industries of shipbuilding, steel, coal and heavy engineering but also some inward investment companies, the motor industry at Bathgate and Linwood, the aluminium smelter at Invergordon and many new businesses that had set up in Scotland. It was at this time that Scotland lost much of its manufacturing industry. It was ironic that, as North Sea oil production began to flow in substantial quantities, it also adversely affected much of Scotland's existing manufacturing through strengthening the UK's balance of payments and pushing up the exchange rate for the pound, so that many businesses became uncompetitive. Indeed, in these years, the decline of existing industry appeared to outweigh the very welcome benefits to companies that took advantage of the opportunities available from oil-related activity. I thought at the time that policies were needed to try to counter this adverse effect because, even if much of this was inevitable, it resulted in high unemployment and great distress. The result was that the unemployment rate in Scotland peaked at just under 14 per cent in 1986 – much higher even than in the recent severe recession. The sense of grievance stemming from the difficulties in the economy in past decades has therefore been a major factor in the growth of support for independence, even if now Scotland's performance relative to the rest of the United Kingdom is significantly improved. Some people felt that Scotland's economic performance, as part of the UK, was below its potential and started to question whether it might do better on its own. This feeling received a major boost when North Sea oil and gas were discovered in the 1970s. The vast bulk of the oil discoveries (though not the gas) were off the Scottish coast and, under international rules, would have been in Scotland's offshore territory were it an independent state. The importance of this seemed at first to be underestimated by the UK government and it was some time before appropriate policies to give benefit to the state were put in place but, once this was done, the revenues from taxation were very large indeed and of major benefit to the UK Exchequer. No longer did it seem so persuasive to the general public to argue that Scots would be worse off if their country became independent. It was no surprise therefore that support for independence grew. 
**How Wealthy is Scotland?**

Scotland's relative economic position within the UK is now enormously better than it was in the early 1970s. The strength of an economy is assessed by using statistics that measure the total of goods and services produced. Two measures are widely used – gross domestic product (GDP) and gross value added (GVA). The difference between the two is not important so long as comparisons are consistent. Scotland's gross value added (GVA) per head, at 98.6 per cent of the UK average in 2011, was exceeded only by London and the South East of England (Table 1). At a lower level of aggregation, the north east of Scotland is now one of the most prosperous parts of the UK with a GVA per head of 144 per cent of the UK average, second only to Inner London. This compares with the situation in the late 1950s and 1960s when Scotland's GDP per head was around 10 per cent below the UK average and in some years even lower, making it one of the poorest parts of the UK. In contrast, Wales and the Northern Region of England both seem to have fallen somewhat further behind over the same period, with GVA per head 75.2 per cent and 75.9 per cent of the UK average respectively. Net migration is now into, rather than out of, Scotland and unemployment at the latest count was fractionally below the UK average. This turnaround is partly a consequence of the 1960s and 1970s regional policies, including the work of Scottish Enterprise and Highlands and Islands Enterprise, but also the remarkable growth in Scotland of the financial services sector and employment across a range of industries associated with the development of North Sea oil and gas, especially in the North East. In addition, the decline of the older industries has now reduced them to a size where they are no longer such a drag on the performance of the economy. Scotland is therefore quite a wealthy country, whether compared with the rest of the United Kingdom or internationally, because the United Kingdom itself is one of the wealthier countries in Europe and indeed the world. Alex Salmond has claimed that, if Scotland were independent, it would be the sixth wealthiest country per head, based on OECD statistics. He arrives at this conclusion by adding to Scotland's GDP a Scottish geographical share of the output of the North Sea. This increases Scottish GDP by some 21 per cent and results in Scotland's GDP per head being exceeded in Europe only by Luxembourg, Norway, Switzerland and Monaco. However, this should not be accepted without qualification. In the first place, he uses Scotland's share of the North Sea as estimated by Professor Alex Kemp of Aberdeen University, which would give Scotland about 90 per cent of the output and tax revenue. The whole of the UK's offshore area has hitherto been treated for statistical purposes as a separate area and without any divisions. Kemp's estimate is derived by applying the international rules for division of offshore territory between states. It is the best estimate one can get but, as he himself points out, it is not something that is agreed by the rest of the United Kingdom. Negotiations would therefore be needed, as they frequently are between countries, and that may not prove such a simple matter. Secondly, GDP from oil and gas includes the profits of the oil companies and the income of those working offshore. Company profits will be distributed to shareholders, the majority of whom are not resident in Scotland, and some of those working offshore also come from other parts of the UK.
All of that would be taken account of were we to have estimates of Gross _National_ Product (GNP), where income paid abroad and income received from abroad are both calculated to give a net figure, but no allowance for this is made in GDP. Unfortunately GNP is much more difficult to estimate and no such estimates have been made for Scotland. The truth of the matter is that Scotland's GDP would, indeed, be some 21 per cent higher, if the output of the North Sea were included but, leaving aside tax revenue, which is dealt with in the next part of this chapter, it would not make much difference to the living standards of people in Scotland. **Table 1** _Gross Value Added in 2011 by Country and Region_ _Source: Office of National Statistics, December 2012_ Nevertheless, the argument for independence on economic grounds is still made. Scotland's growth is compared unfavourably with other countries of similar size, many of which have quite different economic circumstances. It is also compared unfavourably with the UK, where growth of output (as measured by GDP or GVA) has been faster than in Scotland over a long period; but this ignores the fact that it is not the growth of output in aggregate but output per head that is a guide to the wellbeing of the population. Inward migration has been much higher in the south of England than in Scotland and it is therefore not surprising that output in aggregate has risen faster for the UK as a whole than for Scotland. But, at the same time, the gap in output per head has narrowed so that in Scotland it is now almost equal to the UK average, showing that Scotland's position has improved when compared with the UK as a whole. **Does Scotland Pay its Way?** Taxes across the United Kingdom, apart from those that are the responsibility of local authorities, are collected by HM Revenue and Customs on behalf of the Treasury. Apart from local authority taxation, tax rates are the same across all countries and regions of the UK, although this may change when, under the recent Scotland Act, the Scottish government becomes responsible for part of income tax. The amount of revenue raised in the various parts of the UK therefore depends mainly on their respective wealth and level of incomes. Public expenditure, on the other hand, is disbursed without any regard for wealth, incomes or tax revenue of a particular part of the UK, with the aim of giving a broadly comparable level of public service. This ought to be related in some way to need and, in the case of spending programmes such as social protection that are UK wide, this will be the case. Under this system, there is no need to take account of how far revenue raised in any part of the UK covers the public expenditure in that country or region, since the budget is framed for the UK as a whole. It is not easy, therefore, to establish for which regions or countries expenditure is higher than the revenue raised and for which it is lower. Although there have been a number of academic studies that have given estimates, until recently there were no official estimates except for Scotland. The Silk Commission on devolution in Wales and the Northern Ireland executive have, however, now produced figures for their territories and both show much larger fiscal deficits than for Scotland. 
In Wales, public expenditure per head, though higher than the UK average, is not as high as for Scotland but, reflecting the lower GDP per head, tax revenue is much lower, and, for Northern Ireland, expenditure per head is higher than for Scotland, while revenue per head, as for Wales, is lower. No official estimates have been published for English regions, except for identifiable public expenditure. These show that the northern region of England also had public expenditure per head above the UK average and, since its GDP per head was similar to that of Wales, I would expect it also to have a substantial fiscal deficit. The Scottish Office and, following devolution, the Scottish government have published annual estimates of government expenditure and revenue since 1991 with figures that go back to 1986. It is far from a straightforward task. While the Treasury publishes figures for 'identifiable expenditure' by country and region, this cannot include those items such as defence, foreign embassies and interest on the National Debt for which there is no breakdown. A share of these items can only be allocated using some ratio such as population. The revenue side is even more difficult. Many people living in Scotland and companies operating in Scotland are not taxed in Scotland but in some other part of the UK. The revenue which relates to Scotland is derived from information collected by HM Revenue and Customs but has to be estimated. The resulting figures have been criticised, especially by the SNP in the early years. However they have been much improved, are now the responsibility of the SNP government and give as clear a picture of Scotland's present budgetary position as can be obtained. What they show is that taxation revenue from Scotland is approximately equal to its population share of the UK. This is not surprising given that Scotland's GDP per head is only very slightly below the UK average. But public expenditure per head is over 10 per cent above the average for the UK (see Table 2). It has been above the UK average for many years, certainly going back to the 1960s and, according to my calculations, even earlier. In the 1960s, there was a deliberate decision by the then Conservative government to increase public expenditure in Scotland and in the northeast of England, because of their difficult economic circumstances and need for development. The extent to which Scotland's public expenditure per head has been above the UK average seems to have narrowed over the years, from about 20 per cent above in the 1990s to 15 per cent above in the mid 2000s and 10 per cent above in the latest year. But comparisons are difficult because improvements have been made in the methodology. In earlier years, the comparison was only based on 'identifiable' expenditure but, in recent years, the estimates have included a population share of defence, national debt interest and international services. This larger denominator narrows the gap and makes it difficult to get a continuous series of figures on a consistent basis. Using the Scottish government's figures for identifiable expenditure only, the gap will still appear to be of the order of 14 per cent. Before 1979, Scotland's share of public expenditure was determined as a result of annual discussions between the Secretary of State for Scotland and the Chief Secretary to the Treasury. But, since 1979, it has mainly been determined by the Barnett formula and comes in the form of a block grant. The workings of this formula are obscure to most people. 
But in fact the Barnett formula is quite simple. It is no more than the application of Scotland's population ratio to that of England to determine any change in public expenditure that Scotland receives when public expenditure increases or decreases in England. As such it was thought by many people, myself included, that it would result in a gradual narrowing of the gap between public expenditure per head in Scotland and the UK average. This has been referred to as the Barnett squeeze. In the event, this has not happened as rapidly as expected. This is due to several factors. First, the formula is only applied to the annual change in public expenditure and this is quite small when compared with the inherited amount from previous years. The formula does not adjust the inherited amount at all, even if there were a decline in the Scottish population. Secondly, it relates only to the part of public spending controlled by the Scottish government and not even to all of that, since expenditure on agriculture is separately determined. The biggest single component of public expenditure in Scotland is social protection, which is the responsibility of the UK government and is not subject to the formula at all. This exceeds spending on health and education combined, the two largest programmes funded by the Scottish government. Thirdly, the formula has, on occasion, been bypassed if there seemed a pressing need to do so – for example, if there was a national negotiation on wages in some sector of public service such as the NHS. The upshot is that Scottish public expenditure per head was still £1,197 per head higher than the UK average in 2011–12. This not only results in expenditure being substantially higher than tax revenue (excluding tax revenue from the North Sea) but is a source of periodic and growing complaint in England, where it is taken to mean that Scotland is subsidised by the UK. Only if public expenditure in the various parts of the UK was seen to be clearly related to need could it be properly defended against such complaints. But no needs assessment has been carried out since a Treasury study in the late 1970s. This was done then in preparation for the devolution scheme in the 1970s, which was never implemented. It appeared to show that, at the time, Scottish public expenditure was indeed higher than an assessment of need would justify. Scotland obviously does have special needs – particularly the higher costs associated with providing services in remote communities with a scattered population and the poor health record and deprivation in some urban areas, especially in the west of Scotland. But incomes in Scotland are now much closer to the UK average than in the 1970s, when the assessment was done, and, although in the absence of an up-to-date needs assessment no firm conclusion can be drawn, it seems unlikely that it would fully justify the level of public expenditure that Scotland currently receives as compared with other parts of the UK. The counterpart of the Calman Commission in Scotland, which led to the enhanced powers for the Scottish government contained in the 2012 Scotland Act, was the Holtham Commission in Wales. Using the formula used for distributing public expenditure in England and applying it to Wales and Scotland, Holtham concluded that Wales, although also receiving public expenditure per head above the UK average, received too small a share of UK public expenditure and Scotland too much. 
This was largely because Scotland's income per head (as measured by GDP) was much higher than that of Wales, which was well below the UK average. If this were all there was to this subject, one would have to conclude that the government of an independent Scotland, responsible for all taxation and public expenditure, would find itself with a very substantial budget deficit, amounting to 14.6 per cent of GDP in 2011–12 according to the Scottish government's own figures in _Government Expenditure and Revenue Scotland 2011–12_ (GERS). But that takes no account of revenue from North Sea oil which would accrue to an independent Scotland. If the revenue from the geographical share that would belong to Scotland as an independent state is included, following Kemp's analysis, this reduces the deficit to 8.1 per cent of GDP in 2010–11 and to 5.0 per cent in 2011–12 – still high but less than the UK deficit of 7.9 per cent in the same year. **Table 2** _Total Public Expenditure Per Capita – Scotland and UK 2007–08 to 2011–12_ _Source:_ Government Expenditure and Revenue Scotland 2011–12 _, March 2013_ Both of these deficits are, of course, unsustainable. They are a consequence of the financial crash of 2008 and the recession that followed. This caused tax revenue to fall and expenditure on benefits to rise as unemployment increased. The austerity measures imposed by the UK government are intended to get the UK deficit down but the economy has shrunk, partly as a consequence of the austerity, and economic growth has been badly affected, so that the targets for reducing the deficit have become elusive. But how realistic is the Scottish figure for the deficit of 5.0 per cent? It is of course hypothetical since, as part of the UK, Scotland does not have to balance its public revenue and expenditure. Alex Salmond has said that it shows that Scotland is in a stronger financial position than the UK. Is that really so? A number of qualifications have to be made. In the first place, it is still a deficit and a deficit that is unsustainable. If Scotland had to balance its own budget, measures would be required to reduce it. Secondly, it is dependent on a geographical share of North Sea revenues accruing to Scotland. A more detailed discussion of the importance of North Sea oil will be found in Chapter 7. Suffice it to say here that there are many uncertainties. The Scottish government's assumptions about the North Sea, as explained earlier, use the geographical share of the North Sea based on the median line as estimated by Professor Alex Kemp but, as he himself said in evidence to the House of Commons Committee on Energy and Climate Change, the median line would be taken as the starting point and negotiations would follow, as they have done with other countries bordering the British part of the sea. All this would take time and could involve arbitration. Whatever the outcome of such negotiations, the revenues from the North Sea are of course substantial but they are also very volatile, depending on the output from the North Sea in any one year, on the price of oil and on the profits made by the oil companies. They have varied from about £1 billion a year to over £12 billion. At their peak in the early 1980s, they were very large indeed whereas, from the mid 1980s, when the price fell sharply, they were much reduced and, in the early 1990s, would have been insufficient, had they accrued to Scotland, to cover the fiscal deficit. 
Even over the last three years, they have shown great volatility from £12.9 billion in 2008–09, falling to £6.5 billion in 2009–10 and rising again to £8.8 billion in 2010–11 and £11.3 billion in 2011–12. For the future, one can only speculate. Oil production peaked in 1999 and, although it is expected to remain substantial for many years, it is now well below its peak level and expected gradually to decline. Revenue depends of course not just on output but also on the price and prices have proved very volatile. For the future, the outlook for prices is particularly uncertain – on the one hand, the rapid development of countries such as China and India may push prices up but, on the other, the exploitation of shale gas, which is now a major factor in the United States and may become one in Europe, could keep them down. Even if, as many expect, prices stay high, profitability may fall as companies exploit more marginal fields and the costs mount of removing structures from fields that have ceased production. These various factors are discussed in Chapter 7 but they have led the Office for Budget Responsibility to forecast quite a steep fall in tax revenues for the years ahead. It is the stated policy of the present Scottish government that, when conditions allow, the North Sea revenues, or at least a proportion of them, would be paid into a special fund. In this, they are influenced by the example of Norway, which set up such a fund in 1990. This too is discussed in Chapter 7. Given the likely variability in revenue from taxation on oil, putting the proceeds into a special fund would mean that its volatility would not affect the annual budget. It also makes sense because using oil revenues to finance ordinary public expenditure amounts to running down a capital asset to finance current spending. But, while paying the North Sea revenues into such a fund would be very desirable, the government of an independent Scotland could not do without this revenue to finance its budget, so long as the balance between expenditure and revenue remained as it is now. Setting North Sea revenue aside for a special fund would therefore only mean that even more draconian steps would have to be taken to eliminate the budget deficit. In the longer run, the situation may be different – one would hope so – but this would require quite a transformation in the Scottish economy, either by reducing the need for such a high level of public expenditure or somehow increasing other tax receipts through economic growth. There is a further uncertainty over the balance in what would become the budget of an independent Scotland. The figures in _Government Expenditure and Revenue Scotland 2011–2012_ allocate the UK's interest payments on the National Debt on a per capita basis. For an independent Scotland, the National Debt would first have to be split with the rest of the UK. It could be done on a per capita basis but this might be resisted by the UK government on the grounds that, as Scotland's GDP was increased by the addition of output from the North Sea, it would be reasonable to allocate the National Debt by the share of GDP. That would make it some 21 per cent higher than a per capita allocation. The GERS estimate for interest on the National Debt in 2011–12 was £4,072 million but, if it was split by GDP, including the North Sea share, it would raise the interest cost to around £4,930 million. There would also be uncertainty over the rate of interest that a Scottish government would have to pay. 
It probably would not be, as assumed in GERS, the same rate of interest as for the UK. Theoretically, the rate of interest on Scottish debt might be either higher or lower than for the rest of the UK. But the rate on UK debt is currently at a historic low and there must be doubt over whether this would be matched for Scottish debt, unless agreement was reached to issue common sterling bonds for both countries. That would require stringent conditions on fiscal policy to be met that satisfied both countries. Otherwise, as a newly independent country, Scotland would have to establish its credibility as a borrower, not only with the rating agencies but also with potential lenders. Even a one per cent addition to the rate of interest paid on new borrowing would significantly increase the cost to Scotland. A number of factors are important here – not least whether or not Scotland has its own currency and whether, if it continues to use sterling, there is a credible lender of last resort. These issues are dealt with in a later chapter. The consequence of all these reservations is that it would be unwise to assume, in the event of Scotland becoming independent, that its deficit would actually turn out to be as is given in the GERS estimate. It could scarcely be lower but it might easily be higher. It would be wrong to leave this subject, however, without considering what might happen in future under the present arrangements, whether or not there is an enhanced degree of devolution. Quite apart from the view expressed by the Holtham Commission, there has been an increasingly strong view in England that, under the Barnett formula arrangements, Scotland receives a more generous share of public expenditure than is justified. Sooner or later this may lead to some action by a future UK government. This is counterbalanced by a fear in Scotland that, since the Barnett formula is only a population ratio, it will eventually result in the squeeze that was earlier expected. As already explained, it is impossible to tell whether the present level of public expenditure, and in particular the extent to which it is above the UK average, is justified in the absence of a proper needs assessment. So far, pressure to address this issue has been ignored, with the government saying it has no plans to alter the formula. But a time may well come when this line can no longer be sustained and a revised system is introduced that allocates expenditure more closely in relation to need. If that happens, it should be based on a full needs assessment across all the regions and countries of the UK and carried out by an independent body, acceptable both to the three devolved administrations and to the government of the UK. Scotland would very probably then find itself required to reduce its public spending. I have always taken the view that, sooner or later, this would be inevitable but that the adjustment should be planned over a long period and at a time when the economy was buoyant. It would clearly be painful and would have major political consequences if attempted in circumstances such as the present. Actually, since Scotland's population is only about 8.4 per cent of the total population of the UK, a redistribution of spending between the four countries of the UK to accord with a needs assessment would make little difference to England and would scarcely be noticed by the average voter. But that would certainly not be the case in Scotland. 
It may be, therefore, that the present arrangements will endure for a considerable time, simply because the UK government might not think it worth the hassle of making a change or of incurring political problems in Scotland, when the constitutional issue is in the minds of the electorate. What this means, however, is that, whether Scotland remains part of the UK under any scheme of devolution or becomes independent, there is likely to be pressure on its level of public spending. Independence would involve uncertainty over both the level and the volatility of oil revenues, for which prudent policy would suggest that relying on them to balance the budget should be avoided and a part of them at least paid into a special fund. Remaining a devolved part of the UK, on the other hand, is likely to mean that eventually the level of expenditure will have to be justified through a needs assessment. The crucial difference is that, as an independent state, Scotland would be entirely responsible for its budget from the time that it became independent and would have to live within its means. But, as parts of a larger state, the revenue and expenditure of the individual countries and regions that form the UK would not need to balance. With broadly comparable taxes, the richer areas would contribute more than those that were poorer and expenditure would not be related to the revenue of a particular country or region but to what is required to provide a comparable level of public services.

#### 2

#### **Devo-Max, Devo-Plus and the Status Quo**

The independence debate has seen the publication of several schemes that could have formed the basis of a third option in the 2014 referendum to give greater devolution. It is commonly said that more devolution is what the majority of Scots would vote for, had the option been available. Indeed, an Ipsos MORI poll conducted in June 2012 found that 41 per cent of those responding favoured Scotland remaining part of the UK but with increased devolution, 29 per cent favoured Scotland remaining part of the UK with the same powers as at present and 27 per cent wanted Scotland to become a fully independent country. The referendum, however, will be a straight choice between independence and the status quo, with no third option. What might such an option have amounted to? It is worth considering this, even if it is not an option in the referendum, because, in recognition of popular pressure, some members of all three unionist parties are working on options for greater devolution, which may lead to implementation after the referendum. At present, neither the details nor the implications of a third option are well understood; nor are many people yet fully aware of what the status quo would amount to because the Scotland Act 2012 has not yet taken effect and will take some time to do so. The status quo does not therefore mean continuing with devolution as it has been since 1999. That is no longer possible.

**The Status Quo**

If independence is rejected in the referendum and the UK government brings forward no further proposals to enhance devolution, the Scotland Act 2012 will form the basis of the system of government in Scotland. This will increase the powers of the Scottish Parliament as set out in the UK government's White Paper 'Strengthening Scotland's Future', which closely followed the recommendations of the Calman Commission on Scottish devolution set up by the three unionist parties in the Scottish Parliament.
Much of the Act is concerned with working arrangements between the two governments but the most important provisions, and those that concern us here, are those that are designed to give greater responsibility to the Scottish Parliament for raising revenue. It has been a major criticism of devolution since 1999 that the Scottish Parliament had responsibility for a large part of public expenditure in Scotland but very little for raising the revenue to finance it. That resulted, it was argued, in insufficient accountability for the spending decisions that Parliament made. Under the system that has applied since 1999, the only taxes for which there is any responsibility in Scotland are council tax and business rates. The Act setting up the Parliament gave power to vary the standard rate of Income Tax either up or down by 3 pence in the pound but this power was never used. As a result, only 14 per cent of expenditure for which responsibility lies in Scotland is financed by taxes set in Scotland. The principal change is that, in future, the Scottish Parliament will be required to set a Scottish rate of income tax each year to replace part of the UK income tax. From April 2016, the UK government will reduce the main UK rates of income tax in Scotland by 10 pence. The block grant will be reduced by a similar amount to compensate, leaving the Scottish Parliament to determine what rate of income tax to levy, in place of the 10 pence, to finance its expenditure. Responsibility for the structure of tax rates will remain with Westminster but, if changes are made by the UK Parliament to the structure of income tax rates, a principle of 'no detriment' will apply. This would result in compensating changes to the block grant to ensure that Scottish government revenue is not affected. This power over income tax represents a large flow of income – if the Scottish tax rate were 10 pence in the pound, it would raise £4,500 million or 17 per cent of the Scottish budget. The Scottish rate of tax will apply to all those defined as Scottish taxpayers. This includes those resident in Scotland and those whose principal connection with the UK is with Scotland. The Calman Commission found that the cost of applying the Scottish rate of tax to income from savings and distributions would be prohibitive and recommended instead that half of the tax revenue from this income should be assigned to the Scottish government. The White Paper accepts that applying the Scottish rate of tax to income from these sources is impractical but argues that assigning tax revenues does not enhance accountability. For this reason, neither assignment nor power to alter the rates on income from savings and distributions were included in the Act. However, in addition to a share of income tax, the Act gives the Scottish Parliament complete responsibility for stamp duty tax on land and property (but not on documents or stock exchange transactions) and for tax on landfill. The revenue from these two taxes, however, is relatively modest compared with income tax. The Calman Commission recommended devolution of two further taxes – the tax on air passengers and that on aggregates. The revenue from them would also have been fairly modest but they are not included in the Scotland Act 2012; the former is presently being reviewed and the latter is subject to legal challenge in the European Courts. However, the Act also gives the Scottish government power to levy any new taxes, subject to approval by both the Scottish and UK Parliaments. 
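To make the income tax mechanism described above more concrete, the sketch below works through the arithmetic. The £4,500 million yield of a 10p rate is the figure quoted above; the assumptions that revenue scales in simple proportion to the rate chosen, and that the block grant is reduced by exactly the value of the 10p cut in UK rates, are my own simplifications rather than provisions of the Act.

```python
# A rough sketch of the Scottish rate of income tax under the Scotland Act 2012:
# the UK cuts its rates in Scotland by 10p, reduces the block grant by the value
# of that 10p, and the Scottish Parliament sets its own rate in its place.
# Figures and the linear scaling of revenue with the rate are illustrative only.

TEN_PENCE_YIELD = 4_500  # £ million raised by a 10p Scottish rate (figure quoted above)

def net_budget_change(scottish_rate_pence: float) -> float:
    """Net change to the Scottish budget (£ million) relative to the status quo,
    assuming revenue is proportional to the rate and the block grant falls by
    exactly the value of the 10p reduction in UK rates."""
    revenue = TEN_PENCE_YIELD * scottish_rate_pence / 10
    block_grant_reduction = TEN_PENCE_YIELD
    return revenue - block_grant_reduction

if __name__ == "__main__":
    for rate in (9, 10, 11):
        print(f"Scottish rate of {rate}p: net change of {net_budget_change(rate):+,.0f} £ million")
```

On these assumptions, setting the rate at 10p leaves the Scottish budget unchanged, while each penny above or below that level adds or subtracts roughly £450 million.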
The effect of devolving the three taxes in the Scotland Act, together with the responsibility that already exists for council tax and business rates, is to increase to about 35 per cent the share of budget revenue for which the Scottish Parliament and local authorities would be responsible. The White Paper argues that the revenue from these devolved taxes would finance a similar share of the Scottish government's budget to that of the devolved legislatures in Belgium, Italy, Spain and Australia. In addition to these tax powers, the Act gives the Scottish Parliament substantial new powers to borrow. Under the arrangements that have applied since 1999, Scottish Ministers have had only limited power to borrow for short-term current spending and this power was never used. In future, because income from taxation is less predictable than from the UK block grant, the new arrangements involve a degree of risk that has not hitherto existed. To allow for temporary shortfalls resulting from this, as well as deviations between forecast revenues and expenditure, Scottish Ministers are to be given power to borrow up to £500 million for cumulative current debt. In addition, they will have power to borrow up to 10 per cent of the capital budget in any one year, with a limit of £2.2 billion on the total stock of borrowing for capital investment. This regime would reduce the share of public expenditure financed by the block grant to 65 per cent. At present, as Chapter 1 explained, this grant is determined by the Barnett formula and has attracted much criticism, especially in England. But, apart from reducing the share of public expenditure that it would finance, there are no proposals to change it. While these changes in taxation and borrowing will increase the accountability of the Scottish Parliament, they do not do anything to give the Scottish Parliament power and responsibility over macroeconomic policy. Indeed, the White Paper explicitly reserves this to Westminster. That does not mean, of course, that the Scottish government cannot adopt policies that improve the performance of the economy. Ways in which Scotland's economic growth might be improved with the Scottish government's existing responsibilities are discussed in the next chapter. Suffice it to say here that those who look for some independence in macroeconomic policy must accept that responsibility over demand management through monetary, fiscal and exchange rate policies must inevitably rest with the state and, in Scotland's case, even with devolution, that state is still the UK. **Devo-Max** Devo-Max has never been very clearly defined but it presumably means almost total fiscal separation of Scotland from the rest of the UK. Contributions would still be required to meet the costs of common services such as the royal family, defence, servicing the national debt and foreign embassies; monetary union with the rest of the UK would continue and foreign exchange reserves would be held for the UK as a whole. The main feature of such an arrangement would be that there would be no social pact. If Scotland were wealthier than other parts of the UK, it would not be expected to contribute support to them and, if Scotland was poorer, it could not expect any help from them. No attempt would be made to equalise social provision, and welfare benefits, including State Pensions, might be at different rates from their equivalents elsewhere in the UK. It is not easy to find examples of this kind of arrangement in other countries. 
The Campbell Committee said it was not aware of any. The case that seems to come nearest to it is that of the Basque country and Navarra in Spain and this is referred to by the Scottish government. The Basque country has a higher GDP per head than the Spanish average and has a population of only 2 million, 5 per cent of the Spanish total. In a recent paper for the David Hume Institute, César Colino argued that this system has been profitable for the Basque country because of its relative wealth. Unsurprisingly, it has attractions for the areas that are richer than the rest of the country but is considered unjust by the others. Even here, however, there is not complete fiscal autonomy. In accordance with EU rules, there can be no separate rate of VAT and the Spanish state retains responsibilities for social security, justice, defence, foreign affairs, transfers to the EU, macroeconomic policy and regulation of the financial sector. A contribution for these central services is paid by the Basque country to the Spanish state. Indeed even with this large amount of devolution, 50 per cent of Basque public expenditure, mainly for State Pensions and unemployment benefits, remains the responsibility of the central government which also raises 40 per cent of the public revenues. The Basque country therefore remains subject to fiscal decisions made by central government, including the policies to reduce the Spanish budget deficit. The Spanish government has also had to defend its fiscal arrangements for the Basque country against appeals from the European Commission at the European Court of Justice. Could such a system work in Scotland? As was shown in the last chapter, public spending per head is some 10 per cent above the UK average, while revenues, excluding oil and gas, are no more than equal to the average. So, unless the Scottish government received a geographical share of oil and gas revenues, there would have to be some sharp cuts in public expenditure. The geographical share of North Sea revenue would approximately cover the higher level of spending but, so long as Scotland remained a part of the UK, the rest of the UK might see it as unreasonable to give Scotland so much of the North Sea revenue and resist any change. At present the offshore area is not divided between different countries in the UK but is treated as a resource for the benefit of the whole state. The UK government would probably want that to continue. Under Devo-Max there could still be no separate rate of VAT and welfare and social security would be problematic (a discussion of welfare devolution is in Chapter 8). There could also be a strong political reaction from any part of the UK that felt it was disadvantaged by the financial arrangements for Scotland. Indeed, it could well be that, rather than accept such an arrangement, the rest of the UK might prefer to let Scotland become an independent state. Some experts have argued that fiscal independence would encourage the Scottish government to put a greater emphasis on economic growth, so that its economy performed better. Professors Andrew Hughes Hallett and Drew Scott were subjected to close questioning on this by a committee of the Scottish Parliament in early 2011 after asserting that there was evidence of this from other parts of the world. But the evidence is not very convincing and, in Scotland's case, politicians of all parties share a commitment to try to improve the country's rate of economic growth. 
If they were aware of measures that would improve the country's performance, they should already be adopting them.

**Devo-Plus**

Several suggestions have been put forward for giving Scotland more devolution than will be provided by the Scotland Act 2012 but not going as far as Devo-Max. Indeed, there have been so many proposals that many people may find them confusing. The following paragraphs outline and discuss three such schemes from:

• an interparty group of Liberal Democrat, Labour and Conservative MSPs, chaired by Jeremy Purvis, whose report was published by Reform Scotland;
• a committee set up by the Liberal Democrats, chaired by Sir Menzies Campbell;
• a report for the Institute for Public Policy Research (IPPR) by Professor Alan Trench.

This last, which was only published as this chapter was being written, seeks to outline a system that could be applicable to all three devolved administrations in the UK and contains the most comprehensive discussion of what is possible. All of these see major advantage for Scotland in remaining part of the UK but consider that there is a case for additional fiscal powers to make devolution more acceptable. Both the Purvis group and the Campbell Committee would like the Scottish Parliament to be entrenched by legislation, so that it could only be dissolved if it agreed, and the UK Parliament's power to legislate on devolved matters removed. The Campbell Committee would see this as a step towards a proper federal constitution for the UK. The Purvis group's proposals, which are in three stages, go much further than the others and raise some of the same difficulties as were noted with Devo-Max.

All three sets of proposals include complete devolution of income tax, but the Campbell Committee would retain the same system of allowances and reliefs throughout the UK and does not consider that taxation from savings and investments could be devolved. It suggests assignment of the revenue instead. Purvis proposes devolution of corporation tax by 2020, something the SNP Scottish government would also like; the Campbell Committee, on the other hand, do not consider it an appropriate tax to devolve; and Trench outlines the substantial difficulties for both companies and tax authorities. He refers to the dangers of companies shifting their profits to the part of the UK with the lowest tax and concludes that it would only be possible if profits were allocated between the constituent countries and regions of the UK on the basis of payroll. Only the Purvis group proposes eventual devolution of a geographical share of North Sea revenues. The Campbell Committee argue that the whole of Britain's offshore area should be under a single regime and not divided, while Trench suggests that a population share of the revenues could be allocated to the Scottish government. None of the three propose devolution of VAT, as different rates of this tax within a single member state are not permitted under EU rules, but Trench suggests that the revenue (apart from the contribution that goes as revenue to the EU) could be assigned to the devolved administrations. None of the schemes propose devolving responsibility for national insurance. This accords with their views on welfare expenditure. Although the Purvis group would like the Scottish government to be given a larger role in welfare, all three accept that the bulk of welfare expenditure, including the State Pension, should remain with the UK government.
In the end, taking account also of the other smaller taxes it wishes to see devolved, the Purvis group's proposals would result in almost all of the expenditure of the Scottish government and local authorities eventually being covered by taxes raised in Scotland. The other two sets of proposals would result in around 55 per cent being financed by taxes raised in Scotland; they would therefore need to be supplemented by a significant but smaller block grant. Both the Campbell and Trench proposals recognise that the block grant, as determined by the Barnett formula, is no longer acceptable to the whole UK and that it should move to a system based on an assessment of need. This would still give Scotland a bigger grant than a straight equalisation of fiscal revenue, such as applies in some federal countries, but the adjustment would be likely to involve a significant cut. The assessment should be carried out for all four countries of the UK and, ideally, for the English regions as well, and it is important that it should be done by an independent body in which all the administrations have confidence. **Assessment of Devo-Plus** How realistic are these proposals? It would certainly be possible to go beyond the changes made in the Scotland Act 2012. The Scottish Parliament could be made responsible for a greater share of income tax than the 10 pence envisaged under the Act. The case against this, as the Calman report argues, is that it could be unwise to have the Scottish government too heavily dependent on one tax, the proceeds of which could be volatile. Moreover, if the UK government had no locus at all in setting income tax in Scotland, it would have to rely on other taxes – particularly VAT – to provide for servicing the UK national debt, for defence and for dealing with emergencies such as arose in recent years to support the banks. Such expenditure could be substantial and subject to variation, as unexpected needs arise. These considerations point to some share of income tax remaining with the UK Parliament. The Purvis group envisages the eventual devolution of fuel duty and excise duties but this is not proposed by the Campbell Committee and Trench only considers it for duties on alcohol and tobacco. But he shows there would be considerable difficulty even with them because they are levied not at the point of sale but of production or import. They would probably have to be replaced by a new tax altogether and, if it was a sales tax, that might conflict with EU rules on VAT. If the issue of accountability is a major concern, as I believe it to be, it would be possible to follow the practice in some other countries and assign the proceeds of VAT (as proposed by Trench) and some of the smaller taxes to the Scottish Parliament but without freedom to alter tax rates. Some people regard tax assignment as pointless if tax rates cannot be altered. But it would tie Scottish public expenditure more closely to the revenue actually generated in Scotland, enable the block grant to be much smaller and perhaps give less scope to taxpayers elsewhere in the UK to complain about unfair funding for Scotland. And, if the Scottish government was able through its policies to encourage the growth of the economy, some benefit from that would accrue to it through higher tax revenue. Corporation tax is a contentious issue. The SNP government has made it plain that it would like this tax devolved. 
There are a number of countries – Switzerland is an example – where corporate tax rates are set by the regions (in Switzerland's case, the Cantons). Following the judgment of the European Court of Justice in the Azores case, the EU only permits different rates of corporation tax within one member state if the region in which the tax is lower is not subsidised for this purpose by the rest of the state. Otherwise it would be regarded as a state aid and subject to the competition rules on state aids. The Holtham Commission on a funding settlement for Wales has proposed a rebate or a lower rate of tax where levels of GDP are well below the average of the state. That would make it part of regional policy to encourage investment in poorer areas, even if it had to be financed by the region itself. Since Scotland's GDP per head is very close to the UK average, this would not apply, even if such a scheme were eventually implemented elsewhere in the UK. In a UK context, I have always regarded devolution of corporation tax rates as raising major difficulties. The strongest case is that made for Northern Ireland, where it can be argued there is a competitive disadvantage because the standard rate of corporation tax in the neighbouring Irish Republic is only 12.5 per cent. The SNP government's desire to have control of this tax seems to stem from the Irish Republic's success in using it to attract inward investment but it has also resulted in companies simply basing their registered offices in Ireland (the so-called 'brass plate effect'). Even in such cases, of course, Ireland has still benefitted from tax on the revenue declared at the registered office. If Scotland had ambitions to follow this example while still part of the UK, it would help to attract investment to Scotland so long as the difference in tax rates was substantial but much of this might be at the expense of other parts of the UK and would be particularly resented in Wales, the north of England or other regions, where GDP per head is lower than in Scotland. They would probably make a case for equal treatment. Even then, it could produce major distortions. For these reasons, I would expect it to be strongly resisted by the UK government. Moreover, the Scottish government already has power over business rates, which yield almost £2 billion a year and can be altered as it thinks fit. A lower rate of corporation tax would reduce revenue for the Scottish government, which Trench estimates at a loss of £1,724 million a year, if it were cut to the Irish Republic's rate. Unless it provided such a stimulus to the economy that it made up for the loss, this would be a problem for the government and, even if there was a significant stimulus, it would take years to make up for the lost revenue. Oil and gas revenues were proposed for devolution by the Purvis group for the third stage of its scheme. But, as argued already in relation to Devo-Max, so long as Scotland remained part of the UK, there would be no formal need for any division of the offshore area and I suspect that the UK government would want to continue to treat it as a resource for the whole UK. I would therefore expect strong resistance from the UK government. **Conclusion** Devo-Max offers the opportunity for greater independence in economic policy but it would probably provoke major resistance from other parts of the UK. This would be especially so if policies were seen as unfair or discriminatory. 
But the important point about Devo-Max is that, like independence, it would end the social pact with the rest of the UK whereby there is pooling of resources to achieve equality of social provision. Scotland would also have serious difficulty in funding its above-average level of public expenditure unless it received the geographical share of tax revenues from North Sea oil that it could expect as an independent state. Such terms seem unlikely to be acceptable to a UK government so long as Scotland was part of the UK. Differences in corporation tax and some other taxes, as well as in levels of benefit, would also create distortions and anomalies, which could provoke strong reaction from other parts of the UK. Devo-Max is an attempt to have complete financial independence, apart from monetary policy, while remaining in the UK. That is unprecedented elsewhere, even in the Basque country, which comes closest to what the proponents of Devo-Max might want. For Scotland, if it were implemented, it would involve many of the risks of independence without either the UK safety net or the policy freedom of independence.

The Devo-Plus schemes all improve accountability. The proposals of the Purvis group raise some of the same difficulties as Devo-Max over the devolution of corporation tax and Scotland's geographical share of revenues from the North Sea. The proposals of the Campbell Committee and Alan Trench's report for IPPR, however, do not raise these problems and they seem to me to be workable. They would result in more of the Scottish government's public expenditure being financed by Scottish tax revenues and less by block grant than with the status quo after implementation of the 2012 Scotland Act.

_Scottish Revenue in 2011–12 from Taxes Proposed for Devolution_

| | **£ million** |
|---|---|
| Income tax (three quarters of total) | 8,092 |
| VAT (minus EU contribution) | 9,269 |
| Insurance premium tax | 251 |
| Aggregates levy | 52 |
| Landfill tax | 97 |
| Stamp duty land tax | 275 |
| Air passenger duty | 213 |
| Council tax | 1,987 |
| Business rates | 1,933 |
| Total | 22,169 |
| Total Scottish government and local authority expenditure | 38,624 |
| Taxation revenue as a % of expenditure | 57.4 |

_Source:_ _Government Expenditure and Revenue Scotland 2011–12_, March 2013

The table above sets out a suggested scheme for increased devolution of taxes that seems to me realistic and would have a chance of being acceptable. It builds on the Devo-Plus proposals of the Campbell Committee and Alan Trench but does not accept all their suggestions. The main difference is that only three quarters of income tax revenue would be devolved, on the grounds that, to cope with unexpected demands and emergencies, the UK government should retain some power to levy such a major tax. This introduces a complication, however, because if a Scottish government wanted to alter the structure of income tax, which I think it should have power to do, the UK component would have to be separated and possibly form a separate tax. This would be necessary if the Scottish government had a different view from the UK government over which group in society should bear the greatest tax burden. I would assign the revenue of VAT, subject only to the deduction for the required funding for the EU, while accepting that there would be no power to alter rates. This is because I attach importance to making revenue from taxes in Scotland cover as much of public expenditure as practicable.
It would also enable the Scottish government to benefit from any increased revenue as a result of improved economic performance. Together with the other smaller taxes in the Scotland Act and the air passenger duty and aggregates levy, which were proposed by Calman but not so far implemented, the revenue from these taxes would cover over half the expenditure of the Scottish government and local authorities; the block grant would be substantially reduced and what remained would have to be gradually adjusted to accord with a proper needs assessment. Such an assessment, however, should be undertaken for all the countries and regions of the UK to ensure that the distribution of expenditure between them was fair; and moving to it should be in accordance with a timetable agreed by all three devolved administrations and the government of the UK.

In considering the various options outlined in this chapter, it is important to be clear what the purpose of more devolution is. All of the schemes discussed would result in a greater part of Scottish public expenditure being financed by taxes raised in Scotland. But what is it that the advocates of more devolution actually want? Probably more control over the policies that are applied to Scotland; and, while the increased tax-raising powers under these schemes make possible a somewhat different spread of taxation according to income, they do not perhaps do as much to make possible the substantial differences in policy that some people may wish for. The scope for increased control that independence would give is discussed in the next chapter.

#### 3 ####

**The Scope for an Independent Economic Policy**

The last chapter discussed the scope for greater economic powers for the Scottish Parliament under various schemes for devolution. These could result in Scotland raising more of its own tax revenue, thereby increasing accountability, but the scope for independence in managing the economy would be limited. This is especially so as fiscal, monetary and exchange rate policies would all be reserved to the UK government. Schemes for greater devolution might give the Scottish government greater power to borrow than it has now and there could be some differences in income tax rates and other devolved taxes. But the essence of macroeconomic policy is the ability to budget for a surplus or a deficit, so as to control inflation and stimulate economic growth. Policy to deal with these matters would inevitably remain with the UK government.

With independence these constraints theoretically disappear and Scottish Ministers have claimed that only independence would give them the levers they need to manage the economy in the interests of the Scottish people. But, in practical terms, no government can pursue policies regardless of what its neighbours and trading partners are doing. All economies nowadays are interdependent, as can readily be seen from the effect on the UK of policies in the United States and the European Union. This would be especially so for Scotland, given its relatively small size, the fact that the rest of the UK would be its dominant trading partner and that freedom of movement of both capital and labour throughout the single market of the present UK would continue.

**Fiscal Policy**

An independent Scotland would raise all of its own tax revenue, including personal taxation, VAT, excise duties and corporation tax; it would have to decide what rates should be set for each tax.
It would also be responsible for all of its public expenditure, including defence, foreign embassies and interest on the national debt, items that are at present dealt with centrally by the UK. But there would still be constraints. Differences in VAT or in excise duties, although allowed for individual states under EU rules, could encourage trading across the border, if they were substantial, as happens now with alcohol between Britain and Continental countries. If personal taxation was different from the rest of the UK, there would be a risk that some people would vote with their feet and move either to or from Scotland, though I suspect that the difference would have to be significant, certainly larger than present differences in council tax, for this to become an issue. The UK is not only a single market for goods and capital but the integration of the labour market also makes it very easy for people to move to wherever gives them most opportunity. Obviously there is no language barrier as applies between other EU countries. Differences in rates of corporation tax exist between EU countries. But, if the Scottish government tried, as is its stated aim, to reduce the rate of tax to a very low level, it could be seen as an attempt not just to help companies in Scotland but to attract economic activity that might otherwise go elsewhere in the UK or to other member states of the EU. That would raise problems both with the European Union and the remainder of the UK. As it is, several EU countries have taken issue with Ireland's low 12.5 per cent rate of corporation tax, notably at the time of the Irish financial bailout, arguing that it was unacceptably distorting. While Ireland has, so far, managed to resist this pressure, it is unlikely that a newly independent Scotland, seeking to establish itself within the EU, would be able to do so. Moreover, as a response to the difficulties in the eurozone, proposals are being developed that would include a banking union and much greater integration of fiscal policy for countries in the zone. This will apparently give members of the zone some oversight of each other's budgets, with the intention of ensuring that they all pursue policies of financial rectitude. It is not clear at present how far this will go or whether it will be acceptable to member states. But it is likely to increase pressure for greater harmonisation of taxes, including corporation tax, even for countries that are not in the eurozone but are part of the EU. **The Importance of North Sea Oil Revenues** The inclusion of the North Sea on a geographical basis makes a substantial difference to the size of the Scottish economy. As we have seen in Chapter 1, it would immediately add about 21 per cent to Scotland's GDP. It would also provide a large flow of taxation revenue and such revenue would be a much larger component of a Scottish government's budget than it has been of the budget of the UK. Its contribution to the balance of payments would also be very important and, if there were a separate Scottish currency, its value would be heavily influenced by the price and volume of oil produced. The problem, as we also saw in Chapter 1, is that the value of the oil and gas produced has fluctuated over the years, not just because output has varied but even more because prices have been volatile. The implications of North Sea oil revenues for the balance of payments and the exchange rate are considerable and could be the opposite of those for the government's budget. 
The higher the revenues, the more the Scottish government's budget would gain. But the balance of payments on foreign transactions is a different matter. There are no estimates of what Scotland's balance of payments, in the event of independence, might be. But a high oil price and, consequently, high revenues carry the danger that, by generating a large balance of payments surplus, they could push the exchange rate up, if Scotland had its own currency, thereby threatening damage to the non-oil economy. This is not a minor concern. The Dutch economy suffered in the 1970s when discoveries of natural gas threatened to damage non-gas related activities and this became known as 'the Dutch disease'. In the same way, the massive growth in UK oil revenues in the early 1980s was one of the factors that brought about a very sharp rise in the sterling exchange rate, which, in turn, was a major factor in the recession of those years and caused the loss of a lot of manufacturing industry. The Scottish government would therefore have to stand ready to counteract this effect by investing abroad, as Norway has done with its oil fund, or by some other means, if the rest of the economy was being affected. As part of the UK, Scotland has been within a state where revenue and expenditure in the individual territories and regions did not need to balance, as explained in Chapter 1. Policy has been based on the assumption that the stronger parts of the country help the weaker, thereby enabling a comparable standard of public services to be maintained, which some of them would otherwise be unable to afford. There have also been centrally funded regional development policies to help growth in areas in need of development or to replace activities that have declined. Such policies have varied greatly in strength, depending on the philosophy of the government of the time. But Scotland has, over the last half century at least, been one of the parts of the UK that has benefitted. Within federal states, similar transfer mechanisms and development policies usually exist. The Scottish electorate would therefore have to decide whether it wanted to retain the safety net of being part of a larger country, where there is a generally comparable standard of public services, regardless of what can be afforded from local taxation at any particular time or whether, for the sake of being independent, it will take the risk that there may be times when, perhaps because of a drop in oil revenues or some other reason, taxation revenue falls and public expenditure has to be cut to match it. Those arguing for independence or some form of complete fiscal autonomy for Scotland, such as Devo-Max, need to face up to this issue, as their policies would end such inter-regional transfers. **Monetary Policy and the Choice of Currency** The Scottish government has said that it would be its intention to keep sterling as the national currency following independence and that the monetary union of the UK would therefore continue. This is also the recommendation of the 2013 report of the Scottish government's Fiscal Commission. But this leaves a host of important questions unanswered. Would the Scottish government sell its own bonds on the market and, if so, what interest rate might they have to pay? How much influence would the Scottish government have on the Bank of England's monetary policy? On what conditions would the Bank of England be prepared to continue as lender of last resort for Scotland? 
It would seem that no discussions about this with the Bank have so far taken place. If the Scottish government wanted it to perform this role, the Bank's agreement and that of the government of the remainder of the UK would have to be sought. A Scottish request for continued Bank of England support as lender of last resort could meet with refusal, especially if the remainder of the UK was not satisfied with the economic policies being followed by Holyrood. If agreement was reached to enable the Bank of England to act as a central bank for both countries, it might be possible, following the example of the European Central Bank (ECB), to argue for there to be a Scottish member of the Court of the Bank. Scottish Ministers have said they would want Scottish representation on the Bank's Monetary Policy Committee (MPC). That is likely to be more difficult and, again, it might be refused. None of the present members of the MPC or the Court is there to represent a particular territory. If Scotland and the rest of the UK were in monetary union, there might be a Scottish member of the MPC but only so long as he or she had appropriate expertise and was not there just to represent the interests of Scotland. And, as the example of European monetary union shows, even representation on the board of the ECB does not guarantee a monetary policy that suits all members; inevitably the largest economies are those to which the bank has to pay most regard in deciding its policy. In Scotland's case, there would be a huge demographic imbalance with the remainder of the UK having over 91 per cent of the population. Scotland's influence would therefore be bound to be limited. Alternatively, within the monetary union, might there be a Scottish central bank? If so, would it then be able to act as lender of last resort? This might depend on whether Scotland continued to use Bank of England notes or reestablished the separate pound Scots, which could be pegged to be exchangeable at par with sterling. If the European example is a guide, within EMU the individual central banks of member states do not have money-creation powers and therefore cannot act as lenders of last resort for their respective countries. Had they retained their own currencies but linked them to the euro, they might have been able to do so but, depending on the circumstances, their actions could then put the currency link under great pressure, eventually forcing it to break to form a new exchange rate. This is what happened with the regime of fixed exchange rates in the European Exchange Rate Mechanism (ERM), in the years when it preceded the single currency. Here the experience of Ireland is interesting, although the present circumstances of Scotland and those of the then Irish Free State in 1922 are very different. After independence, Ireland retained sterling as its currency and conditions were much as they are in Scotland now – Bank of England notes continued to circulate and the Irish banks issued notes of their own, which were accepted as sterling both by businesses and members of the public and were backed by deposits at the Bank of England. The first Irish Banking Commission set up in 1926 proposed introducing Ireland's own currency notes but emphasised the importance of retaining the 1:1 parity with sterling. These notes were therefore exchangeable at par with Bank of England notes and managed by an Irish Currency Commission. 
In 1934, there was a second Banking Commission, one of the main recommendations of which was the establishment of a central bank. But Ireland did not actually get a central bank until 1943, following the Central Bank Act of 1942. The remarkable consequence of all this was that, in the absence of any formal agreement with the Bank of England, Ireland was without a lender of last resort for some 21 years. This is particularly surprising, considering that those years included the 1930s depression and the first four years of a world war. Mercifully, this need did not arise as no Irish banks got into trouble during this time and successive governments operated extremely conservative fiscal policies. With much greater speculative activity now a feature of financial markets, it is hard to see an independent Scotland getting away with having no lender of last resort. Nor, one imagines, would an independent Scotland, trying to stimulate its economy and wanting to use fiscal levers for this purpose, be content with an extremely conservative fiscal policy of the kind followed by Ireland after independence. But, with a deficit of 5.0 per cent of GDP in 2011–12 (even including a geographical share of North Sea revenues), there would hardly be scope for an expansionary fiscal policy. Was Ireland right to follow this policy of retaining the link with sterling through thick and thin? Successive experts, including the First Banking Commission and the majority report of the Second, argued strongly for this. But the minority report of the Second Commission disagreed, saying that they could not: acquiesce in the extraordinary view that this country, alone among responsible entities in the world, should not ever have the power to make decisions, and that no apparatus or mechanism for controlling the volume and direction of credit should ever be brought into existence. Nowadays there are some in Ireland who strongly criticise the policies pursued in the early decades after its independence. Conor McCabe, for example, argues forcefully that, given the poor and underdeveloped state of the Irish economy at that time, retaining an overvalued currency, which is what the parity link with sterling implied, was profoundly damaging to the economy and was one of the factors that led to Ireland's relative stagnation during that time. What is clear is that, if an independent Scotland wished to continue to use sterling, there would have to be negotiations with the rest of the UK and the outcome of these negotiations would be crucial to how policy operated. If the Scottish government wanted to borrow using common sterling bonds, like the eurobonds proposed but not yet implemented for the EU, that would imply that they were guaranteed by both the UK and Scottish governments. For this to be acceptable, the rest of the UK and Scotland would have to be satisfied on the sustainability of each other's fiscal policy, just as eurozone countries now see the need for fiscal union to support their monetary union. If, on the other hand, Scotland issued its own bonds to cover any necessary borrowing, these might require a higher interest rate than the bonds of the rest of the UK, until the market was satisfied, as a result of experience, that they were equally safe and fully backed by a lender of last resort. So there is much that would need to be decided and many issues that have not as yet even been discussed. 
If Scotland does decide to become independent, my own view is that it, like Ireland after 1922, could continue in monetary union with the rest of the UK, if it accepted all the constraints that implied. But these constraints would be considerable, even if the UK government and the Bank of England were receptive to what Scotland wanted. Scotland would have very little influence on monetary policy and fiscal policy would, in effect, be overseen by the rest of the UK. It is interesting to note that, when the Czech and the Slovak Republics split in 1993, before they joined the EU, they intended to maintain monetary union and a common currency at least initially, though with the prospect that they might adopt their own separate currencies eventually. In the event, the monetary union collapsed in less than six weeks, as a large volume of funds flowed from Slovakia into banks in the Czech Republic. Controls on capital movements had to be imposed and the existing Czechoslovak currency was over-stamped by each country to distinguish it, until two new currencies could be introduced. The lesson is that, if the markets think a monetary union will not last, it becomes very difficult and costly to maintain and will, in the end, fail. Nevertheless, continuing the monetary union with the rest of the UK is the preferred policy of the SNP government and is probably how it would start, perhaps for some years. But it is likely that there would be a need for a separate central bank and, if Scotland was a member of the EU, this could be a required condition of its membership. A separate central bank would not mean the end of sterling monetary union – indeed, the eurozone countries all have their own central banks. But, if Scotland's central bank was to be given power to act as lender of last resort – and my view is that it should have such power – it would have to have a separate currency as well. This could be pegged to sterling or indeed the euro, depending on circumstances, just as monetary union continued with Ireland up to 1978. There would be considerable advantages in maintaining the sterling monetary union, as the Scottish government's Fiscal Commission argues, given the closely integrated nature of the Scottish economy with the rest of the present UK and the fact that some financial institutions based in Scotland do most of their business south of the border. But, as Professor John Kay (formerly a member of the First Minister's Council of Economic Advisers) has recently said in a lecture at Glasgow University, the Scottish government's freedom of action to tailor policy to Scottish needs – the economic levers about which politicians so frequently speak – would be so constrained that it might, in the end, induce a Scottish government to create a separate currency. The importance of having a separate note issue, even if pegged to sterling, is that it would make it possible to avoid the type of disaster – stemming from loss of competitiveness – that has affected the countries of southern Europe within the eurozone. Some smaller countries in Europe that still have their own currencies, Denmark being one obvious example, have found that it makes sense to shadow the currency of a larger area, usually the euro. But this still gives them freedom, in extremis, to allow their currencies to be revalued either up or down, should the need arise. They are not locked into a monetary union from which they cannot escape. 
This may mean that interest rates are slightly higher than for the currency that is being shadowed, because of exchange rate risk, but the kinds of problems that are now so distressingly evident in Spain, Portugal, Italy and Greece could be avoided or at least substantially mitigated. So there would be important decisions to be taken on the currency and on how a Scottish government would intend to manage its budget. We need a clearer statement from the government about the additional levers it seeks and how it would use them. **Economic Policy with Existing Powers** The truth is that there is much that the Scottish government can do with its existing powers to improve the growth of the economy. Assistance for economic development, education and skill training, and infrastructure investment are devolved responsibilities. All of these are of the greatest importance for the performance of the economy. Vocational skill training has, in my view, always been treated as the poor relation of university education and has tended to suffer in consequence, not only in Scotland but in the UK as a whole. Increasing the number of university graduates is important for the economy and is to be greatly welcomed, but the economy also depends on a good supply of school leavers who acquire high quality vocational skills. There seems now to be a tendency to regard qualification in vocational skills as in some ways less important or second best. The Scottish government recently decided to reduce funding for further education colleges while, at the same time, adhering to the policy of free university education for Scottish students at Scottish universities, despite the decisions of the other countries in the UK to charge fees. This decision has raised doubts over whether Holyrood has thought through its priorities on higher education properly. When compared with some other parts of the UK, an important feature of the Scottish economy has been the relatively poor growth from new business start-ups and from small businesses. Scottish Enterprise and the SDA before it have given a lot of attention to trying to foster business start-ups and encouraging the growth of small firms. But it is not easy – a lot of small businesses fail and whether others succeed may depend heavily on the support they get. This was a large part of the rationale for setting up the SDA in 1975, with which I was then involved. It was believed at that time that there was a shortage of equity finance for small businesses. This, it was argued, made them too heavily dependent on bank finance in the form of loans and so gave them insufficient flexibility to survive fluctuations in the market. There have been many improvements to the availability of equity finance since the SDA was set up. But, following the financial crisis of the last few years, funding from the banks has become much harder to obtain, as the priority for them has been to rebuild their balance sheets and increase their capital. This makes it all the more important that the Scottish government, probably through Scottish Enterprise and Highlands and Islands Enterprise, gives close attention to the needs of small businesses, both in the provision of advice and in meeting their financial needs. Many entrepreneurs start businesses nowadays with an eye to an eventual exit, often taking the form of takeover by a larger group. There is nothing wrong with that – indeed, without it, there would be less business formation. 
What can be more problematic is the takeover of much larger, longer-established UK companies. Obviously there are times when a company is struggling and takeover by a competitor or a large firm is the only way to save the business. Or the company may be stagnating under current management and the shareholders take the view that a change in control is the way to avoid eventual decline. But there are many examples where there has been little or no economic advantage in a change of control. In the UK, the takeover of Cadbury's by the American firm Kraft in 2010 is a case where nothing was to be gained either for the firm or the country by the takeover. Sometimes, fear that they will be taken over prompts the management of a successful firm to try to take over others to prevent it being taken over itself. The clearest examples of this are the two Scottish banks. As Ray Perman's recent book on Bank of Scotland makes clear, the disastrous merger of Bank of Scotland with Halifax largely came about because the management felt they were at risk if they remained independent. In the case of the Royal Bank also, though there were certainly other factors, fear of takeover was probably in the minds of the board in trying to make the bank a bigger and yet bigger business. It is high time something was done about this. Norman Tebbit, when UK Secretary of State for Trade and Industry in the 1980s, removed from the rules for appraising takeovers a clause that required the authorities to take into account regional implications. This was done after a successful defence of the Royal Bank's independence, when a merger was proposed with Standard Chartered Bank in 1981 and the Hongkong and Shanghai Banking Corporation put in a counter bid. In my view, the regional implications need to be restored to the appraisal of such takeovers, not only for Scotland but for other parts of the UK as well. If the Scottish economy is to prosper, companies headquartered in Scotland should be protected from aggressive takeovers carried out only for short-term gain or for the aggrandisement of management. The conclusion must surely be that takeovers are too easy and that, in some cases, they can be seriously damaging. But it is not easy to see how such a change in policy could be effective unless it was done for the UK rather than just for Scotland. In respect of all the issues set out above – vocational training, support and finance for new and small businesses, and takeovers – the Scottish government could learn much from Germany. The German economy has the largest and strongest manufacturing sector in Europe; the training of apprentices is well organised and few school leavers are without either further education or training. The _Mittelstand_ , the small and medium-sized business sector in Germany, has been an outstandingly important feature of the economy and it depends in large degree for its success on its close links with financial institutions. And a longer-term view on the part of both business and finance has resulted in the takeover frenzy that has been such a feature of Britain being much less evident in the German economy. **Could the Austerity of the Last Few Years Have Been Avoided?** What, I suspect, supporters of independence or much greater devolution would really like is a government that has the power to do things differently from the rest of the UK in ways that would improve the quality of their lives. The austerity policies of the present UK government are a case in point. 
These policies have been controversial and they have resulted in a lot of hardship and unemployment. There can be no dispute about the need for a country to be able to live within its means over the longer term. Neither the UK nor Scotland is doing that at present, as can be seen from their unsustainable budget deficits and the consequent rise in the UK national debt (see Chapter 1). But political independence has not saved other countries from similar trouble.

I am among those who think that Chancellor George Osborne's policies have been misguided and needlessly harsh. He has failed to meet his targets either of eliminating the budget deficit before the next election or of having the national debt falling by then. Austerity is now to be extended well beyond the term of the present Westminster Parliament. All of this stems from the banking crisis of 2007/08 (I discuss the difficulties that Scotland might have had in handling that in Chapter 5). The problem now is how to get the country back to a position where it is living within its means.

To many people and, it seems, to some politicians, a government deficit is seen as analogous to an individual spending more than he or she earns. The only solution then is to reduce spending so that it no longer exceeds income. But, for a country, the analogy is misleading. Cutting public spending and raising taxes affect the level of activity in the economy. This causes unemployment and expenditure on benefits to rise and taxation revenue to fall. The policy will therefore only succeed if these secondary effects are smaller than the initial gain to the budget's balance from the cut in expenditure or higher taxes. The size of the secondary effects depends on what economists call the fiscal multiplier – the fall in GDP produced by each unit of fiscal tightening. If the multiplier is low, the secondary effects will be modest and the attempt to get the budget back into balance by cutting expenditure or raising taxes will have a good chance of success. But, if the multiplier is high, the secondary effects will nullify much of what policy is trying to achieve and a downward spiral may develop. This is what seems to be happening now in certain eurozone countries and has happened to some extent in the UK.

It seems that there was a serious misjudgement over the size of the fiscal multiplier at the start. Initially, it was thought to be low (about 0.5, meaning that a fiscal tightening worth 1 per cent of GDP would reduce GDP by about half a per cent). But that was based on research done in normal times and more recent research by the International Monetary Fund has concluded that the fiscal multiplier is now much higher (in the range 0.9–1.7). This is because, in normal times, a tightening of fiscal policy can be offset by a relaxation in monetary policy. Also, the earlier calculation assumed that the policy was applied by one country in isolation. In present conditions, with Bank of England interest rates at their lowest ever level, monetary policy cannot be relaxed further and almost all of Europe and the United States are trying to cut their budget deficits simultaneously. As each country does this, imports from other countries are reduced and the consequence of so many countries attempting this together increases the depressive effect. Action taken to reduce the budget deficit has therefore had a much greater impact on growth than anticipated and this, in turn, makes it much more difficult to get the budget into balance.
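The arithmetic behind this can be illustrated with a minimal sketch. The multiplier values below are those mentioned above; the 'budget sensitivity' of 0.5 – the assumed deterioration in the budget balance for each 1 per cent fall in GDP through lost taxes and higher benefit spending – is my own illustrative assumption, not a figure from the IMF research or any official source.

```python
# A minimal sketch of why the size of the fiscal multiplier matters for
# deficit reduction. All figures are illustrative assumptions.

def net_deficit_reduction(tightening: float, multiplier: float,
                          budget_sensitivity: float = 0.5) -> float:
    """Net improvement in the budget balance (as % of GDP) from a fiscal
    tightening, once the secondary hit to revenues and benefits is counted.

    tightening         -- initial spending cut or tax rise, as % of GDP
    multiplier         -- fall in GDP per unit of fiscal tightening
    budget_sensitivity -- assumed fall in the budget balance per 1% fall
                          in GDP (lost taxes plus extra benefit spending)
    """
    gdp_loss = multiplier * tightening
    secondary_hit = budget_sensitivity * gdp_loss
    return tightening - secondary_hit

if __name__ == "__main__":
    for m in (0.5, 0.9, 1.7):
        gain = net_deficit_reduction(1.0, m)
        print(f"multiplier {m}: a 1% of GDP tightening improves the balance by {gain:.2f}% of GDP")
```

On these assumptions, a multiplier of 0.5 allows a tightening of 1 per cent of GDP to improve the balance by 0.75 per cent of GDP, whereas a multiplier of 1.7 leaves a net gain of only 0.15 per cent – most of the effort is swallowed up by the weaker economy.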
In an important paper, Dawn Holland and Jonathan Portes of the National Institute of Economic and Social Research conclude that the poor growth performance of most EU countries, including the UK, during this recession can be attributed to the attempts at fiscal consolidation. This is because of the spill-over effects from one country to another of the action taken and the inability of monetary policy to compensate. These negative effects have been larger than governments expected. They might have been less if policy had been aimed at measures with a lower multiplier. These might include raising income tax rather than VAT, because income tax falls more heavily on higher income groups, who do not spend so much of their income, and offsetting cuts in current public spending, at least partially, with increased infrastructure investment that can yield a return. The most important lesson, perhaps, is that countries are now so interdependent and the spill-over effects from one country to another so great that the most effective policy would be a programme of actions carefully co-ordinated between countries. Such a programme might have resulted in higher borrowing in the short term but the ratio of national debt to GDP would have stopped rising sooner, as the economy's growth began to pick up.

What are the implications of all this for an independent Scotland? The fiscal multiplier is not the same for all countries. Small open economies, such as Scotland's, will have a lower multiplier than large, more self-sufficient ones, because so much of the depressive effect of cuts in public spending or increases in taxes will hit imports. While the adverse effect on growth of trying to balance its budget would be relatively small for a Scottish government acting on its own, the spill-over effects from and to other countries would be very large. This conclusion is not very surprising. It means that what a Scottish government could do to stave off the adverse effects on its economy of an attempt by the UK and other countries to get their budgets into balance would be very limited. Inevitably the Scottish economy would be very dependent on policies adopted by its trading partners, whatever the constitutional arrangement. That does not leave a Scottish government powerless – it could still, for example, decide its own spending priorities – but its effectiveness would depend on how far it was able to influence others and agree on co-ordinated action.

#### 4 ####

**Scotland and Europe**

Scotland's position in Europe, if it becomes independent, has generated a lot of debate. The SNP government were clearly at fault in their original claim that Scotland would automatically remain a member of the EU. The First Minister gave the impression that his government had received legal advice on the issue but it then transpired it had no such advice. This was a shambles. The UK government, on the other hand, said that Scotland, as a newly independent state, would be outside the EU and would have to apply for membership in the same way as any other candidate state. This was confirmed by José Manuel Barroso, the president of the European Commission, in a letter to the House of Lords Economic Affairs Committee. Lord Kerr of Kinlochard, former head of the Foreign Office, Ambassador and UK Permanent Representative to the European Union, warned that Scotland might find it difficult to obtain the same terms and opt-outs that had been available to it as part of the UK.
On the other hand, Sir David Edward, the distinguished Scottish former judge at the European Court of Justice, has taken the view that, as article 50 of the Treaty of Lisbon makes provision for a member state to withdraw from the EU and provides for a process of negotiation and agreement in such circumstances, it would be contrary to the spirit of the Treaties to suddenly treat Scotland as outside the EU, if by popular referendum it chose to become a separate state. If leaving the EU involves negotiation, then Scotland's circumstance must also be the subject of negotiation during the period between the referendum vote and the moment when separation takes effect. He goes on to argue that the key is the good faith of other member states (including the United Kingdom until the moment of separation). The result of negotiation could then be treaty amendment, rather than a new Accession Treaty.

At the time of writing, the UK government has just published the detailed opinion it has obtained from two distinguished experts in international law, Professors James Crawford of Cambridge and Alan Boyle of Edinburgh University. They acknowledge Sir David's argument but think it more likely that Scotland would be treated much as any other state wanting accession, which would require a Treaty of Accession. There is no precedent in international law. No part of an existing member state has become independent before and then applied for membership in its own right. There is therefore no recognised procedure that meets the case. All these views are, however, those of experts, based on their experience rather than on precedent. I claim no legal expertise but I find Sir David's view persuasive. To argue otherwise would mean that, at the time of separation from the rest of the UK, all existing arrangements with the EU including, presumably, grants from the Structural Funds, the Common Agricultural Policy (CAP) payments, provision for Erasmus students and access in Scottish territorial waters for fishermen of other member states would suddenly end. That would not happen even to a country that wished to leave the EU.

All are agreed, however, that negotiations would be necessary and that the agreement of all existing 27 member states (or 28, assuming Croatia joins in 2013) would be required. The main difference is in the time it might take and the extent of the upheaval. If Sir David is right, it might be accomplished quite quickly, within the two or more years between the referendum result and actual separation from the rest of the UK. But it would still require all the other member states to agree.

The issue is of major importance, especially for business. Many of the inward investment companies that decided to come to Scotland did so because they saw it as a good base from which to serve the large EU market. Scotland had what they needed – good sites for development, a supply of excellent labour, including graduates, and a dependable political and legal environment. Those that came from the United States and Japan also probably found it helpful that the language they needed was English, since that has become the international language for business. If, therefore, access from Scotland to the European single market were now to appear to be at risk, not only would it be hard to attract more investment from abroad, but companies already here might begin to think of moving elsewhere.
There are some who have argued that, if Scotland becomes independent, that would mean repealing the 1707 Act of Union and that Scotland and the rest of the UK would then be exactly in the same position as new states. They then assert that, if Scotland has to apply for EU membership as a new state, so would the remainder of the UK. This is given short shrift in the legal opinion. It would be Scotland's decision in a referendum to secede from the UK, not any decision taken by the electorate in the rest of the UK. The rest of the UK would therefore be the continuing state and would inherit all the treaties signed by it before Scotland seceded. It would, however, have to face some, _probably_ slight, adjustment to its terms of EU membership on such matters as the number of MEPs and its budget rebate, as it would no longer be a country of 63 million people. Scotland would be the new state and would have to negotiate with the EU afresh to decide its conditions of membership. This would include its voting rights, its number of MEPs and whether or not it sought the same derogations as the UK.

As part of the UK, Scotland has been in the EU for 40 years and there is therefore no question that it satisfies the criteria for membership. However, the UK has derogations from the EU treaties enabling it to exclude itself from joining the euro or the Schengen free borders area. Like the rest of the UK, Scotland has also shared in the budgetary rebate negotiated by Mrs Thatcher, as UK Prime Minister, at Fontainebleau in 1984.

**Schengen**

Securing an opt-out from the Schengen area seems likely to be the least difficult of these problems. The Schengen Agreement was originally independent of the EU but was absorbed into EU law by the Amsterdam Treaty of 1997. It requires members to abolish internal border controls with each other, while strengthening them with non-member states. Its provisions include a common policy on people seeking temporary entry and harmonisation of external border controls. There is also provision for cross-border police and judicial co-operation. The area includes all EU states except the UK and Ireland, although implementation is not yet effective in some of the newer members that are not yet fully compliant – in Cyprus because of the dispute with the northern Turkish part of the island and in Romania and Bulgaria because of concern over anti-corruption measures and organised crime. However, the Schengen area also includes several countries that are not members of the EU: Norway and Iceland are members stemming from the Nordic Passport Union, which predated Schengen; Switzerland joined in 2008 and Liechtenstein in 2011; and there are no border controls with the three micro-states of Monaco, San Marino and Vatican City.

The UK did not join because, as an island, it argued that frontier controls were a better way to control illegal immigration than identity cards, residence permits and registration with the police, which apply in other countries. And Ireland is also outside the area because, since its independence from the UK in 1922, both countries have maintained a common passport-free travel area. How well the control of illegal immigration argument stands up in the light of the UK's experience is perhaps open to question; but, if joining the area involved introduction of identity cards, that is something that the UK, having abandoned a scheme to introduce them, would certainly continue to resist.
If Scotland became independent, it would presumably argue that, like Ireland, it was part of a common passport-free travel area with the rest of the UK, with which it also has a unified labour market. It would seem quite unreasonable for a derogation to be resisted when it has been given to the UK and Ireland. But, in the unlikely event that it became a problem, Scotland would then, if it joined the EU, have to install border controls on travel to and from England, a prospect that would alarm many people.

**Joining the Euro**

Under the Copenhagen criteria which define whether a country is eligible to join the European Union, membership presupposes the candidate country's ability to take on all the obligations of membership which include adherence to the aims of economic and monetary union (EMU). This would include eventually adopting the euro. However, before joining the euro, a state's legislation, for example in relation to its central bank, has to be compatible with EMU and it must also have achieved a high degree of sustainable convergence, as measured by four specified criteria:

• achievement of a high degree of price stability, with a rate of inflation close to that of the three best performing countries;

• a deficit on the government's budget at or less than 3 per cent of GDP and a debt ratio of less than 60 per cent of GDP or declining so that it is seen to be approaching that level;

• ability to keep to the normal exchange rate fluctuation margins of the European Monetary System (EMS) of which it would have to have been a member for two years;

• durability of convergence within the EMS, as shown by long-term interest rate levels.

There was a fair amount of fudging of these criteria when the eurozone was set up. Several countries, notably Italy and Belgium, did not meet the 60 per cent debt rule but were accepted because they argued that the ratio was falling. Some also had difficulty with the 3 per cent deficit criterion but argued that they had taken action that would enable them to meet it. Greece would not have been admitted if the true state of its finances had been understood.

If independence was achieved as a result of the 2014 referendum, Scotland clearly would not qualify then or, indeed, for some time. The budget deficit would have to be substantially less than its present level – estimated by the Scottish government at 8.1 per cent for 2010–11 and 5.0 per cent for 2011–12 (including North Sea revenues – see Chapter 1) and its debt ratio, depending on how it is apportioned with the rest of the UK, will certainly be over 60 per cent; indeed, it would probably be closer to 80 per cent. It could be argued, of course, that most of the existing members of EMU have debt ratios of more than 60 per cent as a result of the financial crisis and recession. But the UK does not belong to the European Monetary System. Scotland therefore would not qualify until it had been a member for two years, had at least shown convincing progress in meeting the deficit and debt criteria, had kept to the normal exchange rate fluctuation margins of the EMS and had proved durability of convergence as reflected in the long-term interest rate on its bonds.

No country can be forced or obliged to join the EMS system of managed exchange rates. When Sweden held a referendum, the result was against joining EMU. It has therefore not joined EMS.
So long as this position is maintained it will not become a member of the eurozone, although it must have been expected that it would, in due course, do so when it signed its Treaty of Accession. No one can now force it to join against the expressed wish of its people, and to ask it to leave the EU in consequence of this would be absurd. The Czech Republic continues to use its own currency, although Slovakia, from which it separated before joining the EU, has joined the eurozone. As a result of the serious crisis affecting the zone over the last few years, it is probably now less likely that the Czech Republic will join EMU in the foreseeable future, although again it must have been expected to do so when it joined the EU.

To my mind the most important point is that the financial crisis, especially the extreme difficulties experienced by the southern European countries and by Ireland, must have changed perceptions considerably, whatever previous expectations and requirements may have been. Not only will this have made countries that are not yet in the eurozone less likely to want to join but, as they wrestle with the problems of the zone and work towards closer financial integration, including a banking union, the countries now in the zone must be less enthusiastic about admitting a newcomer, especially one like Scotland, whose two large banks have so catastrophically had to be rescued. If an independent Scotland were to remain in monetary union with the rest of the UK, in accordance with present SNP government policy, that would in any case preclude Scotland joining EMU, unless the UK did so. That now seems a very distant prospect if it is a prospect at all.

For all these reasons, I would not expect existing members to insist on Scotland adopting the euro as its currency if it seeks membership of the EU as a separate state. In the midst of the present major crisis, the long-term outlook for monetary union in Europe is not at all clear. But circumstances can easily change. My own view, as discussed in Chapter 3, is that Scotland, as an independent country, should eventually have its own currency and this could be pegged either to sterling or to the euro, depending on what seemed in the country's best interest.

**An EU Banking Union**

The UK, along with Sweden and the Czech Republic, has also opted not to participate in the proposed EU banking union, of which all other EU states (not only those in the eurozone) are expected to be members. So that could become an issue too. All of this is still at a very early stage but the participating countries will be required to hand over supervision of their banks to a European Banking Authority under the control of the European Central Bank. This is to be followed by a common means for winding up financial institutions in trouble and a financial backstop for dealing with a banking crisis. These latter arrangements are not yet agreed, but they follow logically from common supervision of the banks. An independent Scotland would have to decide what it should do about this. If it retained sterling as its currency, it could not reasonably participate in the banking union, so long as the rest of the UK did not do so. But, if Scotland had its own currency, there could be a strong case for it participating, even if it did not join the eurozone, as it would give added protection in the event of a major crisis, such as has been experienced in the last few years.
**Conclusion on Opt-outs**

So the EU countries would be faced with something they have never faced before – a country which has separated from a member state, which does not intend to join the Schengen area or the eurozone, but intends to remain in monetary union with the rest of the member state from which it separated. How would that be regarded? Some other member states, worried about the precedent it set for parts of their own territories that might want to split off and follow the Scottish example, might try to use negotiations over these derogations as an excuse to raise difficulties over Scottish membership. In particular, it now seems that Catalonia is to have an 'advisory' referendum on whether to secede from Spain; there is ongoing concern about a possible split in Belgium between Flanders and Wallonia; and there is also concern in Cyprus that the northern Turkish part might seek EU membership in its own right. It may seem unfair, if Scotland otherwise satisfies the criteria, but these other cases are relevant because they could affect the attitude of other EU governments to Scotland's position. The last French president promised that his country would have a referendum before there was any further enlargement of the EU. It could be argued, if necessary, that this would not apply since Scotland is already in the EU as part of the UK. The Spanish foreign minister has suggested that his country might object because of Spain's concern about Catalonia. So the position is unclear and much skilful negotiation might be needed. Any one of the existing member states could impose a veto.

If, however, these concerns can be satisfied, especially as at present the Scots enjoy European citizenship, it should be possible to extend EU membership to an independent Scotland. There would have to be negotiations, and these would involve the rest of the UK as well, as the secession of Scotland would affect various aspects of its membership, such as voting rights and the number of members of the EU Parliament. But such negotiations might be completed quite speedily, perhaps even within two years from the referendum. Scotland might have to agree to some conditions that it does not particularly like but that would become clear in negotiation. Goodwill would be the key to success. It is a political rather than a legal issue. However, it is conceivable that, if relations between the UK and other Member States were to deteriorate further as a result of the present UK government's desire to renegotiate the terms of membership, with the threat of an In/Out referendum, the atmosphere of negotiations as regards the future position of Scotland might be quite different, and favourable to Scottish membership, particularly if Scotland indicates its willingness to accept all existing rules and commitments.

**The EU Rebate**

The rebate was negotiated as a result of extreme pressure from Mrs Thatcher, when prime minister, because it appeared that the UK, under the rules that then applied, would contribute a disproportionate share of the revenue for the EU budget and far more than it would get back in payments. At that time, the greatest part of the budget, some 80 per cent, was spent on the Common Agricultural Policy and much of the revenue came from import duties, including on imported agricultural goods. Britain had a relatively small but efficient agriculture that did not qualify for large amounts of support but was a large food importer, especially from the Commonwealth.
Since then, however, the EU budget has expanded; agricultural and other natural resource based support, including a small amount on fishing, is still the largest component of its expenditure but, at 48 per cent in 2011, is much less important than it was, and the structural funds for support to areas in need of development have become a second very important area of spending, amounting to 36 per cent of the total budget. On the revenue side, the bulk of the money now comes from a contribution based on gross national income (GNI). In 2012, this is forecast to contribute 74 per cent of total revenue and the proportion has been steadily increasing, probably to ensure that the costs are not too high for the poorer countries. There is also a VAT contribution, which has steadily reduced as the GNI contribution has increased – it now amounts to only 11 per cent of the total. The traditional own resources, principally from import duties, amount to 15 per cent. The contribution of each country is therefore more closely related to ability to pay than it was at the time Britain's rebate was started.

There are nine countries that contribute more to the EU budget than they get back in receipts (see table). The remaining 18 countries are net beneficiaries. These are the poorer countries, especially those in the eastern part of the EU – Poland for example is a major beneficiary – but including also Spain, Portugal and Greece. They contribute less, because their incomes are lower, but they are also major recipients, because of the importance to them of agriculture and their need for development.

_Net Contributors to the EU Budget in 2011_
_Source: European Commission,_ EU Budget 2011: Financial Report, _Brussels 2012_

Many countries consider the British rebate no longer justified, especially as Britain is one of the wealthiest countries in the EU. In 2005, it was reduced by 25 per cent but, in future budgetary negotiations, it can be expected that there will be pressure from other countries to end it altogether. Indeed, they might use the occasion of a negotiation with Scotland to try to end it not just for Scotland but for the UK. It would, after all, be reasonable at least to adjust it for the remainder of the UK to take account of Scotland's secession.

The amount of each country's net contribution varies by a surprising amount from year to year. According to the EU budget, in 2011 the UK rebate was 3.6 billion euros and, as the table shows, the gross contribution, allowing for the rebate, made by the UK was 13.8 billion euros. This was less than that contributed by France – 19.6 billion euros – or Italy – 16.1 billion – countries with similar populations. Germany, with a larger population, makes the largest contribution – 23.1 billion euros. After receipts, the net figure for the UK is 7.3 billion euros – still less than Germany but more than the other countries. However, on a per head basis, the table shows that the UK contribution is now substantially less than that of the Netherlands, Denmark, Sweden, Finland or Germany. Without the rebate of 3.6 billion euros, the contribution, less receipts, in 2011 would be 10.9 billion euros – still slightly less than Germany – and, on a per head basis, without the rebate, this would still be less than Denmark or the Netherlands and only slightly more than Sweden. Scotland would therefore find it very difficult to get a share of the rebate, if it was negotiating to become a member state of the EU in its own right.
First, because the other countries would see the negotiations as an opportunity to end the rebate for at least part of what had been the UK. But, secondly, the case for the rebate would be less strong than for other parts of the UK. Scotland has had substantial support from the European Structural Funds for areas scheduled under regional policy. In addition, Scotland has a large agricultural area and much of it is classified as Less Favoured Area under the Common Agricultural Policy (CAP), which qualifies for special assistance; it therefore receives a significant amount of support, though perhaps not as much as it should. The Royal Society of Edinburgh Inquiry that I chaired found that Scotland's receipts from the CAP, especially the amount received for environmental projects, were lower than for other countries and lower than we thought justified, mainly because the UK was anxious to restrain growth in the total EU budget. This could perhaps be subject to negotiation in future. But, in view of what it would qualify to receive and the attitude of other member states, any notion that Scotland might be able to retain a share of the UK rebate must be dismissed. Scotland would be a net contributor because, although there are no GNI figures at present, we know that Scotland would be among the wealthier countries of the EU. But the net contribution per head, even without a rebate, would be very unlikely to be as high as for some other small countries, such as Denmark, Sweden and probably Finland.

**Membership of the European Economic Area As an Alternative to the EU**

As discussed earlier, if Scotland had problems negotiating membership of the EU, it seems more likely that this would be for political than legal reasons. But, if that were to happen or if the conditions attached to membership proved unacceptable, how serious a setback would this be? If full membership of the EU was blocked, Scotland could apply for membership of the European Free Trade Association (EFTA), which would give it membership of the European Economic Area (EEA). The EEA countries have unrestricted access to the European single market but have to meet the rules of that market, just as EU members do, and also make contributions to the EU budget. Members of the EEA include Norway, Liechtenstein and Iceland. Switzerland is not a member of the EEA but has bilateral free trade arrangements with the EU.

The main difference between being a full member of the EU and being in the EEA is that EEA countries have no representatives on the EU Commission or Council and no members of the European Parliament; they therefore have no influence on the policies of the EU, although they are subject to them and to all the single market rules, if they are to get unrestricted access to the market. This disadvantage has been a major factor in encouraging countries such as Sweden, Finland and Austria, which were previously members of EFTA, to become full EU members. A major difference between the EU and the EEA countries is that the Common Agricultural and Fisheries Policies do not apply to the EEA. This has been a significant factor in the decision of the Norwegian public to reject EU membership twice in referenda. Fishing is an important industry for Norway, which, although sharing part of the North Sea with EU states, also has its own continental shelf. Its agriculture faces major handicaps of climate and topography and is therefore more highly protected than it would be under the CAP.
These considerations, especially for fishing, are also relevant to Iceland. Exclusion from the CAP would mean that Scotland would have to finance all of its agricultural support itself, and such support would certainly be needed, especially in the hill areas and the islands, if agriculture was to remain viable there. Exclusion from the Common Fisheries Policy (CFP) might seem more attractive to many people, as it is commonly asserted that the policy has been too centralised, generally unsatisfactory for Scotland and has led to depletion of fish stocks. But the main problem with fisheries policy is that the efficiency of modern fishing boats has steadily increased to the point at which their ability to catch fish greatly outstrips the supply of fish in the sea. This is what has caused stocks to become seriously depleted. In an attempt to restrain overfishing, the EU has imposed catch quotas. But, in each year's negotiations at Brussels, Ministers (including Scottish Ministers) are pressed by their country's fishing industries, concerned understandably about their livelihoods, to get the best deal they can. This has meant that the quotas are usually higher than the scientific advice recommends.

Because many people in Scotland regard the CFP as a wasteful failure, there are those who would like to see responsibility repatriated. It is sometimes suggested that this should be one of the subjects in the Prime Minister's proposed renegotiation of the UK's relationship with the EU. The policy certainly needs to become less centralised and some moves to achieve that have already been implemented. It has recently been announced that the practice of discards, whereby substantial quantities of fish are discarded at sea because they are over the allowed quota, is to end. This is very welcome and long overdue. The main difficulty in getting reform is that not all of the fishing nations agree on what should be done. The North Sea, by its nature and geography, is a shared resource between all of the countries with a coastline bordering it. That is how the CFP operates and each country has a quota. If there were not a common policy, major problems would arise if each country tried to operate its own exclusive zone. Fish do not respect international boundaries, so that, to avoid overfishing, some means of limiting the amount of fish caught would have to be applied to all countries sharing that resource. Negotiations to achieve this would be complex and difficult and it is by no means clear that any resulting policy would serve Scotland better.

**What if the UK Leaves the EU?**

Overhanging any discussion of Scotland's future in the EU is, of course, uncertainty about the position of the whole UK. Prime Minister David Cameron has said that, if re-elected, he would intend to renegotiate the terms of UK membership during the next UK Parliament and then put the issue to the country in an in/out referendum on Britain's membership. Judging by the attitude of many Conservative backbenchers and some of the polls, the whole country could be heading towards the exit. The UK's constant ambivalence in its attitude to the EU is trying the patience of other member states and they might not now do much to resist the UK's departure. This does not bode well for a good outcome in a renegotiation. There can be no doubt, however, that the way in which the EU is operating at present is unsatisfactory.
Much of this is the result of extending institutions that were originally designed for six member countries to the enlarged EU of 27. Decisions take too long and are often very difficult, as has been very obvious during the eurozone crisis. Much of the irritation with the EU in Britain, however, stems from the rules for the single market, which seem to many people excessively bureaucratic; this is ironic because the single market has been strongly supported by British governments of both parties. Indeed, the 1992 Programme, part of the Single European Act, was the brainchild of a British EU Commissioner, Lord Cockfield. It requires rules on standards, which can often be complicated, if member countries are to freely admit each other's goods. Otherwise non-tariff barriers, such as differing safety standards, could be a major obstacle to trade. But the irritation, if understandable, is increased by much of the Euro-sceptic press. Euro-scepticism has gained strength in England and can be seen as a form of English nationalism, since it also has a great deal to do with concern over issues of sovereignty.

Whether it is true, as is commonly asserted, that Scotland is less Euro-sceptic than England is not really clear. Some polls have given results that appear to support this. Because of our history, there probably is less concern about sovereignty in Scotland, which lost much of its sovereignty in 1707, than in England, where it is a relatively new concept. But whether this would result in Scotland voting one way in an in/out referendum and England the other is far from certain. I suspect that, if such a referendum is eventually held, the disadvantages of leaving the EU and losing influence over so many policies that affect us would become much clearer and might well result in a decision by the UK to continue membership.

The EU will inevitably change, however, if the members of the eurozone proceed down the path of much closer fiscal and political integration, as seems at present to be the intention. Whether formally recognised or not, that would lead to a two-tier EU, with the members of the eurozone forming an inner core and the other countries, of which the UK is likely to be one, in an outer free-trading periphery. Some of the countries not presently in the euro might aspire to join but equally some of those at present using the single currency might, in the end, decide that the degree of integration envisaged involves a greater loss of sovereignty than they can accept and leave to join the periphery. Recognition of a proper two-tier EU is perhaps inevitable in the end and might make it easier for it to operate effectively, as suggested in some of the proposals for reform.

If Scotland became independent and was accepted as a member of the EU, I would expect it to remain outside the eurozone, for all the reasons already discussed, so that it would be in the outer tier, if there were one. It would not be sensible to try to become a member of the eurozone, so long as it was unclear whether inflation in Scotland could be kept at a rate that was compatible with other countries, especially Germany. Failure to do this would result in the economy becoming increasingly uncompetitive and it is because this has happened in several of its members that the eurozone is in its present difficulties. It must also be doubtful if Scotland, having become independent, would want to surrender the amount of sovereignty that would be expected of a member of the eurozone.
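A rough illustration of the competitiveness point may help. The inflation rates and the ten-year horizon in the sketch below are purely hypothetical figures of my own; only the mechanism – that inside a currency union a persistent inflation gap cannot be offset by the exchange rate – comes from the argument above.

```python
# Illustrative only: how a persistent inflation gap compounds into a loss of
# competitiveness inside a currency union, where the exchange rate can no
# longer adjust. The 2% and 4% inflation rates and the ten-year horizon are
# assumptions chosen for illustration, not figures from the text.

german_inflation = 0.02
scottish_inflation = 0.04   # hypothetical persistent gap
years = 10

relative_price_level = ((1 + scottish_inflation) / (1 + german_inflation)) ** years
print(f"After {years} years, prices would be about "
      f"{(relative_price_level - 1) * 100:.0f}% higher relative to Germany, "
      f"with no exchange rate left to absorb the difference.")
```

On these made-up figures, a two-point inflation gap sustained for a decade leaves prices roughly a fifth higher than the anchor country's, which is broadly the kind of divergence that has crippled several existing eurozone members.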
The really interesting question would be what Scotland's attitude might be if the rest of the UK voted in a referendum to leave the EU but Scotland's population voted to stay in. If there is a UK referendum, it will not be before the next UK Parliament and therefore after the Scottish referendum. Scotland's decision will therefore have been made. But, if, in advance of the Scottish referendum, it seemed likely that the UK was going to leave the EU, that could become a factor in the referendum debate. Those who regard continued EU membership as important and in the best interests of Scotland and its economy might then be more likely to favour independence.

#### 5

#### **Could an Independent Scotland Have Handled the Failure of the Banks?**

The bank crisis of 2008 has done immense damage to the British economy and the consequences have been long lasting. At the time of writing, we are still struggling with its effects and it looks as if it will be a long time yet before a satisfactory rate of growth returns to the economy and the legacy of debt is overcome. It was particularly distressing to those of us living in Scotland that it was Royal Bank of Scotland and Halifax Bank of Scotland (HBOS) that were in the worst state and had to be bailed out by government. Royal Bank is a Scottish bank with headquarters in Edinburgh. It was founded in the 18th century partly, it is said, because of concern that Bank of Scotland was too sympathetic to the Jacobite cause. Bank of Scotland dated from 1695 – before the Act of Union – and had a very proud history. It was the oldest bank in Britain, apart from the Bank of England, which was one year older. As an independent bank, it had enjoyed a reputation for good cautious management and strong growth but had merged with Halifax, the largest of the former building societies, to form HBOS in 2001. Although technically a merger, rather than a takeover, Halifax was by far the larger partner and most of the combined management team including the Chief Executive, James Crosby, and his successor, Andy Hornby, were from Halifax. Bank of Scotland had secured agreement, however, that the headquarters of the combined bank would be at The Mound in Edinburgh.

What happened at the Royal Bank would seem to be mainly due to megalomania on the part of the chief executive, Fred Goodwin, who dominated the management team. The directors too must share some of the blame. The bank had mounted a successful takeover of the much larger English bank, NatWest, outgunning a rival bid from Bank of Scotland, which had started the process. The takeover of NatWest had been a success and had been well conducted so that it yielded considerable savings. It had also extended its business worldwide, with takeovers in other countries, and built a huge new head office at Gogar on the outskirts of Edinburgh. It should have stopped there. It had quite enough to digest and, had it done so, it might not have had to be rescued or at least it would not have required the amount of help that was eventually necessary. But, in October 2007, with a recession in the offing, together with Santander and Fortis, it launched a joint bid for the Dutch bank ABN AMRO, the largest ever bank takeover. This was a disaster both for Royal Bank and for Fortis. ABN AMRO proved to be toxic with many bad loans, which would never be repaid; and a key component, desired by the Royal Bank, was removed from the transaction before the sale.
Coupled with the onset of the recession and the problems of excessive lending that affected almost all banks, Royal Bank had engaged in unduly aggressive and high-risk organic growth. The chief executive had pressed for balance sheet growth and the commercial, corporate and investment banking teams were given incentives to deliver this. This pressure for growth and incentives to achieve it resulted in some bad deals rather than careful assessment of risk. All of this brought the bank to the point of insolvency so that it had to be rescued with a £40 billion injection of taxpayers' money from government. So Royal Bank, which had briefly been the largest bank in the world, with outstanding loans exceeding, by 40 per cent, not just the Scottish but the UK GDP, became semi-nationalised. The government injected capital by taking a 60 per cent shareholding, which later had to be raised to 80 per cent, when more funds were required. Bank of Scotland's merger with Halifax in 2001 came about for two reasons, which are well explained in Ray Perman's recent book. The Bank could claim to have had the most successful record of all British banks, both in terms of growth and return on shares. But, following financial globalisation and the free-for-all climate that was the product of the government's 'Big Bang' in the 1980s, which ended the previous regulatory restrictions, it felt vulnerable to takeover because of its relatively small size. Maybe it was less of a takeover target than it imagined, as it would have been hard for any other management to run it better or to generate substantial savings. But it felt that to survive it had to grow. The problem was that, although it could increase the size of its loan book, it could not get its deposit base to grow sufficiently to support it without having to rely on wholesale finance from elsewhere in the sector. Originally banks had based all of their lending on their deposits. And, had they continued to do that, there would have been no banking crisis. But the Big Bang in Britain and the repeal of the Glass-Steagall Act in the United States had meant that retail banks could move into investment activities that had hitherto been the province of merchant banks in Britain or investment banks in the United States. This was not a feature of Bank of Scotland or HBOS. It was a much riskier activity but potentially also much more profitable than retail banking and it made takeovers in the banking sector much easier and more likely – hence what Bank of Scotland management saw as a threat. To help it to grow, without being too dependent on wholesale finance, Bank of Scotland first attempted a merger with the former building society, Abbey National. This proved abortive when it was impossible to agree terms. The irony is that had the Bank only waited, it could have taken over Abbey National with a hostile bid on its own terms, as it was not long before the latter got into trouble. Instead, Abbey was taken over by Santander. However, having failed in its bid for NatWest and also in its attempted merger with Abbey National, the Bank felt that it had become exposed and that the chances that it would itself be the subject of a takeover had increased. Accordingly, when the possibility of a merger with Halifax arose, it seemed attractive. Halifax had demutualised to become a bank. It had a huge mortgage book, with about a third of the total mortgages in the UK, and this seemed to Bank of Scotland to offer a deposit base on which profitable lending could be increased. 
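The funding problem described here – a loan book that outruns the retail deposit base, with the gap bridged from the wholesale markets – can be illustrated with a small sketch. The starting figures and growth rates below are invented for illustration only; nothing in this chapter gives the actual balance sheet numbers.

```python
# A minimal sketch of the wholesale funding gap: lending not covered by retail
# deposits has to be financed in the wholesale markets and rolled over there.
# The numbers are hypothetical; only the mechanism comes from the text.

def wholesale_funding_gap(loans, deposits):
    """Lending not covered by retail deposits must be funded wholesale."""
    return max(loans - deposits, 0.0)

deposits = 100.0          # hypothetical retail deposit base (index, year 0)
loans = 100.0
for year in range(1, 6):
    loans *= 1.15         # assume the loan book grows 15% a year
    deposits *= 1.05      # assume deposits grow only 5% a year
    gap = wholesale_funding_gap(loans, deposits)
    print(f"year {year}: loans {loans:.0f}, deposits {deposits:.0f}, "
          f"wholesale funding needed {gap:.0f}")
```

On these made-up figures the gap that has to be rolled over in the wholesale markets reaches almost three-quarters of the original deposit base within five years; it was exactly this kind of gap that could no longer be refinanced when the wholesale market dried up.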
Looking back with the benefit of hindsight at what happened to the world's banking sector in the period before 2008, some of what was done then seems hard to credit. How could institutions, whose credibility was based on their sound judgement in handling people's money, have got into such a mess? But, as with previous financial bubbles, few people saw it coming or had any understanding of the danger they were in. The securitisation of mortgages, whereby lenders, instead of keeping outstanding mortgages on their books as assets, had been able to divide them, package them and sell them on the world market as bonds, had made possible a huge housing boom. There was, however, a fatal flaw – once a mortgage had been securitised and sold, the original lender no longer needed to be concerned about the borrower's ability to meet the interest cost or eventually to repay the loan. This is what probably led mortgage companies to extend their lending to people in the United States who would never be able to pay – the so-called 'subprime' mortgages. In the UK too, banks – the extreme example being Northern Rock – and also some building societies began to lend recklessly, sometimes agreeing to a loan of a value even higher than that of the underlying asset, let alone what that asset – a house or a flat – might be worth in a falling market. Banks also began to accept self-certification of income from mortgage applicants. People with self-certified mortgages that had been inadequately checked or with loans with a value that was higher than that of the asset on which they were based, would be in serious trouble if they became unemployed or if interest rates rose. The original lender became even more removed from worry about ability to pay interest or redeem their loans when these securities were further sliced, diced and repackaged to form 'collateralised debt obligations' (CDOs). CDOs could themselves be further subdivided and repackaged into what were called 'CDOs squared'. By the time all this repackaging and subdividing had taken place, the risk on such bonds was widely spread. This was hailed by many people as an advantage, since any risk was well diluted and spread. But the fact was that few people understood the nature of these securities and no one dealing in them was able to assess the degree of risk they contained. Indeed, Robert Peston, in the excellent book _How Do We Fix This Mess?_ , says that to assess the risk involved in all the mortgages that were contained in a CDO squared would take a diligent person seven years of constant reading if one assumes that there were 150 mortgages in every CDO involving a prospectus of 200 pages and 300 pages in every prospectus for a CDO squared. Even the ratings agencies – Moody's, Fitch and Standard & Poor's – could not do this. They seem to have been taken in by those who claimed that risk was negligible because it was so widely spread. Amazingly, they gave triple-A ratings, the highest available, to these securities. The banks used these securitised assets to augment their deposits as a base for their lending and, because they were so highly rated, they did not have to be covered with a substantial amount of capital, when it came to satisfying the international requirements for capital adequacy set out in the Basel agreements at the Bank for International Settlements. This is what led to disaster. 
It never seems to have occurred to those responsible that, at some point, these assets might not be able to be sold or might be sold only at a heavily depreciated price. Once the markets realised that many of these securitised assets were toxic, their value plummeted and they became impossible to sell. While Bank of Scotland had seen attractions in the merger with Halifax, because it offered the prospect of increased lending using the large Halifax deposits as a base, the new management of the merged bank went much further. The emphasis was not on assessing risk but on selling more and more loans. It sold their mortgages as securitised assets and extended lending far beyond the deposit base. Lending was very profitable and so it was encouraged, not only to expand what was already the largest mortgage book of any of the UK lenders, but especially in the commercial loan sector where Peter Cummings was in charge. He was a Bank of Scotland rather than a Halifax man but, encouraged by senior management, he excelled in increasing commercial lending. The inevitable result was that, when the wholesale market dried up, HBOS became spectacularly insolvent and had to be rescued in a rushed merger with Lloyds TSB in which the government took a 43 per cent shareholding. When, in 2009, it became apparent that the HBOS losses were much larger than previously thought, the government announced that it would increase its stake in Lloyds Banking Group to 65 per cent. After the crash, Cummings was severely criticised by the Financial Services Authority (FSA), disqualified from working in banking and made to pay a fine of £500,000. He would argue that it was successive chief executives, James Crosby and Andy Hornby, both of whom were from Halifax, who pressed him to increase lending by so much. It may be a mistake to lay all the blame on Cummings because many would argue that he would not have been able to do what he did if the old Bank of Scotland management had been in control. So what might the government of an independent Scotland have done about this? There are those who will argue that a Scottish government and Scottish regulator would never have allowed the Scottish banks to get into such an overextended state. Well, maybe. But the Scottish First Minister was encouraging Fred Goodwin, chief executive of Royal Bank, a bank in which he himself had once worked, right up to the last. And being independent and having their own regulator did not stop Ireland, Iceland or Spain from getting into a similar mess. So a Scottish regulator would have had to be endowed with better foresight than any of the counterparts in these other countries. The fact is that almost no one foresaw the crisis developing in the way that it did. Assuming that the RBS takeover of NatWest and Bank of Scotland's merger with Halifax had taken place, the crisis would obviously have required intensive discussions between an independent Scottish government and the government of the remainder of the UK to agree a joint plan. The rescue of Fortis in the three Benelux countries is sometimes quoted as an example of the sort of rescue that would have had to take place. Fortis was a conglomerate – its banking operations included both commercial and investment banking but it also had a substantial insurance business. It was split up after the catastrophic attempt to take over ABN AMRO. The Dutch government nationalised the banking and insurance subsidiaries in the Netherlands. 
The Dutch banking business was renamed ABN AMRO and the insurance business was split off as ASR Nederland. The Belgian government eventually sold much of the remainder of the banking business to the French bank BNP Paribas. The rest of the insurance business, which was substantial in Belgium, remained with Fortis but its name was changed to Ageas. Discussions between the governments were far from easy, involving a good deal of argument to get a fair division of the assets, and this led to several attempts by shareholders to take legal action before the deal was finally settled. A Scottish government would have had to be involved in similar discussions with the government of the remainder of the UK and there would be room for much disagreement, especially over the division of assets abroad. Probably the Scottish parts of HBOS (the old Bank of Scotland) and Royal Bank would have had to be nationalised at considerable cost, leaving the government of the rest of the UK to deal, in whatever way it chose to, with the parts of the two banks in England and Wales. International subsidiaries would probably have been put up for sale. Much would obviously have depended on whether an independent Scotland was still using sterling, as the SNP government have stated to be their policy, and, if so, whether the Bank of England remained lender of last resort and the UK Financial Services Authority or the Bank of England was still responsible for regulating banks in Scotland. Since all of this is unknown, it is not very helpful to speculate. However there are some points that can be made. If there was a separate Scottish bank regulator – and that would seem to be a requirement for a member state of the EU – responsibility for the activities of the Scottish banks and indemnification of any losses made by depositors would be in accordance with whatever Scottish bank insurance scheme was operated and ultimately with the Scottish government. But, if the operations of Scottish banks in England and Wales were carried out through subsidiary companies rather than merely branches, these subsidiaries would have to be subject to the regulator for the rest of the UK. Compensation of depositors in these subsidiaries would then be the responsibility of the insurance scheme operated in the rest of the UK and would ultimately lie with the UK government. This became an issue with the collapse of the Icelandic banks, Glitnir, Landsbanki and Kaupthing, which had expanded recklessly during the boom years to the point where their combined debt exceeded by six times the annual output of Iceland's economy. The depositors' insurance scheme for Iceland's banks gave protection up to 20,000 euros but it was inadequately funded to meet the costs of compensation when the banks collapsed. Icelandic depositors were compensated but those with deposits in Icesave, an online branch of Landsbanki operating outside Iceland, were not. The British and Dutch governments decided to fully protect retail depositors with accounts in Icesave and then tried to reclaim from Iceland the cost that the Icelandic insurance scheme should have covered. This amounted to 3.9 billion euros which was close to 50 per cent of Iceland's reduced GDP. To impose such a burden on Icelandic taxpayers, especially when the Icelandic population was less than that of the city of Edinburgh, was clearly unaffordable. A deal was proposed which involved payment over 15 years at an interest rate of 5.5 per cent. This was rejected by Iceland's population in a referendum. 
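The underlying rule here – that depositors in a foreign branch fall back on the home country's scheme, while depositors in a locally incorporated and locally regulated subsidiary fall on the host country's – can be reduced to a very simple sketch. It is a simplification of the position described above, not a statement of the actual EU/EEA directives, and the Scottish case at the end is, of course, hypothetical.

```python
# Schematic only: which country's deposit-guarantee scheme bears the cost
# depends on whether a bank operates abroad through a branch or through a
# locally regulated subsidiary. A deliberate simplification of the rule
# discussed in the text.

def responsible_scheme(structure: str, home_state: str, host_state: str) -> str:
    """Branch -> home-state scheme pays; subsidiary -> host-state scheme pays."""
    if structure == "branch":
        return home_state
    if structure == "subsidiary":
        return host_state
    raise ValueError("structure must be 'branch' or 'subsidiary'")

# Icesave was an online branch of Landsbanki, so the liability fell on
# Iceland's scheme (and ultimately its taxpayers).
print(responsible_scheme("branch", "Iceland", "UK"))
# A Scottish bank serving English customers through a separately regulated
# subsidiary would fall on the rest-of-UK scheme (hypothetical case).
print(responsible_scheme("subsidiary", "Scotland", "rest of UK"))
```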
One can imagine that there would be a similar reaction from the Scottish population if they had been asked to meet the costs of compensating English and Welsh deposit holders of Scottish banks operating in the rest of the UK after they had recklessly expanded. Iceland has been accused, before the EFTA Surveillance Authority, of not meeting the requirements of a European Economic Area directive aimed at ensuring that bank deposits were properly covered by insurance; it has also been accused of discrimination because Icelandic depositors were compensated but those abroad were not. However another Icelandic bank, Kaupthing, owned the UK bank Singer and Friedlander as a subsidiary company. No claim could be made on Icelandic taxpayers for compensation in this case since it was regulated by the UK authorities and any compensation required would be a charge met by the UK insurance scheme. What would have happened had Scotland been independent at the time of the banking crisis would therefore have depended on how the banks were organised and regulated at the time. Had NatWest and Halifax been set up as subsidiary companies for which the regulating authority was for the rest of the UK, there would have been a charge on the UK bank insurance scheme and no liability on the Scottish scheme or on Scottish taxpayers. But, had there been branches in England of the Scottish banks, the same liability would have arisen as affected Iceland. The Scottish insurance scheme would have had to pay and, if that was inadequate, there would have been recourse to the Scottish government and ultimately to Scottish taxpayers. But compensating depositors of failed banks is not the only issue. As what has happened in Ireland demonstrates, the banks have other major obligations and costs. If a bank fails, the shareholders stand to lose their money, but what about those who hold bank debt in the form of bonds? In the global financial market, these bonds are held very widely across the world by other banks or by institutions such as pension funds. The Irish government guaranteed all these liabilities and it is said they were pressed to do so by the European Commission, because of the knock-on effects on banks and institutions such as pension funds all across Europe if they did not do so. But it proved a major mistake. Bonds, after all, should not be regarded as risk free – that is why the interest on bank and company bonds is higher than on gilt-edged securities. To have simply allowed the banks to go bankrupt would have caused a lot of people to lose money but it would probably have been preferable for Ireland. The costs of meeting the banks' liabilities were much higher than the Irish government had estimated and overwhelmed Irish state finances. Although Ireland's government had been in a strong financial position with a surplus on its budget before the crisis and its debt in relation to GDP one of the lowest in Europe, the guarantee of the banks' liabilities imposed such a heavy burden that the government had to seek a bailout from the IMF, the European Central Bank and the European Union. I believe that, had Scotland been an independent state in 2008, it would not have been able to cope with the losses incurred by its banks, whatever arrangements had been put in place, and, even if the banks and the Scottish authorities had had the foresight to ensure that operations outside Scotland were conducted by subsidiaries regulated in those countries, they would have had to face the same problems as Ireland. 
Indeed, none of the Irish banks were on the scale of Royal Bank or HBOS. If this had happened – and I think it would have – the Scottish government's finances would have been overwhelmed and, like Ireland, it would have had to seek a bailout from international organisations. But what is past is past. So what are the lessons to be learnt for the future? Clearly other countries have to learn lessons too. One obvious lesson is that it is dangerous for banks to grow so large in a small country that they cannot be supported if they fail. Another is that normal retail banking should not be put at risk by investment activities that, however profitable, could jeopardise the viability of the bank. The UK government set up the Vickers Commission to recommend action that needs to be taken. It has recommended increased levels of capital to support lending so that banks have a larger cushion against insolvency. This is clearly important and welcome. The commission also recommended ring-fencing the investment activities of banks to keep them separate from retail banking. Others, including the Parliamentary Commission on Banking Standards, chaired by Andrew Tyrie, have questioned whether this goes far enough. It appears that they would like to see the ring-fence 'electrified' and the legislation contain a reserve power for complete separation. The intention is that investment or 'casino' banking should be sufficiently separate so that it could not put ordinary retail banking at risk and, if necessary, be allowed to fail in a crisis. A large combined bank cannot be allowed to fail because of the dire consequences for the whole economy. In that situation, investment bankers get huge bonuses if they do well but, if they do badly, there is an unwritten guarantee from government that they will not be allowed to fail. My own view coincides with that of the Parliamentary Commission – that the Vickers' recommendation, despite being unpopular with the banks, does not go far enough. Scotland has a substantial financial sector, employing a lot of people. Its banks were one of the features that made Scotland distinctive and they have been a source of much pride in the past. That is why the failure of the two large banks was so keenly felt. The first lesson must be that there are grave dangers in having banks that have so clearly out-grown the size of the economy, as was evident in Iceland. Many people take the view that Britain's banking sector, even against the size of the UK economy, is too large to the point at which it poses a risk. So, if Scotland becomes independent, it should aim at having a banking sector that would not overwhelm the economy if things go wrong. There also needs to be much tighter control of credit. As Robert Peston shows, after the banks started using mortgage-backed securities to increase their lending, they reached a point where there was a shortage of borrowers – banks were more or less throwing loans at people, accepting self-certification for mortgages, as well as offering mortgages at very high loan-to-value ratios. Credit was rising at a much faster rate than the growth of the economy whereas, before Big Bang, it had risen at approximately the same rate. So long as it lasted, lending was highly profitable. Now the situation is in reverse as the banks try to rebuild their balance sheets. An aspect of this that has not yet been addressed but which, in my view, is very important is the connection between lending and housing policy. 
It is no accident that, in the UK, Ireland and Spain, all countries that got into serious difficulty, the proportion of home ownership in the housing stock was extremely high. It is now about 70 per cent in the UK and, although it used to be much lower in Scotland, the gap has almost disappeared with owner occupation about 66 per cent of the total stock. In Germany, on the other hand, less than half of the housing stock is in owner-occupation and it is even lower in Switzerland. In France it is not much over 50 per cent.

In Britain, since the 1980s, home ownership has been strongly encouraged by government and Right to Buy on heavily discounted terms has resulted in tenants buying much of the local authority stock. There has also been a remarkable housing boom, only made possible by the great expansion of bank lending. This long-lasting boom led to a tripling of house prices in the decade and a half up to the peak in 2007. People came to expect house prices always to rise and to rise faster than inflation or earnings. It had not always been so. At the end of the 1980s, there was a sharp drop in house prices for several years. But that was forgotten and people came to think that a house was an investment on which one could not lose and that it was worth taking out a huge mortgage to provide funds for other things, such as an expensive holiday or luxuries of various kinds, because, with house prices going up, there would never be any difficulty in paying it off.

Obviously people aspire to own their homes. It is the preferred form of tenure for most people. But it is dangerous if people are encouraged to take on the burdens of ownership when they cannot really afford it or will be unable to afford it if interest rates rise, as inevitably they do from time to time. Despite interest rates being very low at present, the newspapers are reporting that a high proportion of owners are in difficulty with their mortgages. Some are paying interest only, having stopped the element of repayment. Others have difficulty even with the interest. The threat of possible repossession causes a lot of pain and anguish.

Other countries have a much larger, properly regulated rented sector – some of it social rented from housing associations and some of it privately rented. The former is at below commercial rents with an element of subsidy; the latter is market determined. In both cases, the landlord takes responsibility for maintenance, which can involve major unexpected costs that an owner-occupier in financial difficulty could find it hard to afford. For those on low incomes whether in work or unemployed, those who are disabled and have little or no income and people who fall on really hard times, Housing Benefit is available, along with other benefits. It is a feature of any modern society that there is a section of the population that cannot afford homes of a standard that society considers acceptable. In the old days, that led to slums and, to get rid of slums, we built council housing. The quality of some council housing, although better, also left a lot to be desired. The experience of Europe, especially in Scandinavia but also now in Scotland, seems to show that social housing can be best provided through housing associations. But there also needs to be a strong private rented sector of good standard for people who may have to move often or who do not yet feel able financially to take on the responsibilities of ownership. Much of the excessive lending by banks has been connected with housing debt.
It is no kindness to encourage people who cannot afford it into home ownership. It causes hardship for the people concerned and ultimately, when there are defaults, serious damage to the lenders, with consequences for the whole economy.

So an independent Scotland would have to think hard about its banks: what size the financial sector should be; how it should be regulated; whether that requires a Scottish regulator and a Scottish lender of last resort; and how, if it has activities in other countries, they should be structured so as not to cause an insupportable burden, should they fail. I see no evidence that the Scottish government has done much of this thinking so far. But the connection with housing policy is important, because it has accounted for so much of the excessive lending. Any thinking on the future of banking in Scotland therefore needs to be coupled with a reappraisal of housing policy to consider how the population can have a good quality of housing without the pain of being stretched beyond what individuals can afford and of how this can be provided without the financial sector taking risks in lending that put the prosperity of the country at risk.

#### 6

#### **Scotland's Energy Future**

Under the Scotland Act of 1998, which established devolution, energy policy is a reserved matter for the UK government but the Scottish government has responsibility for planning decisions. This means that it could refuse permission for new developments, such as a nuclear power station, and that planning decisions for developments such as wind farms, though initially for local government, rest ultimately with Scottish Ministers. They also have responsibility for an agreed proportion of electricity to be generated from renewable sources under the Renewables Obligation (Scotland). This division of responsibility may appear unsatisfactory and is probably not understood by many people but appears to have worked quite well. Certainly the Scottish government has developed its own view on energy policy and, as planning decisions are involved in most aspects of development, such as wind farms, it is right for them to do so.

In the summer of 2011, there was a debate in Edinburgh organised by _The Spectator_. The subject was 'Scotland's Energy Policy is just Hot Air'. There were speakers for and against this motion but a large majority at the end of the debate supported the motion. There were many who criticised the Scottish government's policy but little was said about the responsibility of UK Ministers. What was particularly evident in the debate was concern – and, indeed, hostility – about the effect of the widespread development of wind farms on Scotland's landscape. It also seemed that there was a good deal of scepticism about climate change.

My own view on climate change is quite simple. I am not a climate scientist but I respect the overwhelming view of those experts who study the subject. This is that climate change is real and that, although there have been major changes in climate over a long period, the only plausible factor that can account for the changes that have been observed over the last century or so is the increased release of greenhouse gases caused by human activity. Apart from changes in temperature, it is also predicted that there will be more extreme weather events and certainly experience of recent years seems to bear that out.
In a major report published in 2006, the Scottish Environment Protection Agency (SEPA) found that, since 1961, there had been significant changes in Scotland's weather – Scotland had become much wetter. Although there have been large variations from year to year, there had been an increase in average winter precipitation of 60 per cent in the north and west and an average annual increase for the whole country of 20 per cent. Some parts of north-west Scotland had become up to 45 per cent drier in summer. The average period of snow cover had decreased over 40 years as a result of milder autumn and spring temperatures. The sea level around Scotland had risen and the seas had warmed by 1 degree Celsius over the last 20 years, causing changes in the abundance and distribution of marine species. A successor to this report has not been published but the findings of other studies confirm the changes that are taking place and SEPA has published its own Climate Change Plan. There are still quite a number of climate change deniers. But, whether one accepts the findings or not – and I do – the issue is of such importance and the consequences potentially so serious that it would be foolish to do nothing. Lord Nicholas Stern in his important review for the government argued that, while policies to try to halt climate change were expensive, the consequences of doing nothing could be catastrophic for the world and would cost a great deal more. I therefore welcome the Scottish government's targets for replacing dependence on fossil fuels with renewable forms of energy. These targets have been revised and are ambitious. At a conference in Glasgow in October 2012, the First Minister said that it was now intended to have 50 per cent of Scotland's electricity demand supplied from renewables by 2015. He felt it was possible to achieve this because Scotland had met 35 per cent of demand from renewables in 2011, as against a target for that year of 31 per cent. The aim is now to meet the equivalent of 100 per cent of Scotland's electricity demand from renewable sources by 2020, and the Scottish parliament has passed legislation requiring greenhouse gas emissions to be cut by 42 per cent by the same date. Scotland is well endowed with a wide variety of energy sources. Coal production provided the energy for the industrial revolution in the 18th and 19th centuries. At its peak, the output of coal was over 20 million tonnes a year but it is the most carbon-emitting form of energy and all of the deep mines in Scotland are now closed. However, a considerable amount of coal – some 6 million tonnes a year – is still produced from opencast workings and is used in Scottish power stations. North Sea oil and gas are both now past their peak production and are expected to continue to decline but the output is still substantial, as we will see in Chapter 7. Imports to the UK from other sources are now growing and, in the case of gas, now account for about half of the supplies in the UK; but the output of both from the North Sea still greatly exceeds Scotland's requirements and will remain significant for at least another generation. The Royal Society of Edinburgh in its major _Inquiry into Energy Issues for Scotland_ found that 34 per cent of total energy was required for domestic use, some 28 per cent for transport, 21 per cent for industry and 16 per cent for services. 
**Electricity Generation**

About a quarter of energy used in Scotland is required to make electricity but, as Table 1 shows, a substantial 23 per cent is exported through the interconnector to England and a further 3.5 per cent to Northern Ireland. So Scotland is a major exporter of electricity with more than a quarter of its output going to other parts of the UK. In 2011, 33 per cent of Scotland's electricity was generated by nuclear power, 21 per cent by coal, 15.7 per cent by gas, 27 per cent by renewables (hydro 10.4 per cent and other renewables 16.4 per cent) and 2.3 per cent by oil.

Most of the electricity produced is now generated in four large power stations with a combined capacity of 7,229 megawatts (MW): one coal-fired – Longannet; two nuclear – Hunterston B and Torness; and one gas-fired at Peterhead (see Table 2). Most of these are now quite old, the most recent being Torness, which was commissioned in 1988. Cockenzie, a coal-fired power station which had a capacity of 1,200 MW, no longer met EU emission standards and closed in March 2013, but it may get a new lease of life if Scottish Power's plans for a new gas-fired plant on the site are implemented. The largest plant, Longannet, is more than 30 years old but it has been upgraded with flue gas desulphurisation scrubbers to reduce emissions of SO2 and NOx, the gases responsible for acid rain; it has also been adapted to enable gas to replace 20 per cent of the coal and will burn environmental waste composed of heat-treated dried sewage sludge. Hunterston B has just had its life extended to 2023 while Torness can also be expected to have its life extended in due course. Peterhead, which was repowered with increased capacity in 2000, should have considerable life left too. So it is likely that, one way or another and in contrast to what was expected till recently, all five plants (if a rebuilt Cockenzie goes ahead) could remain operational for many years yet.

**Table 1** _Scottish Electricity Generation and Use 2011_

| | **GWh** | **percentage** |
|---|---|---|
| Coal | 10,779 | 21.0 |
| Gas | 8,052 | 15.7 |
| Oil | 1,156 | 2.3 |
| Nuclear | 16,892 | 33.0 |
| Hydro | 5,936 | 10.4 |
| Wind, wave and solar | 6,992 | 13.7 |
| Other renewables | 1,404 | 2.7 |
| Waste | 12 | 1.2 |
| **Gross Total Supply** | **51,223** | **100** |
| Pumped storage and own use by major generators | -2,924 | 5.7 |
| Own use by other generators | -353 | 0.6 |
| Transmission and Distribution losses | -2,444 | 4.8 |
| **Net total supply** | **45,502** | **88.8** |
| Exports to England | -11,597 | 22.6 |
| Exports to Ireland | -1,769 | 3.5 |
| Supplied to Scottish consumers | 32,136 | 67.5 |

_Source: Department of Energy and Climate Change,_ Energy Trends _, December 2012_

**Table 2** _Electricity Generating Capacity (Main Producers) 2011_

| | | **Capacity in MW** |
|---|---|---|
| _Coal_ | Longannet | 2,400 |
| | Cockenzie | 1,200 |
| _Gas_ | Peterhead | 2,177 |
| _Nuclear_ | Hunterston B | 1,288 |
| | Torness | 1,364 |
| _Hydro_ | Natural flow | 1,489 |
| | Pumped storage | 720 |
| _Wind and wave_ | | 3,016 |
| _Other renewables_ | | 305 |
| _Total renewables_ | | 4,810 |

Note: Figures are for maximum capacity, which will be greater than capacity available at any one time.

_Source: Department of Energy and Climate Change,_ Energy Trends _, December 2012_

In 2011, there was 5,685 MW of capacity in renewable power stations, an increase of 1,042 MW or 22 per cent on the previous year. But this capacity is not available all the time – it depends on the amount of water for hydro and wind for wind farms.
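The percentages in Table 1 are each flow expressed as a share of gross supply. As a quick check on the figures quoted above, the sketch below recomputes a few of them from the GWh column; all the numbers are taken directly from the table.

```python
# Recomputing a few Table 1 percentages from the GWh figures.
# All values are from the table above; shares are of gross supply.
gross_supply_gwh = 51_223

generation = {
    "Coal": 10_779,
    "Gas": 8_052,
    "Nuclear": 16_892,
    "Hydro": 5_936,
    "Wind, wave and solar": 6_992,
}
exports = {"England": 11_597, "Ireland": 1_769}

for source, gwh in generation.items():
    print(f"{source:22s} {gwh / gross_supply_gwh:5.1%}")
for destination, gwh in exports.items():
    print(f"Exports to {destination:11s} {gwh / gross_supply_gwh:5.1%}")
# Nuclear comes out at about 33% and coal at 21%, while exports to
# England are roughly 22.6% of gross supply -- the "substantial
# 23 per cent" referred to in the text.
```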
The hydro stations, since they were first developed, have been a major economic benefit to Scotland. Unlike the renewables now being developed, they have not required subsidy. They are individually much smaller than the five large fossil-fired and nuclear stations, ranging from just a few megawatts to over 100 MW. Combined, they have a considerable capacity but many of them have insufficient water resource to run their turbines all the time and are designed to store up the water to use it to run their turbines to meet peak demand. For this, they are extremely valuable, particularly in complementing the nuclear stations, which are base-load stations that cannot readily be turned off and on. The fossil-fired stations are more flexible than nuclear, especially Peterhead which burns gas, but they still take a considerable time to start up or to close down. Demand for electricity is highly variable, depending on time of day, time of year and temperatures. Moreover, a key feature of electricity is that it cannot easily be stored, so that there has to be capacity to meet the highest expected peak demand. The flexibility of the hydro stations is therefore of great benefit to the whole system. To the natural flow hydro stations, most of which date from the 1940s and 1950s, has recently been added the large Glendoe plant with a capacity of 100 MW. In addition to these are the two pumped storage schemes at Cruachan and Foyers with a combined output of 700 MW and an ability to store the equivalent of 1,510 MW in the form of their water resource. These do not add to the total supply but use off-peak electricity to replenish their reservoirs so that they are available to generate at full power during peak periods. Scottish and Southern Energy have recently announced plans for three more pumped storage schemes, including two large ones in the Great Glen at Loch Lochy and Invermoriston. Together, these will add a further 900 MW of capacity. They also intend to convert the existing Loch Sloy scheme at Loch Lomond to provide a further 60 MW of pumped storage capacity. No power station has a 100 per cent load factor. There have been major and sometimes prolonged outages at the nuclear plants and even the fossil-fired stations have to have their boilers shut down for maintenance. But, in the renewable sector, the load factor is much lower. The Scottish hydro stations have a load factor of about 45 per cent simply because there is not sufficient water in the reservoirs to run the turbines all the time. But this is manageable. Installed capacity is deliberately more than can be run full time. Its output can therefore be planned and, although rainfall varies, there is enough in Scotland, especially in winter, to generate as much power as is required. Wind power is much less predictable. The load factor for onshore wind in Scotland in 2011 was 27.4 per cent, only marginally better than in England. For offshore wind, it was higher, 35.8 per cent but, as yet, there are relatively few offshore wind farms. Very often, when the weather is coldest in winter, an anticyclone over the country means there is very little wind. This, coupled with the major impact that wind farms have on the landscape and the subsidy that is still required, has led a lot of people to question the value of investment in wind energy. It means too that other forms of energy have to be available as backup; that requires investment in plant which may be only intermittently required. 
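Load factor, as used above, is simply the energy a station actually generates over a year as a proportion of what its installed capacity could produce if it ran flat out for all 8,760 hours. A minimal sketch using the 2011 figures from Tables 1 and 2 recovers the 'about 45 per cent' quoted for the hydro stations:

```python
# Load factor = actual annual output / (capacity x hours in a year).
# Figures are the 2011 values from Tables 1 and 2 above.
HOURS_PER_YEAR = 8_760

def load_factor(output_gwh: float, capacity_mw: float) -> float:
    potential_gwh = capacity_mw * HOURS_PER_YEAR / 1_000  # convert MWh to GWh
    return output_gwh / potential_gwh

print(f"Hydro (natural flow): {load_factor(5_936, 1_489):.1%}")
print(f"Nuclear:              {load_factor(16_892, 1_288 + 1_364):.1%}")
# Hydro comes out at about 45.5%, matching the "about 45 per cent" in
# the text; the nuclear stations ran at roughly 73% in 2011.
```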
The subsidy for wind power is provided through Renewables Obligation Certificates (ROCs) and the Climate Change Levy. These require the power companies to meet targets for the generation of renewable energy but, inevitably, that puts up the cost of the electricity supply that has to be met by consumers. Since Scotland has much more wind power than either England or Wales, it follows that consumers in England and Wales are meeting part of the subsidy for wind energy in Scotland. This is an important issue for the independence debate. A Scottish government might find that English and Welsh consumers were unwilling to subsidise Scottish wind power electricity if Scotland was a separate country. This, however, might depend on whether the rest of the UK was able to meet its renewable energy targets by some other means or, despite the requirements of clean energy, decided to ignore them. The response to this is usually that England and Wales will still want electricity exports from Scotland. But, if their own supplies of electricity cannot cover their needs, they would seek the cheapest supplies available. This might still be electricity from Scotland but, if they thought that too expensive, they would have the options of additional investment in England and Wales or of importing supplies through the interconnectors with the Continent.

However, by the time Scotland becomes independent, if it does, onshore wind power could be virtually economic without subsidy. Bloomberg New Energy Finance has forecast that onshore wind power will be economic by the second half of this decade, as a result of economies of scale and improvements in technology. This accords also with the view given by various experts in evidence to the House of Commons Select Committee on Energy and Climate Change. But much will depend on how prices move for other forms of energy, an issue which I touch on later. Offshore wind generally causes people (with the exception of Donald Trump) less environmental concern over its visual impact and has a better load factor but is, at present, much more costly. Here too costs are likely to come down in time as technology develops but, for the foreseeable future, investment in offshore wind is unlikely without significant subsidy.

In the meantime, there are further substantial developments planned for wind farms. If the aim were to replace the two nuclear stations with wind farms as they come to the end of their lives, a great deal of additional capacity would be required. Although the present wind and wave capacity (3,016 MW) is theoretically greater than that of the two nuclear stations (2,653 MW), with a load factor of only 27 per cent, it would be far from enough. Just to match the annual output of the nuclear stations, something like a tripling or quadrupling of wind capacity would be needed (a rough calculation is sketched below). It is claimed that Scotland has potential for 11,500 MW of onshore wind farm capacity and even more, about double this amount, offshore where the load factor is better but the cost is higher. But these figures for potential capacity take no account of the intermittent nature of the supply. Obviously a lot of Scotland would be covered with wind farms and the present hostility to them would, understandably, become very much more intense. There is also concern about their effect on tourism because of their impact on the landscape. These worries may well be justified but they could become much more serious if there is a huge amount of additional development.
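The rough calculation referred to above can be done directly from the nuclear stations' annual output and the load factors already quoted. The sketch below is purely illustrative: it uses the 2011 nuclear output from Table 1 and the 27.4 per cent onshore load factor, and it takes no account of the margin needed because the wind does not blow when power is wanted, which is why the text speaks of a tripling or quadrupling rather than the bare arithmetic answer.

```python
# How much wind capacity would be needed to generate, over a year, the
# same energy as the two nuclear stations? Illustrative figures only,
# taken from the tables and load factors quoted in the text.
HOURS_PER_YEAR = 8_760

nuclear_output_gwh = 16_892        # nuclear generation in 2011 (Table 1)
onshore_load_factor = 0.274        # Scottish onshore wind, 2011
existing_wind_wave_mw = 3_016      # wind and wave capacity (Table 2)

required_mw = nuclear_output_gwh * 1_000 / (HOURS_PER_YEAR * onshore_load_factor)
print(f"Wind capacity to match nuclear output: {required_mw:,.0f} MW")
print(f"Multiple of existing wind and wave capacity: {required_mw / existing_wind_wave_mw:.1f}x")
# Roughly 7,000 MW -- well over double today's wind and wave capacity --
# before allowing anything for backup or for hours when the wind fails.
```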
Development on that scale would create strong pressure on politicians and local planning authorities to oppose it. In my view, it is essential, if wind farm development is to be acceptable, for local communities to get benefit from it. Here the small development of three turbines by the island community of Gigha is an interesting example of how this can be done. The development by Viking Energy, a joint venture of Scottish and Southern Energy and the Shetland Charitable Trust, is another. This is a large development and it now has planning permission, although opponents are still active; it will consist of 103 turbines with a capacity of 370 MW. The advantage is that Shetland has far more wind than mainland Scotland, as demonstrated by the existing small Burradale wind farm, which has a load factor averaging 52 per cent. The output of Viking Energy would be far in excess of Shetland's needs and the intention is to export it by cable to the mainland. Such a cable, however, would itself be an ambitious project, which still has to be financed and built. Nevertheless, it is estimated that the Shetland Charitable Trust could get an income from its investment in Viking Energy of over £20 million a year, dwarfing even the substantial income the islands get from North Sea oil.

There are ways in which the variability of supply can be mitigated. The hydro stations can complement wind power, as already mentioned, and the additional pumped storage schemes to be built by Scottish and Southern Energy will add considerably to this flexibility, making power available when required and when the output from wind is low. There is scope for some further hydroelectric development and pumped storage capacity if needed. The development of other forms of renewable energy may also help to mitigate the peaks and troughs of wind energy. A great deal of work is being done on the development of wave and tidal energy – especially in Orkney, where the tidal flow in the Pentland Firth is exceptionally strong. Tidal energy in particular, although intermittent, is much more predictable than wind energy. But, although these developments offer promising prospects and the resource is reckoned to be considerable, the technology is in its infancy and it will be many years yet before it is able to be developed on a commercial basis. The pioneering PURE project in Shetland, which uses two small wind turbines to make hydrogen, could also be a way of storing energy when it is not required and of making it available when it is. Manufacturing hydrogen from wind power is also of interest in providing energy in more remote areas, where costs of linking to the grid are high; Western Isles Council aims to follow the example of Iceland in using hydrogen to power its public transport.

But perhaps the best way to tackle the intermittency of wind power is by extending the grid, on the principle that the wind will always be blowing somewhere – in Shetland, Orkney or the Western Isles, if not on the Scottish mainland. It is with this in mind that the First Minister has had discussions in Norway about an undersea cable linking the two countries. The problem with this would probably be the cost but it is worth investigating further. Already, in addition to the grid interconnectors linking Scotland to England and Ireland, England is linked to the European continent from which it is a net importer. The Viking Energy project in Shetland will test the viability and cost of a long undersea cable.
The company's assessment indicates that the cost of the cable would not destroy the viability of their project. There has also been a recent report of a proposal to run a cable from Iceland to the Scottish mainland to deliver geothermal electricity, which is abundant and cheap, from Iceland's volcanoes. Apparently this was considered some years ago and rejected on cost grounds. It would require a cable of 1,000 kilometres, the longest sea cable in the world, and go through much deeper water than the cable from Shetland but Landsvirkjun, the Icelandic electricity producer, believes that it may now be viable. If it went ahead, it could provide a valuable means of balancing any irregularity in production from Scotland's renewable energy.

These proposals highlight the importance of access to the grid at reasonable cost. There has been much complaint about this, with renewable power sources in the north and west of Scotland being expected to pay much more for a connection than power sources nearer the market in the south of England. This is, of course, an economic matter – absence of long distribution lines involves less cost to the grid and means that power sources nearest to the market would expect to pay less than those furthest away, especially if there is a shortage of power in the south and a surplus in the north. But the cost of connection to the grid could make some of the best renewable sources of power uneconomic to develop. This therefore needs to be subjected to rigorous scrutiny to ensure that these costs are reasonable and can be justified.

The report of the Royal Society of Edinburgh's Inquiry argued for a mixture of electricity suppliers so as not to be too reliant on one source, bearing in mind the uncertainties that there are. A proposal from Scottish Power for an experimental carbon capture plant at the Longannet coal-fired station was rejected by the UK government on grounds of cost. But a proposed carbon capture and storage (CCS) plant by Shell and Scottish and Southern Energy linked to the Peterhead gas-fired power station together with a proposal from a consortium at Grangemouth are on the shortlist and await an early decision. These would inject carbon into oil fields in the North Sea. If the technology succeeds, it would provide a way of making emission-free energy from fossil fuels, which could be very important for the future. But, even if it is successful, the costs for the foreseeable future are likely to be high.

So the future for energy supplies in Scotland has plenty of promise. Scotland undoubtedly has exceptional renewable resources and the technology will continue to develop. There would seem to be two main dangers. The first is a public backlash if the view strengthens that ever-increasing wind farm development is damaging the landscape; but it is fair to say that it is much easier to decommission a wind farm after 25 years than to decommission a coal-fired station, let alone a nuclear plant. The second is that the wider market for Scotland's renewable power resources may not materialise as hoped, for instance because grid connections prove too expensive to finance or heavy investment in English shale gas removes the pressing need for additional electricity from renewable sources.

**The 'Fracking' Question**

Already the hydraulic fracturing revolution, known as 'fracking', has resulted in a huge drop in gas prices in the United States, where they are now less than half the price of gas in Europe.
It apparently holds out the prospect of the United States becoming once again self-sufficient in gas and oil. Indeed, the International Energy Agency has forecast that the United States will overtake Saudi Arabia in oil production by 2020, as a result of oil from fracking. If so, this will affect the international oil price, but gas markets are much more local, as gas is less easily traded. If similar developments take place in Europe, however, it could affect gas prices in Britain, making the generation of electricity by gas powered plants, already probably the cheapest form of electricity generation, very attractive. This would be a cleaner form of energy than coal, but it would still release about half the amount of carbon of a coal-fired plant. If it were developed in place of clean energy, therefore, it would not enable Britain to meet its carbon reduction targets. But it could, through bringing down energy prices, make it more difficult to produce electricity economically from Scotland's renewable sources. This could obstruct investment in new renewable capacity, although projects already built would presumably continue to generate electricity since the marginal cost of wind power is close to zero. At present, the scale of potential gas supplies from fracking in Britain, or indeed in Europe, is largely speculative. The exploration company Cuadrilla Resources has drilled three wells in Lancashire and is now drilling a fourth. But it has not, so far, been allowed to fracture and cannot flow test the wells to discover how productive the source is. In evidence to the House of Commons Select Committee on Energy and Climate Change, however, the company has said that it estimates the resource of the Bowland Basin at 200 trillion cubic feet of gas (tcf). Not all of that will be recoverable but, even if only a small fraction is recovered, it will still be substantial, given that, in the peak year of 1999, output of gas from the North Sea was 4 tcf. At present, the uncertainties surrounding shale gas are probably similar to those affecting North Sea oil when it was first discovered and it would be wrong therefore to put much weight on any of the estimates. But the resources in Lancashire seem promising as they come from shale more than a mile thick, which Cuadrilla Resources say is probably unique and is thicker than the shale being exploited in the United States. There is also a prospect of gas from fracking shale in Scotland and from parts of the North Sea where there is shale that was discovered in drilling for oil. Existing installations for oil in the sea could reduce the infrastructure cost of exploiting these reserves but at present that is highly speculative. The importance of all this for energy policy in an independent Scotland is simply to emphasise the uncertainties. The huge drop in gas prices in the United States, as a result of the fracking revolution, is making industries that had become uncompetitive in international markets competitive again. Might it do the same in Europe? At this stage, one cannot tell but informed opinion seems to think it unlikely for a variety of reasons: there is likely to be much stronger resistance to development in densely populated parts of Europe, such as Lancashire, than in the sparsely populated areas of the United States, where much of the development is taking place; and, under European law, minerals underground are the property of the state, whereas in the United States they are the property of the landowner, who stands to benefit directly. 
This acts as a driver for development. In the UK, the most that seems to be suggested at present is that shale gas might be enough to stop dependence on imported gas from increasing further. For Scotland, this means that, if the UK is to meet its climate change targets, the Scottish renewable energy resources will still be needed, unless alternative green energy can be found. As argued above, the subsidy on wind power is decreasing and the need for it may be eliminated for onshore wind by the end of the decade. But subsidy is likely still to be necessary for offshore wind and will certainly be needed for wave and tidal power, which are still only in the very early stages of development. If Scotland becomes independent, would consumers in the rest of the UK be willing to continue to pay the necessary subsidy for Scottish green energy? Might they find alternative green energy sources or would their lobbying make UK politicians abandon their carbon targets and simply go for fracking to get cheap gas?

#### 7

#### **North Sea Oil – the Mishandling of an Opportunity**

Because economic issues have featured so prominently in the case for independence as set out by the SNP, the revenues from North Sea oil and gas have become a major part of the debate. As was shown in Chapter 1, Scotland, like the UK as a whole, has had a substantial budget deficit since the banking crisis of 2008 and the ensuing recession. Although the First Minister and others have argued that Scotland's deficit in the last few years has been proportionately less than that of the UK, this has certainly not always been the case and it depends on the assumption that some 90 per cent or so of the oil and gas revenues would accrue to Scotland as an independent country. This is based on the median line as estimated by Alex Kemp and Linda Stephen of Aberdeen University, both highly respected researchers on oil and gas. But, as I pointed out in Chapter 1, there are a fair number of assumptions involved in the First Minister's calculation, which cannot be taken for granted.

The reason this is so important is, as I also explained in Chapter 1, that Scotland has, and has had for many years, a higher level of public expenditure per head than the UK average. An assumed geographical share of the oil and gas revenues in the last few years would approximately compensate for this, though not to the extent of eliminating all of the present deficit. So the issue is whether these revenues can be relied on to continue at this level, or at least until, by some means, the economy can be made more productive, so that non-oil tax revenues are increased.

In 1974, when I was Chief Economic Adviser at the Scottish Office, I wrote a paper for Ministers which was obtained a few years ago under freedom of information. This paper, which sparked some controversy, was written as confidential briefing for Ministers at the time of the 1974 election. I argued that the then outgoing government in their public statements had underestimated the scale of the developments in the North Sea and that the revenues were likely, by 1980, to be very large. I went on to argue that the scale of these tax revenues made it no longer tenable to say that an independent Scotland could not manage financially. Scotland's economy, at that time, was in a worse condition than it is now, with many of its industries in difficulty.
The paper was intended as something of a wake-up call for both Ministers and their officials, some of whom had not realised the importance of what was happening in the North Sea, and urged that stronger action, through regional policy and by other means, was needed to help the Scottish economy. Everything I said in the paper turned out to be right about the scale of the development and of the revenues. But the paper has become rather notorious and it has been claimed it was suppressed. That was not the case. Confidential briefing for Ministers is never published and, had I published it on my own initiative, I would have been breaking every rule and would have been in serious trouble. I did not think that what I was saying was so earth-shattering. The paper was based on information mostly from public sources and at least one newspaper was making the same arguments.

**Graph 1** _UK Crude Oil Production_

**Graph 2** _UK Gas Production_

But that was 1974, and the situation was different then. Gas was already flowing to England from the southern basin of the North Sea and the network was being extended to Scotland. But oil production from the northern North Sea did not start until 1975 and only became substantial after 1980 (see Graphs 1 and 2). The big hikes in international oil prices, first in the mid-1970s and again in 1979, meant that, when oil production grew in the 1980s, the revenues generated for government were very large indeed, especially in the early part of the decade before the sharp fall in international oil prices after 1984. Oil prices have risen again from their low point in the 1990s and, in recent years, have been very high (Graph 3). Gas prices are much less volatile – although gas is being increasingly traded internationally, the greater difficulty in transporting gas means that, whereas the oil market is international, the gas price depends much more on local markets.

**Graph 3** _Crude Oil Prices_

**Production and Outlook for Offshore Oil and Gas**

What matters now is the present position and how it is likely to develop. Graphs 1 and 2 show that North Sea oil production peaked at 137 million tonnes in 1999 and offshore gas production at 1,260 terawatt hours (TWh) in 2000. By 2011, crude oil production was down to 52 million tonnes and offshore gas to 326 TWh – both less than half peak output. From 1981 to 2004, exports of crude oil from the UK had exceeded imports but, by 2011, exports were only equal to slightly over half the amount imported. That does not mean that offshore oil and gas are unimportant – their life has been extended beyond the original estimates, as more has been discovered, and it is now likely to last for at least another 30 or 40 years. How much will be produced and how long it will last depends on what new discoveries are made and on improvements in technology that enable a higher proportion of the oil and gas to be extracted from existing wells. Oil & Gas UK announced in April 2013 that production is expected to increase to some 2 million barrels of oil equivalent a day in 2017 compared with about 1.5 million a day in 2013. This follows a big increase in investment by the oil companies, notably by BP in its Clair Ridge project, a major field that should be in production till 2050. Nevertheless, over the long term, the decline in output of both oil and gas is expected to continue at a gradual rate, despite these developments.
Alex Kemp, the official historian of North Sea oil and gas, said at a recent hearing of the House of Commons Select Committee on Energy and Climate Change that this decline might be at a slower rate than in the last few years and oil output might stabilise for a period but he did not expect the decline to be reversed. The tax revenues, however, depend not just on the volume of oil and gas produced but also on international oil prices and on the profitability of the companies producing the oil. Tax revenue is obtained from both oil and gas but is much greater from the production of oil because the price depends on international rather than local markets. The most obvious feature of oil prices, however, is their volatility. After the high levels of the 1970s and early 1980s, the oil price fell sharply and remained low in the early 1990s. Today, the price is high again, although not as high as a few years ago. Gas production rose steadily to its peak in 2000 and oil production fell after 1985 but rose again thereafter to its peak output in 1999.

The consequence of these price movements coupled with the trends in output was that tax revenues which had been very high in the early 1980s, reaching £12 billion in 1984/85, fell to around £1 billion in the early 1990s before rising again to a peak of £12.9 billion in 2008–09 (Graph 4). This latter peak, though high, is actually not nearly as high in real terms (allowing for inflation) as it was in 1984/85. If the revenues are recalculated using constant 2008/09 prices, they would have been £28 billion in 1984/85 (Graph 5).

**Graph 4** _North Sea Tax Revenue at Current Prices_

**Graph 5** _North Sea Tax Revenue at Constant Prices_

What is likely to be the level of tax revenue in future? The independent Office for Budget Responsibility (OBR) has estimated a sharp fall from £11.3 billion for the UK in 2011–12 to £7.4 billion in 2012–13 and to £4.5 billion in 2017–18. This is based on forecasts of output, prices and profits, all of which are inherently extremely hard to predict. It is probably as good a forecast as one can get but inevitably it is subject to a wide margin of error. Many people expect oil and gas prices to remain high or even go up higher. This may well turn out to be what happens if China and India continue to develop at the speed of recent years. Because both are such huge countries, demand from them is likely to have a major impact on the market, pushing up prices of many raw materials, including oil. Uncertainty in the Middle East, the world's largest exporting region, is also something that cannot be discounted. On the other hand, the fracking revolution referred to in the last chapter has already greatly reduced gas prices in the United States and will affect the oil price too, especially if, as predicted, it makes the United States virtually self-sufficient in gas and oil. We have yet to see what impact fracking may have on European markets. So all we can really say is that future prices are uncertain. My own view is that they are more likely to go up than down over the longer term but the only thing one can be sure about is that they will be volatile and any forecast is likely to be wide of the mark.

The revenues from oil and gas also depend on the cost of producing it. As the most productive sources are depleted, it is to be expected that costs will go up. More marginal resources will be brought into production.
Indeed, the large Clair field, west of Shetland, which is in water depths of over 500 feet and is now being developed, was earlier thought to be uneconomic. And, if advances in technology enable a higher yield to be obtained from existing wells, that too is likely to be at a cost. So, even if the oil price remains high or goes up further, it does not follow that profits and hence tax revenues would remain as high.

At the House of Commons hearing already referred to, Fergus Ewing, the Minister in the Scottish government responsible for energy, was asked about decommissioning costs when the rigs and platforms in the North Sea have to be removed. Professor Kemp had already said that these costs would have to be met by the oil companies when decommissioning started and that they would be a charge against their profits, thereby resulting in lower tax revenue. Mr Ewing, on the other hand, seemed to be arguing that, if Scotland became independent, the rest of the UK should meet part of the decommissioning costs because, for part of their lives, many of these fields had been generating profits for the UK. This would presumably mean that the Scottish government would expect the government of the remainder of the UK to pay some compensation for the reduced tax revenues accruing to the Scottish government after decommissioning starts. This strikes me as a very hard one to sell and one that would be bound to be resisted – especially as both the First Minister and Mr Ewing repeatedly assert that Scotland, with its share of the North Sea, would be the sixth wealthiest country in the world (although, as we have seen in Chapter 1, this is based on GDP per head, which is not a good measure of wealth as it includes oil company profits received by shareholders, many of whom are not resident in Scotland).

**The Case for an Oil Fund**

The First Minister has argued for the setting up of an oil fund on the Norwegian model and has suggested that, if £1 billion was paid into this fund each year, it could be worth £30 billion in 20 years' time. This would, of course, depend on whether it was possible to set aside £1 billion a year, when it would start and what rate of return could be expected. Ministers have tried to clarify the aim, saying that payments would be made into the oil fund as soon as fiscal conditions allow.

This is a laudable aim – one that I strongly favour and argued for in the 1970s. In a paper that I wrote then, I made two main points: first, that there was a danger of a sharp rise in the UK exchange rate in the 1980s, as oil production got under way and replaced imports, which could damage the rest of the economy; and second, that part of the tax revenue should be paid into a special fund. As Alex Kemp explains in his _Official History of North Sea Oil and Gas_, this was considered seriously in the 1970s. What Ministers in the then Labour government had in mind, however, was not so much a fund that would accumulate, as the Norwegian fund has done, but a fund to finance capital expenditure, such as key infrastructure projects or the modernisation of industry. This was to include an emphasis on regional development in Scotland, Wales, Northern Ireland and the Development Areas of England. In the end, however, a majority of the Cabinet were against it. It should be remembered that, when this was considered, it was February 1978.
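As an aside on the First Minister's figures quoted above, the link between £1 billion a year and a £30 billion fund after 20 years is the standard future-value calculation for a stream of equal annual payments, and it is quite sensitive to the assumed rate of return. A minimal sketch, assuming simply £1 billion paid in at the end of each year:

```python
# Future value of GBP 1bn paid into a fund at the end of each year for
# 20 years, at various annual rates of return (illustrative only).
def fund_value(payment_bn: float, rate: float, years: int) -> float:
    if rate == 0:
        return payment_bn * years
    return payment_bn * ((1 + rate) ** years - 1) / rate

for rate in (0.0, 0.02, 0.04, 0.06):
    print(f"Return {rate:.0%}: about GBP {fund_value(1.0, rate, 20):.0f}bn after 20 years")
# With no return at all the fund is simply GBP 20bn; a return of about
# 4% a year is what produces the GBP 30bn figure mentioned in the text.
```

Whether anything could be set aside at all is, of course, the harder question, as the history of the 1970s proposal and the discussion that follows make clear.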
The state of the UK economy was extremely difficult, with high inflation and balance of payments difficulties, and North Sea revenues had not yet begun to flow in any substantial quantity. Although Alex Kemp reports that there was further discussion in the Treasury in the 1980s, the issue does not appear to have been considered by the Conservative government collectively. Instead, the effect of the oil and gas production on the balance of payments, coupled with the very tight monetary policy being pursued at the time, was to push the exchange rate up dramatically. The pound, having been trading at $1.60 in the late 1970s, rose to $2.40 in the early 1980s, with catastrophic consequences for much of manufacturing industry. In effect, the tax revenue from the North Sea ended up paying for the resulting unemployment. The decade of the 1980s was the time when an oil fund should have been set up. If payments had been made into it then, continued in subsequent years and allowed to accumulate like the Norwegian fund, it would, by now, have been worth a huge amount of money. The Norwegian fund was established in 1990 and receives the state's total cash flow from petroleum activities. It is the largest wealth fund in Europe, worth now some £330 billion, dwarfing the country's national debt and amounting to 70 per cent more than the whole output of the Norwegian economy in one year. The fund is required to invest this money abroad rather than in Norway, to counteract the effect it would otherwise have on the balance of payments and exchange rate. Much of the income from these investments also accrues to the fund and is reinvested, which accounts for its remarkable growth. The UK has, of course, a much larger economy but a fund of this kind would have ensured that there would have been no doubt about the UK's credit in the present economic crisis and would have saved us much of the misery from the austerity we have been suffering. That such a fund was not set up, when it could have been, was very short-sighted and, in my view, a tragedy. It amounts to a serious mishandling of the greatest economic opportunity the UK has had in the last two decades of the 20th century. If it becomes independent, would Scotland be right, even now, to try to set up an oil fund? What I take to be the aim of SNP Ministers is something more along the lines of the Norwegian fund than the sort of fund the British government was considering in the 1970s. Professor Kemp, in his House of Commons committee evidence, said, yes, it would be right, because using the oil revenues to pay for current spending is running down a capital asset, albeit a naturally endowed one, to pay for what should be funded by ordinary tax revenues. The Nobel Prize-winning economist Professor Joe Stiglitz has said the same. The asset will disappear with nothing to replace it. I agree. It is just that it would be very difficult to do, when Scotland has as big a deficit as it has at the moment. If Scotland became independent and the oil revenues were immediately diverted to a special fund, the rest of the budget would be heavily in deficit. That would mean that there would have to be big tax increases or public expenditure cuts on top of what the coalition government has already imposed. The Scottish economy would be pushed into an even worse recession and the level of unemployment would rise even further. I do not think that is practical. 
So I agree with the Scottish government's declared policy of putting the oil revenues into a special fund as soon as fiscal conditions allow. That should not be taken as a licence to put it off indefinitely, but as the economy's growth looks likely to be feeble for some years, it may be a considerable time before reducing the deficit would enable significant amounts to be set aside.

**The Shetland and Orkney Oil Funds**

The only parts of the UK with the foresight to set up oil funds from which they could benefit were the Northern Isles. Shetland pioneered this arrangement. Under the leadership of its then Chief Executive, Ian Clark, the council set up a fund into which oil companies were required to make a 'disturbance' payment for oil passing through the terminal at Sullom Voe. It required the oil companies to share a properly planned common user terminal, which some of them had been reluctant to do, and because so much – probably nearly half – of the oil was in waters for which Shetland was the nearest landfall, the amount of oil going through the Sullom Voe terminal was very large indeed. The proceeds, based on the throughput of oil, have been paid into the Shetland Charitable Trust, which now has assets in excess of £210 million and income able to finance expenditure of approximately £11 million a year. Orkney Islands Council followed the example of Shetland with a smaller fund based on the oil flowing through its terminal on the island of Flotta.

Shetland has used its money for a variety of charitable purposes of benefit to the local community. There are first-class leisure centres in all the main population settlements and there is exceptionally good care for the elderly, including specially built homes and visiting day carers, which reduce the burden that would otherwise fall on the NHS. There is investment in property to let and in a district heating scheme, both of which yield a return. The Trust has also part-funded the excellent Shetland Museum in Lerwick. But perhaps most significant of all will be the investment, along with Scottish and Southern Energy, in Viking Energy's proposals for the large wind farm referred to in the last chapter. If this goes ahead, it could give further major financial benefit to the islands.

With so much of the oil in Shetland waters, Tavish Scott, the MSP for Shetland, has pointed out that his constituency is in a very strong position in the Scottish independence debate. Together with Liam McArthur, the MSP for Orkney, he has submitted a paper to the UK government emphasising the distinctive position of the Northern Isles. The islands were incorporated into Scotland by an Act of the Scottish parliament in 1472 but they have their own separate identity, of which they are very aware, stemming from their Norse heritage. There are some in London who have suggested that, if Scotland becomes independent, Shetland might prefer to stay as part of the United Kingdom. I have never thought this likely but, as Tavish Scott says, the Shetlanders are in a strong bargaining position if they care to use it. The centralising tendencies of Scottish governments since devolution are not welcomed in the islands and Shetland is conscious of the advantages its neighbours, the Faroe Islands, have as a dependency of Denmark rather than an integral part of the Danish state. In this respect, they are analogous to the Isle of Man or the Channel Islands, which are British dependencies.
The Faroe Islands are not part of the European Union, although they have unrestricted trade access to it. This has enabled them to retain control of their own fishing policy, an industry on which the islands heavily depend, and which is also of great importance to Shetland. Brian Wilson, the former MP and Energy Minister, writing in _The Scotsman_, is clearly aware of some of the feelings in Shetland and, with his long association with the Western Isles, has suggested that all three island groups have something to gain from a change to their status. At the time of writing, it would not surprise me if we hear a good deal more of this. As Scotland moves towards the referendum, it would seem right, both for those in favour of independence and for those against, to give some thought to this issue, if indeed there proves to be a demand for change. With so much of the oil in the waters off these islands and their great potential also for renewable energy, it would be a huge mistake not to take the matter seriously, whether Scotland becomes independent or not.

**Conclusion**

The main point that has emerged from the discussion in this chapter is the great uncertainty surrounding so many of the issues relating to North Sea oil and gas. We know now that the resource is likely to last much longer than was originally thought and to remain important for many years yet. But we also know that output is past its peak and declining gradually. The price of oil in future international markets is very uncertain. There was an expectation that it might continue to rise as a result mainly of rising demand, especially from the Far East and other developing countries, and that may still prove to be right but the fracking revolution could make that less certain. All that we can be really sure about is that the future is impossible to predict but that prices are likely to be very volatile, just as they have been in the past. This volatility will affect the taxation revenue that an independent Scotland could expect to receive. Will a rising oil price compensate for a reducing output? This uncertainty could make it very difficult for those who would have to manage the government's budget.

The proposal to set up an oil fund is to be commended but it would be very difficult at present to put any revenue aside without either raising taxes or cutting public expenditure further than it is being cut already. It is only realistic to expect that there will be a lot of pressure to use the money for other pressing needs.

As far as the Northern Isles – the only part of Britain that has been wise enough to set up such a fund – are concerned, it would be a great mistake to ignore any aspirations they may have to resist the tendency to increased centralisation or for some change to their status. The islands are of critical importance to Scotland, whether it becomes independent or remains part of the UK, both for their key position in relation to offshore oil and also their huge potential for renewable energy.

#### 8

#### **Welfare and Inequality**

Government expenditure on social protection is the largest single programme in Scottish public expenditure, as it is also in the UK. In Scotland it cost over £21 billion in 2011/12, 38.4 per cent of all identifiable public expenditure, compared with almost £19 billion for health and education combined (see Table 1). It includes the State Pension and benefit expenditure for the disabled, the unemployed and those with low incomes, as well as Housing Benefit and care for the elderly.
Over 70 per cent of this expenditure, some £14 billion, is the responsibility of the UK Department for Work and Pensions (DWP) and therefore not devolved; and rates of State Pension and benefits are therefore the same throughout the UK. Of the remainder, less than £1 billion is paid directly by the Scottish government and some £5 billion by Scottish local authorities. Given its huge cost, it is obviously important to consider where the main responsibility for welfare should lie – whether with the UK Parliament, as it is now, or with the Scottish government and Parliament. Although the Scottish government's role in social protection is at present limited, much of the expenditure for which Scottish Ministers are responsible is closely related. Health, for example, is a Scottish government responsibility, as are education and skill training and housing. The Scottish government pays for Free Personal Care but Attendance Allowance is paid by the DWP. Scottish Ministers in the present government have said they would like complete responsibility for welfare, which they would have, of course, with independence or Devo-Max but not with most of the other proposals put forward for devolution. And the findings from the 2012 Scottish Social Attitudes Survey showed that nearly two thirds (64 per cent) of Scots think that benefit levels should be the responsibility of the Scottish Parliament.

**Table 1** _Scottish Identifiable Public Expenditure 2011–12_

| | **£ million** | **percentage** |
|---|---|---|
| General Public Services | 1,093 | 2.0 |
| Public order and safety | 2,416 | 4.4 |
| Economic Affairs | 4,961 | 8.9 |
| Environmental Protection | 1,056 | 1.9 |
| Housing and Community | 1,719 | 3.1 |
| Health | 10,989 | 19.8 |
| Recreation, culture and religion | 1,224 | 2.2 |
| Education and training | 7,702 | 13.9 |
| Social Protection | 21,323 | 38.4 |
| Accounting adjustments | 2,999 | 5.4 |
| **Total** | **55,481** | **100** |

_Source:_ _Government Expenditure and Revenue Scotland 2011–2012_, March 2013

Expenditure per head in Scotland is above the UK average by some 7 per cent and is expected, for demographic reasons, to grow more rapidly, as Professor David Bell has shown in a paper for the David Hume Institute. This is mainly because the proportion of the Scottish population over age 65 is higher than in the UK as a whole and is increasing faster but also because there is a higher proportion drawing benefits for illness or disability, which tend to increase with age. Thanks to advances in medical care, many more people in all developed countries, including Scotland, are living longer and this involves increasing cost. But the ratio of working population to dependants is the key issue. Although recent figures show that the years of net emigration from Scotland seem to be behind us and the population is growing, it is doing so more slowly than in the UK. This is partly because there has been less immigration to Scotland than to other parts of the UK and also because immigrants tend to have larger families.

In view of its scale, how money is spent on welfare matters a great deal and there is an obvious need to make it as cost-effective as possible, especially at a time when public expenditure throughout the UK is being cut. This applies to Scotland at least as much as to the UK.
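As a quick check on the figures quoted at the start of this chapter, the sketch below recomputes the social protection share and the combined health and education total from Table 1; all the numbers are taken directly from the table.

```python
# Checking the headline welfare figures against Table 1
# (identifiable public expenditure in Scotland, 2011-12, GBP million).
total_m = 55_481
social_protection_m = 21_323
health_m = 10_989
education_m = 7_702

print(f"Social protection share: {social_protection_m / total_m:.1%}")
print(f"Health plus education:   GBP {health_m + education_m:,}m")
# Social protection comes out at 38.4% of the total, and health and
# education together at roughly GBP 18.7bn -- the "almost GBP 19
# billion" mentioned in the text.
```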
Several interesting points emerge from the breakdown of the expenditure in Tables 2.1 and 2.2, some of which may be surprising to those who have regarded welfare benefits as something that could and should be cut. The State Pension accounts for 45 per cent of the total DWP expenditure and, if other items that are age-related are added, such as Pension Credit, Attendance Allowance and Disability Living Allowance (DLA) for pensioners, it is more than half. DLA itself, including not only pensioners but children and those of working age, accounts for almost 10 per cent, and Incapacity Benefit together with Income Support for those drawing this benefit a further 7 per cent. Housing Benefit accounts for 12 per cent. These are the largest items. The cost of unemployment, if Jobseeker's Allowance and Employment and Support Allowance are taken together, is only around 6 per cent. Among the items for which the Scottish government is responsible (see Table 2.2), the cost of free prescriptions is very small, when compared with these other items, and the cost of concessionary travel is not very large either but the cost of Free Personal Care and Free Nursing Care is significant and is expected to increase considerably as the population ages.

**Table 2.1** _Expenditure by UK Department for Work and Pensions on Benefits in Scotland 2011–12_

| | **£ million** | **percentage** |
|---|---|---|
| Attendance Allowance | 481 | 3.4 |
| Bereavement Benefit/Widow's Benefit | 59 | 0.4 |
| Carer's Allowance | 153 | 1.1 |
| Council Tax Benefit | 384 | 2.8 |
| Disability Living Allowance | 1,372 | 9.8 |
| _of which children_ | _109_ | |
| _of which working age_ | _774_ | |
| _of which pensioners_ | _488_ | |
| Employment and Support Allowance | 381 | 2.7 |
| Housing Benefit | 1,728 | 12.3 |
| Incapacity Benefit | 564 | 4.0 |
| Income Support | 670 | 4.8 |
| _of which on Incapacity Benefit_ | _418_ | |
| _of which lone parents_ | _190_ | |
| _of which carers_ | _34_ | |
| _of which others_ | _28_ | |
| Industrial Injuries Benefits | 93 | 0.7 |
| Jobseeker's Allowance | 461 | 3.2 |
| Maternity Allowance | 24 | 0.2 |
| Over 75 TV licences | 49 | 0.4 |
| Pension Credit | 752 | 5.3 |
| Severe Disablement Allowance | 97 | 0.7 |
| _of which working age_ | _75_ | |
| _of which pensioners_ | _21_ | |
| Statutory Maternity Pay | 197 | 1.4 |
| Winter Fuel Payment | 188 | 1.3 |
| State Pension | 6,325 | 45.2 |
| **Total** | **13,978** | **100** |

_Source: Department for Work and Pensions, Expenditure tables, as revised April 2013_

**Table 2.2** _Scottish Government Expenditure 2011–2012_

| | **£ million** |
|---|---|
| Concessionary travel | 249 |
| Free prescriptions | 57 |
| Free Personal Care | 427 |
| Free Nursing Care | 23 |

_Source: David Bell's paper 'Social Protection in Scotland' given to the David Hume Institute_

**The UK Government's Reforms**

The UK government's reforms to welfare are driven not just by the need to control expenditure; there are also serious faults in the present system. There are a bewildering number of different benefits, as more have been added over the years. This can be confusing to claimants and involves having to fill in numerous claim forms, which some deserving people, especially those with serious disabilities, find difficult. According to the DWP, this can result in some people getting less benefit than they are entitled to and gives scope for fraud. But that is not the only problem – those taking low-paid jobs can lose almost as much in benefit, when they start employment, as they gain from earnings.
This poverty trap has been a problem for many years; it can discourage those on benefit from taking jobs, if the pay they will receive is not significantly more than the benefit they will lose. Everyone has heard anecdotal evidence, whether reliable or not, suggesting that there are some people on benefit who could and should be working. This may well be so, especially if the only job available to an unemployed person involves work which they regard as unpleasant and pays so little that they are little, if any, better off. I have never been able to understand why some politicians have so strongly argued the case for incentives for the better off, such as businessmen and bankers, while at the same time ignoring the need for incentives for poorer people.

For all these reasons, the need for reform was widely accepted. Given the scale of the task, however, with so many existing types of benefit and tax credits, it is a formidable undertaking, and tackling it is bound to throw up unexpected problems. Nevertheless, this is what Mr Duncan Smith's Universal Credit is intended to achieve. Reform would be needed just as much in an independent Scotland. But for such a reform to be acceptable, the gain must be clearly seen to outweigh the loss from the inevitable upheaval; and the present time is, in many respects, the worst time to be attempting it. It would be much easier when the economy is buoyant than at a time when few jobs are available and the Treasury is determined to achieve savings, even if this causes much hardship.

According to the government's updated Impact Assessment of December 2012, the replacement of the many existing benefits by Universal Credit, which will be phased in from the autumn of 2013 until completion in 2017, should actually increase payments to households by £0.3 billion. Most of those gaining, it is claimed, will be among the poorest groups in society. But some 2.8 million households will receive less benefit. So, even if there are more gainers than losers and the gainers are those most in need, the effect on many people will be very painful. On top of this, the Welfare Benefits Up-rating Act of 2013, which will limit the rise in benefits, most tax credits and Universal Credit to only 1 per cent a year, while inflation is running well above 2 per cent, will result in a much larger number falling into poverty. According to a report for the Scottish Council for Voluntary Organisations, hardship will be increased for many people of working age who are already struggling. In addition to this, there will be a benefit cap of £500 a week for a couple or lone parent and £350 a week for single people. This will apply to all benefits except Disability Living Allowance, War Pensions and Working Tax Credit.

There is a real danger that what is an ambitious and necessary reform will be seen just as a savage attempt to save money. Although Universal Credit only starts to be implemented in the current year, the indications are not good. Disability Living Allowance, which will not be part of Universal Credit, is not means-tested, but many people have simply had their benefit stopped and been made to reapply, no matter how serious or permanent their disability. This is causing great distress, even if benefit is eventually restored, as was shown when a blind man with heart trouble and diabetes, whose benefit had been stopped, gave evidence to the Scottish Parliament.
Both the newspapers and television have carried similar stories of cases where the stopping and subsequent reassessment of DLA has caused immense distress. In some of the cases one hears about, the person is so obviously unable to work that one wonders why the benefit was ever stopped. DLA is to be replaced by a Personal Independence Payment (PIP) which, like DLA, is not to be means-tested, but the budget for it is being cut by 20 per cent and all claimants will have to go through a reassessment process, which will be repeated at intervals. This process itself involves considerable cost to the taxpayer, and one may question whether it is necessary where a person has a permanent disability. More than 60 per cent of those on Incapacity Benefit, which was subsumed into Employment and Support Allowance (ESA) in 2008, have also had their payments stopped. Although quite a high proportion of them also get their benefit restored on appeal, as Martin Sime, chief executive of the Scottish Council for Voluntary Organisations, has said, the effect on many poorer people is likely to push them to despair.

There is also concern about the proposal for cutting Housing Benefit if the claimant is assessed as not needing as much accommodation as their current dwelling provides – the so-called 'bedroom tax'. It is understandable that the state should not pay for more accommodation than is needed. But, unless the assessment is done with care, it can give distressing results. Cases have arisen where a person is told that they have one more bedroom than needed, although a carer uses that bedroom on frequent and necessary visits. And it makes no sense to cut someone's benefit and tell them they have to move to smaller accommodation if such accommodation is not available in the neighbourhood where carers and others who look after them live.

Inevitably, the cost to the country of welfare benefits goes up when the economy goes into recession. People lose their jobs and swell the ranks of the unemployed, just as tax revenue falls. Many of those who do manage to get work find that they have to take part-time work or jobs that are less well paid than they had before. Even if not unemployed, they may be drawing benefit in the form of Income Support. There seems to be a widely held view that public expenditure on welfare is excessive, and it is certainly large. But, according to David Bell's analysis, if expenditure on health is included, the UK comes approximately in the middle of the range for European countries – not only the Scandinavian countries but France, Germany, Italy, the Netherlands and Belgium all spend more on social protection as a share of their GDP. The best way of reducing the country's bill for benefits would be to get out of recession and back to full employment, though that would do little to reduce the cost of welfare benefits for older people, which account for around half of the total cost.

One of the welcome features of the reform, however, is to reduce the poverty trap by revising the tapering of benefits when a person taking a job starts to earn an income. The present taper can result in a person losing the equivalent of more than 90 per cent of what they earn. In future, this 90 per cent taper will be reduced to 65 per cent. This is certainly an improvement, but it still means that a person taking a poorly paid job could lose more than half of what they earn.
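The difference the taper makes is easy to see with a little arithmetic. The short sketch below is purely illustrative: the small function and the £100 of extra weekly earnings are hypothetical choices of my own, and only the 90 and 65 per cent withdrawal rates come from the discussion above.

```python
# Illustrative sketch of how a benefit taper erodes the gain from taking work.
# The £100 of extra weekly earnings is hypothetical; only the 90% (old system)
# and 65% (Universal Credit) withdrawal rates come from the text above.

def amount_kept(extra_earnings, taper_rate):
    """Return what is left of extra earnings once benefit is withdrawn at the taper rate."""
    benefit_withdrawn = extra_earnings * taper_rate
    return extra_earnings - benefit_withdrawn

extra_earnings = 100.0  # hypothetical extra weekly earnings (£) from taking a job

for rate in (0.90, 0.65):
    kept = amount_kept(extra_earnings, rate)
    print(f"Taper of {rate:.0%}: of £{extra_earnings:.0f} earned, £{kept:.0f} is kept")

# Taper of 90%: of £100 earned, £10 is kept
# Taper of 65%: of £100 earned, £35 is kept
```

On these figures the reform more than triples the gain from working, though the claimant still keeps only a little over a third of any extra earnings.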
Reducing the taper is very expensive, as it involves paying out benefit where previously it would have stopped, and it must be one of the reasons the cost of the welfare budget remains so high. But, unfortunately, at a time when the economy is in recession and job opportunities are extremely scarce, it is fanciful to suppose that this will enable many of those who are unemployed to get a job, even if they try their best to find one.

**Poverty**

The Chancellor unwisely castigated many of those drawing benefit as shirkers and contrasted them with the strivers who found a job and went out to work. The truth is, however, that there are 6.1 million people in work who draw benefit because their low incomes still leave them in poverty, and they outnumber the 5.1 million who are not working at all. Since the late 1990s, according to research done for the Joseph Rowntree Foundation, there has been a welcome reduction in the numbers classified as living in poverty. This was the result of the high level of employment before the financial crisis and of measures, such as tax credits and increases in benefit, introduced by the last government. The Institute for Fiscal Studies calculated that, for lone parents in work, there was an increase in income of 12 per cent, while the proportion of households receiving out-of-work benefits fell by a third. It had been the government's stated aim to end child poverty by 2020. Obviously the recession put an end to that, but the prospect now is for the numbers in poverty, both adults and children, to increase.

Over the last thirty years, inequality in our society has greatly increased, as it has also in the United States and many other countries. The better-off, especially but not only in the financial sector, have been able to increase their incomes enormously, while those on the lowest earnings have gained much less and, in some cases, hardly at all. This is a consequence of globalisation, technical change and, in the financial sector, of deregulation. As poorer countries have developed, especially in the Far East, cheap goods have flowed into Western markets, keeping prices down and forcing many manufacturing firms in Europe and North America either to give up production or to reduce costs by restraining the growth of wages so that they can compete. This phenomenon is also described as the 'disappearing middle' – the loss of skilled manual and lower management jobs through computerisation, more advanced capital equipment and competition from abroad.

This runs counter to the kind of Scotland that many people would like to see. With the dominance of the SNP and Labour, it is often claimed that Scotland is a more social democratic country than England, where the Conservative Party is still strong. Some commentators writing in the Scottish press have argued for much greater equality of income, as is typical of Scandinavia, or at least for a more caring society with much greater attention paid to the deprived areas in the cities and to those whose prospects of employment and a decent income are poor. This is a type of society that I strongly favour myself, but we do not know what the Scottish electorate as a whole would vote for. Doing more for the less well-off would involve a greater tax burden for the better-off, and the recent Scottish Social Attitudes Survey, despite finding that a majority favoured greater devolution of welfare benefits, did not suggest that there was an appetite among the general public for this type of redistribution.
But Scandinavian levels of welfare support would inevitably require Scandinavian levels of taxation. It would also require us to be as competitive as the Scandinavians are in global markets.

**Where Should Responsibility for Welfare Policy Lie?**

All of this means that where the responsibility for welfare lies is a major issue in the constitutional debate and is likely to become an increasing focus of attention. Not only is it a very important area of policy and a major part of public expenditure, but it might also be expected that the different balance of politics in Scotland would be reflected in policy choices. It is necessary, therefore, to consider what scope for policy choice could be available to a Scottish government both with independence and with greater devolution.

With independence, the Scottish government would have complete control of welfare policy, along with all its other responsibilities. But even that does not mean it could act without considering what was happening in the rest of the UK. Scotland is so integrated with the other parts of the UK economy that movement of population across the border would always be very easy. This would mean that, if Scotland were significantly more heavily taxed, some businesses and people might move to England; the opposite tendency would be apparent if Scottish taxes were lower than in the remainder of the UK. Differences in welfare provision, if substantial, might also encourage benefit migration. Some people have apparently told those carrying out surveys that they would vote for independence if it made them £500 better off. One should take all this with a pinch of salt. There are some differences in tax rates and welfare provision between Swiss cantons that apparently do not have a huge effect. But the degree of Scotland's integration with the rest of the UK, which would remain even with independence, would certainly have some restraining effect on the scope for independent policy.

Under various forms of devolution, the issue becomes more complicated. Those who would like to see the whole of welfare expenditure devolved need to consider whether it would be acceptable, not only in Scotland but in the other parts of the UK, for different systems to be applied on each side of the border, while still remaining within one state. Would it be acceptable in Scotland and the rest of the UK if the State Pension and the various benefits for people who are unemployed or have low incomes were at different rates? The response to surveys does not favour that, and there would be many who would be strongly against it. Since these are financed at least in part by national insurance, which is at the same rate in Scotland as elsewhere in the UK, people would argue that benefits should also be the same.

Although the main responsibility for welfare rests with the UK government at present, the Scottish government is not powerless in this area. Free prescriptions and care for the elderly provide significant welfare benefits. So too does the provision of social housing, action to improve Scotland's areas of acute deprivation and the provision of skill training to help people into jobs. The UK government's welfare reform will result in responsibility for Council Tax Benefit being devolved. The EU's Common Agricultural Policy also provides social benefits, albeit in a way that is not effectively targeted at the poorer members of the farming community.
There are other parts of the welfare programme that could be devolved: Housing Benefit is to be part of Universal Credit but would seem an obvious candidate, as the Scottish government is responsible for social housing; other possibilities are Maternity Allowance, TV licences for those over 75 and Industrial Injuries Benefits, but these are all small. Attendance Allowance, Widow's Benefit and Carer's Allowance might make sense in view of the Scottish government's existing responsibilities in health. Perhaps Personal Independence Payment (replacing the Disability Living Allowance) and Severe Disablement Allowance should also be considered, although differences here between England and Scotland might be more difficult for some people to accept and, if extreme, could encourage benefit migration. These items, apart from Widow's Benefit, are all non-contributory and not means-tested. They cost £3.5 billion in 2011–12, or 25 per cent of the present expenditure by the Department for Work and Pensions, and, if devolved, would be in addition to the relatively small amount – less than £1 billion – for which the Scottish government already has direct responsibility and the £5 billion paid by Scottish local authorities. But, if these benefits were devolved, it should be remembered that, for demographic reasons, their cost is likely to escalate faster than for the UK as a whole.

Already there is growing disquiet in England about free prescriptions in Scotland, free care for the elderly and no university tuition fees for Scottish-domiciled students. This is linked to the belief that Scotland gets too generous a share of funding through its block grant. How the grant is settled and whether or not it is too generous has already been discussed in Chapter 1. Differences in welfare provision that were thought to be to the advantage of the Scots would certainly aggravate this concern. If, on the other hand, the English got something that was not available in Scotland, the Scottish population would not be slow to complain.

This leads straight back to how the Scottish government is funded. The only way in which substantial differences in the benefit system between Scotland and the rest of the UK could be acceptable on both sides of the border would be if they were funded by devolved financial arrangements that people on both sides accepted as fair. It is normal in federal or quasi-federal countries for the central government to at least part-fund the budgets of the component states or regions, even if they have substantial tax powers of their own, but the arrangements need to be acceptable to all parties. In Scotland's case, the more that can be financed by taxes raised in Scotland, the more readily will differences in provision be accepted. The increased taxation powers available under the Scotland Act 2012 will go some way towards this and, if those taxation powers were increased further, as suggested in Chapter 2, that would again increase flexibility. But, for the remainder, a smaller block grant from central government would have to be based on a widely accepted system of needs assessment. If Scottish people then wanted a more generous provision of welfare, they would either have to pay more tax, perhaps with a higher rate of income tax that was directly hypothecated to the higher level of benefits, or accept that other programmes would need to be cut to provide the resources.
Even with the changes proposed above, the responsibility for at least the greater part of welfare expenditure, including the largest item, the State Pension, would remain with the central government. This may seem unsatisfactory to many people who would want to see their government set quite different priorities from those applied by UK governments. But, under any feasible devolution scheme, however much it is adjusted to give more responsibility to Scotland, it seems inevitable that the main direction of policy on welfare has to rest with the UK. Even with independence, although in theory a Scottish government could set its priorities any way it wished, the real world would impose constraints. There would still be a need to curb public expenditure to balance the budget. And the close economic integration of Scotland with the rest of the UK, which is bound to continue even if it lessened over time, would mean that significant differences in tax levels, in State Pensions and in welfare could become an issue and result in people moving to where they thought they could get the best treatment.

#### 9

#### **Conclusion**

Those supporting Scottish independence sometimes point out that few, if any, of the countries that have seceded from a larger state regret that decision. Certainly the Irish Republic would not want to come back into the United Kingdom, nor Iceland to Denmark, nor Norway to Sweden; nor probably would the countries that have left the Soviet Union want to go into union again with Russia. But, in the case of Scotland, there has been no history of exploitation or bad government, such as fuelled the drive for independence in Ireland. Scotland is a relatively well-off country and could perfectly well be independent if that is what the people choose. Its economy is certainly much stronger than Ireland's was in 1922. But that does not mean that the process of separation would be easy or painless. It would be a major upheaval with uncertain consequences. It has been the purpose of this book to try to clarify the economic options and consequences of both independence and a greater degree of devolution so that people can understand what would be involved before voting in the 2014 referendum.

Scotland is among the wealthier countries in Europe, with a GDP per head (excluding the North Sea) approximately equal to that of the UK and a very similar unemployment rate. Immigration has replaced net emigration and, on most measures, Scotland's economy is in a better relative position compared with the UK as a whole than thirty years ago. After more than 300 years of union, however, our economy has become very integrated with the rest of the UK. This applies especially to the capital and labour markets. We share many of the same institutions. Many Scottish families have a member or relative resident or working elsewhere in the UK. Like the rest of the UK, however, Scotland has been badly hit by the present recession. This is because, in the UK as in several other European countries, notably Ireland and Spain, the previous boom was fuelled by ever-expanding private debt, much of it associated with housing. When this came spectacularly to an end, the consequences and necessary adjustment were, and still are, extremely difficult and painful.
If the Scottish people decide in the 2014 referendum that they want their country to become an independent state again, the difficult circumstances of the recession, with unsustainable budget deficits and high public debt that, at the time of writing, is still rising, do not make this the best time to choose. The first major problem would be with the Scottish government's own finances. The SNP government argues that Scotland's deficit is smaller proportionately than that of the UK. But that depends on some key assumptions that are set out in Chapter 1: the first, that Scotland would get, as its geographical share, some 90 per cent of the North Sea oil revenues; and the second, that the UK national debt would be divided on a population basis, rather than on Scotland's share of UK GDP, including GDP from the North Sea. These are both subject to negotiation, the outcome of which must remain uncertain until negotiations for independence actually take place. There is, at present, no formal division of the North Sea between England and Scotland, and negotiations between the UK and other countries over their share of the continental shelf have not always been straightforward. Sometimes they are protracted and may lead to arbitration. Even if, under international rules, Scotland does get the bulk of the oil and gas revenues as expected, a decision on that could affect the share of the national debt Scotland was expected to take. And, after all that is settled, it would be up to the markets to decide what rate of interest had to be paid on Scotland's share of the national debt and on any new borrowing. Since those who own UK debt would probably not be happy with a share of it simply being transferred, the likely mechanism would involve the Scottish government floating its own debt for the amount to be transferred and then paying the proceeds to the UK government so that the appropriate share of the UK debt could be redeemed.

For many years, Scotland has had a level of public expenditure per head which has been 10 per cent or more above that of the UK. With taxation revenue per head, excluding the North Sea, about equal to that of the UK, this leaves a gap that revenue from the North Sea would be needed to fill, unless expenditure was cut sharply. Even with this, there is, at present, an unsustainable deficit, as there is also for the UK, which is why the government's programme of austerity has been necessary (although whether the UK government has got the exact balance right between austerity and growth is a matter for debate). Unfortunately, the output of both oil and gas from the North Sea is now well past its peak, although its life has been prolonged as a result of new discoveries and advances in technology. The output of both is therefore expected to continue to decline gradually. In a few years, according to a Cabinet paper leaked to _The Scotsman_, which uses estimates from the Office for Budget Responsibility, this could result in revenue from the North Sea falling sharply and Scotland's future deficit becoming proportionately worse than that of the UK. This, however, depends not only on the output but also on the price of oil, and that is extremely difficult to predict. The oil price may rise further and more discoveries may be made, offsetting some of the impact on revenue of the predicted fall in output, but this is not something to be relied on.
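The deficit comparison just described is, at bottom, a piece of arithmetic: the answer moves with the share of North Sea revenue and of UK debt interest that Scotland is assumed to take on. The sketch below sets out that calculation in schematic form; the function and every figure in it are hypothetical placeholders of my own, not estimates, and real values would have to be taken from GERS and the UK public finance statistics.

```python
# Schematic illustration of how the assumed North Sea share changes Scotland's
# estimated fiscal balance. Every figure below is a hypothetical placeholder,
# not an estimate; real values would come from GERS and UK public finance data.

def fiscal_balance(onshore_revenue, uk_oil_revenue, oil_share,
                   expenditure, uk_debt_interest, debt_share):
    """Balance (£bn) = revenue attributed to Scotland minus public spending
    minus Scotland's allocated share of UK debt interest."""
    revenue = onshore_revenue + uk_oil_revenue * oil_share
    return revenue - expenditure - uk_debt_interest * debt_share

# Purely illustrative round-number inputs (£bn), with debt interest split on a
# population basis (roughly 8 per cent) in both scenarios.
illustrative = dict(onshore_revenue=50.0, uk_oil_revenue=10.0,
                    expenditure=65.0, uk_debt_interest=50.0, debt_share=0.08)

for label, oil_share in [("geographical share (~90%)", 0.90),
                         ("population share (~8%)", 0.08)]:
    balance = fiscal_balance(oil_share=oil_share, **illustrative)
    print(f"{label}: balance = {balance:+.1f} £bn")
```

The point is not the particular numbers, which are invented, but that the conclusion about Scotland's relative deficit moves with these contested assumptions, which is why the outcome of any negotiation matters so much.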
The potential impact of 'fracking', the process of hydraulic fracturing for oil and gas that has had a big impact on energy prices in America, also makes forecasting future prices exceptionally difficult. An additional problem is that, because of both price and output variations, the revenues have been very volatile: in recent years they have swung from £12.9 billion in 2008–09, down to £6.5 billion in 2009–10 and up again to £11.3 billion in 2011–12. This could make a future Scottish budget difficult to manage.

Alex Salmond has said that, as soon as circumstances allow, an independent Scottish government would put part of the oil revenues into a special fund, as Norway has done. I welcome that. It was a great economic opportunity missed when successive UK governments failed to do this over the last 30 years. It would have transformed the UK's financial position. But, in the immediate future, Scotland would be unable to afford it. If Scotland becomes independent and if oil revenues are to be put into a special fund, its public expenditure will have to be paid for from non-oil tax revenue, which would be insufficient to cover it. Despite many assertions that Scottish control of economic levers would result in higher economic growth to pay for this, no one has really explained how that is to be achieved. Without it, cuts in expenditure would be necessary.

A large proportion of the North Sea hydrocarbon resources are in waters off the Northern Isles. Shetland and Orkney both had the foresight to set up oil funds from which they have got considerable benefit. They are now concerned about a tendency towards centralisation on the part of Holyrood governments, and their MSPs have raised issues about the islands' constitutional status within Scotland. The Scottish government would be well advised to discuss this with them.

If, on the other hand, Scotland remains in the UK, the British government is no more likely to put oil revenues into a special fund than its predecessors. I expect total Scottish public expenditure would eventually have to be brought to a level justified by a proper needs assessment so that it can be defended against criticism from other parts of the UK. But, if, as many experts expect, that did require a significant adjustment, it should be planned over a long period.

If Scotland becomes independent, the choice of currency is of crucial importance. Whereas the Scottish government was previously in favour of joining the euro, it now wishes to retain sterling. There would need to be negotiations with the rest of the UK to determine on what basis that would be acceptable. If agreement was reached, all decisions on monetary policy would remain with the Bank of England. Scotland might hope to have some influence on such decisions but, whether there was formal involvement or not, the Bank of England would be bound to set its policy primarily to meet the needs of the remainder of the UK, since it would comprise over 90 per cent of the combined economy. Furthermore, the problems in the eurozone have illustrated the difficulty of running a monetary union without close coordination of fiscal policies. It is therefore to be expected that the price Scotland would have to pay for a sterling monetary union would be control by the rest of the UK over its fiscal policy.
This would involve not only the size of any budget deficit Scotland might have but probably also the need for some tax rates to be agreed – corporation tax would be the most likely – to avoid unfair competition. Whether the monetary union would last, however, would, in the end, depend on the view taken by the markets, as the break-up of the Czech and Slovak monetary union after less than six weeks (referred to in Chapter 3) clearly illustrated.

After independence, I would expect Scotland and the rest of the UK to diverge gradually in the policies they followed. Whether the SNP or Labour was dominant in Scotland, the balance in Scottish politics would be likely to be social democratic, whereas the UK, especially without Scottish members in Parliament, would be more likely to be Conservative. Scotland may wish to become a bit more like Scandinavia, with its comprehensive welfare system and relatively egalitarian society, whereas the rest of the UK, and England especially, might put more emphasis on a low-tax, low-public-expenditure, free-market economy, like the United States. If that happens, it could put severe strain on a monetary union. For these reasons, while I think it would be sensible for an independent Scotland to remain with sterling at least initially, it might prove difficult in the long run; and, to gain freedom to follow its own policies, it might be necessary for Scotland to have its own currency. This could be pegged to either sterling or the euro but, in a serious crisis, to avoid the kinds of stresses that we have seen in the eurozone, the exchange rate could be adjusted. Any risk of exchange rate adjustment, however, would, of course, be reflected in the interest rates the market would demand on Scottish government bonds.

Because the financial sector is so important to Scotland, the government after independence would have to think carefully about how it should be handled. The collapse of Scotland's two largest banks was a disaster keenly felt by many Scots, who regarded them as part of what made Scotland distinctive. To many, they had been a source of some pride. If Scotland had been independent at the time, I believe that their problems would probably have overwhelmed the country's finances, just as the insolvency of the Irish banks did in Ireland. It is important to learn from that experience so that, if Scotland does become independent, policies are in place to ensure that it could not happen. That means not only tight regulation and not having institutions that are too big to fail but also ensuring that Scottish-based banks and other financial institutions trading outside Scotland do so through subsidiaries, rather than branches, so that they are subject to the regulations and the deposit insurance scheme of the country where they operate. Keeping sterling as the currency would probably help Scottish financial companies because, if there were to be a separate currency, some of those whose main client base was not in Scotland, such as Standard Life, might wonder whether they would do better to base their activities south of the border. There is, however, no reason why a small country cannot have a flourishing financial sector – Switzerland and Luxembourg are examples – but only the companies can say what they would do, and their needs must be taken into account in government policy.

It is clear that there would need to be negotiations for Scotland to become a full member of the European Union in its own right.
The key to a successful outcome would be the goodwill of all the 27 member states and any others, such as Croatia, that might join before negotiations started. If there was such goodwill, it might be possible, as Sir David Edward has argued, for this to be done by Treaty amendment rather than by the full process of an Accession Treaty. It might also be reasonable to expect the other member states to agree conditions that would include opt-outs from the Schengen Agreement and the euro. But any one state could exercise a veto and the risk is that a country such as Spain, worried about secession movements in its own territory, might do so to avoid a precedent being set. I see no prospect of Scotland being able to retain a share of the UK rebate. Circumstances have changed since the UK rebate was originally negotiated and most countries would like to see it ended. Scotland would get substantial payments both from the Common Agricultural Policy and the Structural Funds and, although it would be making a net contribution to the EU budget, it would probably be no higher per head than that of several other countries. The Scottish referendum will take place before the UK referendum on continued membership of the EU that the prime minister has promised if his party gains an outright majority after the next election. This commitment appears to be partly a consequence of a growing English nationalism, the most obvious manifestation of which is the rise in support for the United Kingdom Independence Party (UKIP); but there is also irritation with the rules and directives, many of which are associated with the single market. Encouraged by a strongly euro-sceptic press, opinion, especially in the south of England and in parts of the Conservative Party, is now actively hostile to the EU. That creates a difficulty for Scotland. If Scotland becomes independent and negotiates to stay in the EU but the rest of the UK then votes to leave, that could mean border posts at Gretna, Carter Bar and Berwick. The present expectation of those who wish to leave the EU, however, is that they could maintain a free-trading relationship and be within the single market, perhaps as a member of the European Economic Area (EEA). That may not be as straightforward as they assume. The other possibility is that, if Scotland votes to stay in the UK but then in the UK referendum also votes for continued EU membership while England votes to leave, it could create a difficult political situation and be a source of tension between the two governments. Membership of the EU is very important for Scotland because so many inward investing companies have chosen it as a base from which to serve the European market. If Scotland was outside the EU, it would be more difficult to attract inward investment and, depending on what agreement was reached on access to the single market, some of those already in Scotland might leave. Trade negotiations with countries outside the EU are conducted by the European Commission on behalf of all the member states. This is a major benefit. In a world that is becoming increasingly dominated by large powers such as Brazil, Russia, India and China (known as 'the BRIC countries'), as well as the United States, small European countries acting on their own would have very little clout in such negotiations. So the need for inward investment, unrestricted access to the EU single market and influence in trade negotiations all make membership of the EU of great importance to Scotland. 
If, therefore, it comes to seem increasingly likely that the UK will leave the EU, the logical consequence could be an increase in support for independence.

Even excluding North Sea oil, Scotland has energy resources that many other European countries would envy. Output of renewable energy is increasing and will supply a growing proportion of our electricity. But the visual impact of more and more wind turbines on the landscape means that they are encountering ever-stronger opposition. Apart from this, although forecasts suggest that land-based wind power may be economic by the second half of this decade, that is far from the case with offshore wind power or with wave or tidal power. More than a quarter of the electricity generated in Scotland is exported via interconnectors to England and Northern Ireland; and the subsidy for renewable energy, regardless of where it comes from, is paid by consumers throughout Britain. That is a cost which Scottish consumers alone would find excessive and which an independent Scotland probably could not afford. Whether the rest of the UK would be prepared to continue paying for it would depend on how seriously the UK government regarded its commitments to reduce carbon emissions and on whether it was possible to get cheaper supplies from elsewhere. This might be either via more investment in other parts of the UK or through the interconnectors with continental Europe.

Welfare is the biggest item of public expenditure in Scotland, and responsibility for the bulk of it is not devolved. Out of a total expenditure of £21.3 billion, the Scottish government is responsible at present for rather less than £1 billion and Scottish local authorities for about £5 billion, leaving the remainder, including expenditure by the Department for Work and Pensions and tax credits, as the responsibility of the UK government. If Scotland became independent, it would simply take all of this over, and the amount it decided to spend would depend on its priorities and what it could afford. The various proposals for devolution, however, would leave the main responsibility for welfare with the UK.

The reform of the UK welfare system that is currently taking place was badly needed because of the system's bewildering complexity and in order to reduce the poverty trap. However, it has come at a time when the government is trying desperately to cut what it spends because of its budget deficit, and its effects are already causing much anguish, especially among the poor and disabled. This looks likely to become a major issue for the government, provoking not only criticism but outright hostility and unpopularity. It could become one of the most important issues by the time of the referendum.

Scotland at present gets 7 per cent more than its population share of the UK's welfare expenditure. Since the same system is operating throughout the country, this difference is mainly explained by Scotland's demography – in particular the larger proportion of elderly people. This results in higher expenditure on the State Pension and is also the main cause of higher payments of Disability Living Allowance. The proportion of elderly people in the population is rising in all advanced countries, causing expenditure on State Pensions and on various welfare benefits to rise; but the expectation is that this will be more marked in Scotland than in the UK as a whole.
This is because the dependent proportion of the population is rising more rapidly north of the Border and total population growth is slower there than in the parts of the UK where there has been a large flow of immigrants.

None of the proposals for further devolution, other than Devo-Max, advocates devolving responsibility for the State Pension or benefits for those out of work. The reason is the view that different rates for these programmes would be unacceptable. However, I argued in Chapter 8 that responsibility for some benefits, amounting to about 25 per cent (£3.5 billion) of expenditure by the UK Department for Work and Pensions, might be considered for transfer to the Scottish government, although it should be borne in mind that, because of Scotland's demography, this could be a burden that rises more rapidly than for the UK as a whole. Furthermore, if such a transfer is not to play into the hands of people in other parts of the UK who already think Scotland's block grant is too generous, it would need to be accompanied by increased responsibility for taxation.

In Chapter 2, I argued that three quarters, though not all, of income tax might be devolved, in addition to the smaller taxes in the 2012 Act and some of those referred to by the Campbell Committee. I also suggested that most of the proceeds of VAT could be assigned but not devolved (a possibility mentioned in the Calman report); this is because EU rules do not allow different rates within one member state. Many people regard assignment of revenues, where rates cannot be altered, as pointless. But it would make it clear to those elsewhere in the UK that a much higher proportion of public expenditure – over half, including the additional expenditure on welfare – was paid for by taxes raised in Scotland. Such an arrangement would leave Scotland more exposed to fluctuations in revenue caused by changing economic conditions, and it might therefore be necessary to increase the Scottish government's power to borrow. But it would also give the Scottish government the benefit of the growth of its economy and encourage the adoption of policies to achieve that. The block grant would be much smaller but, to avoid it still becoming an issue of contention, there would need to be a plan to move it gradually, when circumstances made it possible, on to a system based on an agreed assessment of needs.

So there is a lot to think about if Scotland becomes independent – many of the unknowns would depend on the outcome of negotiation. Some of these – the choice of currency in particular – are of enormous importance. Much could go wrong, and it is impossible at this stage to know whether the added flexibility in policy that independence would bring would make Scotland stronger in the long run, or whether people would be worse off. Inevitably, it would be a bumpy ride at first and, for many people, disillusioning until things got a chance to settle down and those responsible for government had learnt what they could and could not do. After Ireland became independent in 1922, it was a long time – at least a generation – before policies were adopted that began to bring the country to the high level of prosperity it was able to achieve and, despite the severe effects of the financial crisis, still has. I would not expect Scotland to take anything like as long, but it would take some time. There has been a tendency in some quarters to think that North Sea oil revenues will pay for everything. That is clearly not so.
At the time of writing, a majority vote for independence in the 2014 referendum looks unlikely. But a lot can happen in 18 months, and the UK coalition government, as it struggles to rein in its budget deficit, may become increasingly unpopular. If independence is rejected, however, there is a real danger that politicians at Westminster and officials in Whitehall may think that they can put away the files and not worry about Scotland any more. Proposals for increased devolution might then be shelved. That is quite a likely outcome but it would be a huge mistake. It would probably mean that the next time there was a big surge in support for independence in Scotland, maybe in ten or twenty years' time, it would carry the day in a second referendum. That has been the pattern in the past over devolution. The 1970s devolution referendum was inconclusive, but twenty years later, after discontent with Scotland's constitutional arrangements had been ignored, there was a clear majority in favour.

If Scotland is to stay in the UK in the longer term, something must be done to meet the aspirations of those who do not vote for independence but want a greater degree of devolution. Some of what has been suggested, in Devo-Max for example, seems to me to be incompatible with remaining within the UK. But there are things that could be done to strengthen devolution. Entrenching the Scottish Parliament, so that it could not be abolished on a whim of Westminster, and making it sovereign in those matters it controls, was the suggestion of the Devo-Plus group and also of the Campbell Committee. As suggested above, more taxation powers than those resulting from the 2012 Act might be devolved and, with them, a part of responsibility for welfare policy.

But I suspect that some at least of the discontent that has led to a desire for independence, or more devolution, stems from a general undefined feeling of injustice as a result of the increasing dominance of London both politically and economically in the United Kingdom. Specifically, there has been uneasiness that the dominance of the London financial sector has been accompanied by the decline of industries elsewhere. This feeling is probably even stronger in parts of the north of England and Wales, regions that have done less well economically than Scotland. It points to an urgent need to rebalance the British economy, both geographically and in its structure, so that more emphasis is placed again on manufacturing and less on banking and finance. The policy of trying to promote growth in the regions outside London, on which much emphasis was laid in the 1960s and 1970s, was greatly weakened in the 1980s, when regional development policy was regarded as no longer fitting with the then government's free market philosophy. At the same time, deregulation, including the so-called 'Big Bang', removed most of the previous restrictions on the financial sector. The financial sector has contributed much to the economy in both employment and tax revenue but, in 2008, it nearly brought the country to ruin, and we are still suffering from the effects. It is time to alter the balance for the sake of all parts of the United Kingdom.

#### **Notes**

**Chapter 1 – How Well Off Are We?**

Scottish Executive, _Scottish Abstract of Statistics_, No 5 (Edinburgh, 1975).
The Scottish Office, _Scottish Economic Bulletin_ (various years).
_Scottish Abstract of Statistics_ (various years).
For estimates of GDP per head in 1960 and throughout the 1950s, see my _Scotland's Economic Progress 1951–1960_ (London, George Allen and Unwin, 1965), pp. 32–6.
As reported on 20 March 2013.
OECD statistics of GDP per head in purchasing power parity. Data may also be obtained from IMF and the World Bank, which include more countries.
Scottish Government, _Government Expenditure and Revenue Scotland 2011–12_ (March 2013).
Alex Kemp, _The Official History of North Sea Oil and Gas_ (Abingdon, Routledge, 2012).
Commission on Devolution in Wales (Silk Commission), _Empowerment and Responsibility: Financial Powers to Strengthen Wales_ (Cardiff, November 2012); NI Department of Finance & Personnel, _Northern Ireland Net Fiscal Balance Report 2009–10 and 2010–11_ (Bangor, County Down, November 2012).
Scottish Government, op. cit., p. 45.
I analysed this subject in detail in 'Scotland's Public Finances from Goschen to Barnett', _Fraser of Allander Institute Quarterly Economic Commentary_, Vol. 24, No. 2 (March 1999).
_Central Scotland: A programme for Development and Growth_, Cmnd 2188 (London, HMSO, November 1963) and _The North-East: A Programme for Development and Growth_, Cmnd 2206 (London, HMSO, November 1963).
Scottish Government, _Government Expenditure and Revenue Scotland 2011–2012_ (Edinburgh, March 2013).
Scottish Government, op. cit., p. 40.
Commission on Scottish Devolution (Calman Commission), _Serving Scotland Better: Scotland and the United Kingdom in the 21st Century_ (June 2009). Independent Commission on Funding and Finance for Wales (Holtham Commission), _Fairness and Accountability: A New Funding Settlement for Wales_ (Cardiff, 2010).
Oral evidence taken before the committee on 17 April 2012.
Scottish Government, _A National Conversation – Your Scotland, Your Voice_ (November 2009), p. 38 ff.

**Chapter 2 – Devo-Max, Devo-Plus and the Status Quo**

HM Government, _Strengthening Scotland's Future_, Cm 7973, TSO (2012).
Commission on Scottish Devolution, Final Report, _Serving Scotland Better: Scotland and the United Kingdom in the 21st Century_ (June 2009).
_Strengthening Scotland's Future_, op. cit., p. 23.
Ibid., p. 25.
Ibid., p. 23.
_Strengthening Scotland's Future_, op. cit., pp. 36–40.
Report of a committee under the chairmanship of Sir Menzies Campbell, 'Federalism: the Best Future for Scotland' (Scottish Liberal Democrats, 2009).
_Your Scotland, Your Voice_, op. cit.
Available on the website of the David Hume Institute.
Ibid.
Andrew Hughes Hallett and Drew Scott, _Scotland a New Fiscal Settlement_, GMU School of Public Policy Research Paper No. 2010–15 (3 June 2010).
Reform Scotland, _A New Union_ (Edinburgh, Third Report of the Devo-Plus Group, 2012).
Scottish Liberal Democrats, op. cit.
Alan Trench, _Devo-More: Fiscal Options for Strengthening the Union_ (IPPR, January 2013).
Scottish Government, _Fiscal Autonomy in Scotland_ (Edinburgh, 2009).
Independent Commission on Funding and Finance for Wales, _Fairness and Accountability: A New Funding Settlement for Wales_ (July 2010).

**Chapter 3 – The Scope for an Independent Economic Policy**

Scottish Government, _Corporation Tax: Discussion Paper. Options for Reform_ (August 2011).
_Government Expenditure and Revenue Scotland 2011–2012_, Scottish Government (Edinburgh, March 2013).
Scottish Government, Fiscal Commission Working Group, _First Report – Macroeconomic Framework_ (Edinburgh, 2013).
_The Irish Times_ (9 August 1938).
Conor McCabe, _Sins of the Father: Tracing the Decisions that Shaped the Irish Economy_ (Dublin, The History of Ireland Press, 2011).
Fiscal Commission, op. cit.
Reported in _The Scotsman_ on 21 February 2013.
Ray Perman, _Hubris: How HBOS Wrecked the Best Bank in Britain_ (Edinburgh, Birlinn Ltd, 2012).
IMF, _World Economic Outlook_ (October 2010).
IMF, _Fiscal Monitor Update_ (July 2012).
Dawn Holland and Jonathan Portes, 'Self-defeating Austerity', _National Institute Economic Review_, No. 222 (October 2012).

**Chapter 4 – Scotland and Europe**

John Kerr (Lord Kerr of Kinlochard), 'Don't Count on It: Scotland if independent could not assume that rejoining the EU would be easy – or cheap', _Prospect Magazine_ (23 January 2013). See also a fuller version on the Scottish Constitutional Forum Blog (30 January 2013).
Sir David Edward's view is posted on the Scottish Constitutional Forum Blog (17 December 2012).
HM Government, _Scotland Analysis: Devolution and the Implications of Scottish Independence_, Cm 8554 (February 2013).
European Commission, _EU Budget 2011: Financial Report_ (Brussels, 2012).
HM Treasury, _European Union Finances 2012_, Cm 8405 (July 2012).
European Commission, op. cit.
Committee of Inquiry into _The Future of Scotland's Hills and Islands_ (Royal Society of Edinburgh, September 2008).
See the report of the Royal Society of Edinburgh Inquiry which I chaired, _The Future for Scotland's Hills and Islands_ (Edinburgh, 2008).
The problems of fisheries policy were thoroughly analysed in the report of the Royal Society of Edinburgh's _Inquiry into the Future of the Scottish Fishing Industry_ (March 2004).
Many of these issues were discussed in the Royal Society of Edinburgh's _Inquiry into the Future of the Scottish Fishing Industry_, of which I was vice chairman (March 2004).
See, for instance, Jean-Claude Piris's excellent book, _The Future of Europe_ (Cambridge, Cambridge University Press, 2012).

**Chapter 5 – Could an Independent Scotland Have Handled the Failure of the Banks?**

Ray Perman, _Hubris: How HBOS Wrecked the Best Bank in Britain_, op. cit.
Robert Peston and Laurence Knight, _How Do We Fix This Mess?_ (London, Hodder & Stoughton, 2012).
His readiness to lend is well set out in Robert Peston's _Who Runs Britain?_ (London, Hodder and Stoughton, 2008).
Roger Boyes, _Meltdown Iceland_ (London, Bloomsbury, 2009).
David J. Lynch, _When the Luck of the Irish Ran Out_ (New York, Palgrave Macmillan, 2010) and Conor McCabe, _Sins of the Father_, especially Chapter 5.
Final Report of the Independent Commission on Banking, Chairman Sir John Vickers, 12 September 2011.
Peston and Knight, _How Do We Fix This Mess?_
The details and comparisons are fully set out in my book with Mark Stephens, _Housing Policy in Britain and Europe_ (London, UCL Press, 1995).

**Chapter 6 – Scotland's Energy Future**

Scottish Environmental Protection Agency, _State of Scotland's Environment 2006_ (Stirling, October 2006).
www.sepa.org.uk, _A Climate Change Plan_.
Lord Nicholas Stern, _The Economics of Climate Change: The Stern Review_ (Cambridge, Cambridge University Press, 2007).
Scottish Government, _Energy in Scotland: A Compendium of Energy Statistics_ (May 2012).
Royal Society of Edinburgh, _Inquiry into Energy Issues for Scotland_ (June 2006).
Department of Energy and Climate Change, _Energy Trends_ (December 2012).
House of Commons Select Committee on Energy and Climate Change, oral evidence from Dr David Kennedy, Chief Executive, Committee on Climate Change (19 November 2012).
Scottish Hydropower Resource Study (2008).
Reported in _The Scotsman_, 10 January 2013.
International Energy Agency, _World Energy Outlook 2012_.
House of Commons Select Committee on Energy and Climate Change, Mr Francis Egan of Cuadrilla Resources in oral evidence (11 December 2012).

**Chapter 7 – North Sea Oil – the Mishandling of an Opportunity**

Scottish Government, _Government Expenditure and Revenue Scotland 2011–2012_ (Edinburgh, March 2013).
This is based on the median line in the North Sea between England and Scotland used in estimates made by Alex Kemp and Linda Stephen of Aberdeen University. Scotland's share depends not only on how the line is drawn but on the price of oil and the output of particular fields in any one year. See Alex Kemp's memorandum submitted to the House of Commons Select Committee on Energy and Climate Change, Session 2011–2013.
For example, _The Observer_ on two successive Sundays in February 1974 in the run-up to the election.
Alex Kemp, _The Official History of North Sea Oil and Gas_.
Ibid. and House of Commons Select Committee on Energy and Climate Change, oral evidence taken on 17 April 2012.
Based on OECD figures for GDP (see Chapter 1).
Unlike my earlier 1974 paper, this one has not been made public.
Alex Kemp, _The Official History of North Sea Oil and Gas_.
Ibid., Vol. 1, pp. 584–95.
As reported in _The Scotsman_, 28 February 2013.
_The Scotsman_ (16 January 2013).

**Chapter 8 – Welfare and Inequality**

John Curtice and Rachel Ormiston, _Attitudes towards Scotland's Constitutional Future_, Scottish Social Attitudes Survey (ScotCen, January 2013).
Professor Bell's paper is available on the David Hume Institute website.
Department for Work and Pensions, _Universal Credit: Impact Assessment (IA)_ (December 2012).
Jim McCormick's report, 'Welfare "Reform" and Mitigation in Scotland', for the Scottish Council for Voluntary Organisations (January 2013).
David Bell, op. cit.
Hannah Aldridge, Peter Kenway and Tom MacInnes, 'Monitoring Poverty and Social Exclusion Scotland 2013' (Joseph Rowntree Foundation, 2013).
Jonathan Cribb, Robert Joyce and David Phillips, 'Living Standards, Poverty and Inequality in the UK' (Joseph Rowntree Foundation for the Institute for Fiscal Studies, 2012).
Ibid.
Lesley Riddoch writing in _The Scotsman_ has argued for the Scandinavian model and Joyce McMillan for a more equal society.
See, for example, Alan Trench, 'Funding Devo-More: Fiscal Options for Strengthening the Union' (IPPR, January 2013).

**Chapter 9 – Conclusion**

Reported in _The Scotsman_, 7 March 2013.
See John Kay's article in _The Scotsman_ (7 March 2013) and his excellent chapter in _Scotland's Future: the Economics of Constitutional Change_ (Dundee, Dundee University Press, 2013).
Commission on Scottish Devolution, _Serving Scotland Better_ (Edinburgh, 2009), p. 97.
Expenditure and Revenue Scotland ref 1 Gigha ref 1 Glass-Steagall Act ref 1 Gross National Income ref 1 contribution to EU budget ref 1 Gross National Product ref 1 Goodwin, Fred ref 1, ref 2 Greece ref 1, ref 2 Gross Value Added see GDP Halifax Building Society ref 1, ref 2, ref 3 HBoS, insolvency of ref 1, ref 2 Health ref 1 HM Revenue and Customs ref 1, ref 2 HM Treasury ref 1, ref 2, ref 3, ref 4, ref 5 Highlands & Islands Development Board/Highlands & Islands Enterprise ref 1, ref 2, ref 3 Holland, Dawn and Portes, Jeremy ref 1 Holtham Commission ref 1, ref 2, ref 3, ref 4 Hongkong & Shanghai Banking Corporation ref 1 Hornby, Andy ref 1, ref 2 House of Commons Committee on Energy and Climate Change ref 1, ref 2, ref 3, ref 4, ref 5 Housing Benefit ref 1, ref 2, ref 3, ref 4 Housing policy ref 1, ref 2, ref 3 associations ref 1 rented sector ref 1 Right to Buy ref 1 social housing ref 1, ref 2 Hughes Hallet, Andrew and Scott, Drew ref 1 Iceland ref 1, ref 2 failure of banks ref 1 proposed electricity cable to Scotland ref 1 Identifiable expenditure ref 1, ref 2 Incapacity Benefit ref 1 Income Support ref 1, ref 2 Income tax (under 2012 Act) ref 1, ref 2, ref 3 Inflation ref 1 Industrial Injuries Benefit ref 1 Institute for Fiscal Studies ref 1 Insurance Premium Tax ref 1, ref 2 International Energy Agency ref 1 International Monetary Fund ref 1, ref 2 Infrastructure investment ref 1 Inward investment ref 1, ref 2, ref 3 Institute for Public Policy Research ref 1ff Ireland ref 1, ref 2, ref 3, ref 4, ref 5, ref 6,ref 7, ref 8, ref 9, ref 10, ref 11 Central Bank Act ref 1 First and Second Banking Commissions ref 1 Government seeks bail-out ref 1 Isle of Man ref 1 Italy ref 1, ref 2, ref 3, ref 4 Jobseeker's Allowance ref 1 Joseph Rowntree Foundation ref 1 Kay, John ref 1 Kemp, Alex ref 1, ref 2, ref 3, ref 4 and Linda Stephen ref 1 Kerr, Lord John of Kinlochard ref 1 Landfill tax ref 1, ref 2 Lender of last resort ref 1, ref 2 Liechtenstein ref 1 Linwood ref 1 Lloyds TSB (takeover of HBoS) ref 1 Local authorities ref 1, ref 2, ref 3, ref 4, ref 5 Luxembourg ref 1 McArthur, Liam ref 1 McCabe, Conor ref 1 Maternity Allowance ref 1 Migration ref 1, ref 2, ref 3, ref 4, ref 5 benefit migration ref 1 Monetary policy ref 1 Monetary union ref 1, ref 2, ref 3 Mortgages, securitisation of ref 1 sub-prime ref 1 National Debt ref 1, ref 2 interest on ref 1 ratio to GDP ref 1, ref 2, ref 3 National Institute of Economic and Social Research ref 1 National Insurance ref 1 NatWest ref 1 Netherlands ref 1 Needs assessment ref 1, ref 2, ref 3, ref 4, ref 5 Non-tariff barriers ref 1 North Sea oil and gas ref 1, ref 2, ref 3, ref 4, ref 5, ref 6, ref 7, ref 8, ref 9, ref 10, ref 11, ref 12,ref 13, ref 14 decommissioning costs ref 1 exports of crude oil ref 1 forecasts of production and revenue ref 1 Northern Ireland ref 1, ref 2 exports of electricity to ref 1 Northern Rock ref 1 Norway ref 1, ref 2, ref 3, ref 4 Nordic Passport Union ref 1 oil fund ref 1, ref 2 Nuclear power stations ref 1 Office for Budget Responsibility ref 1 Oil and gas prices ref 1, ref 2, ref 3 Oil fund, the case for ref 1, ref 2, ref 3 Shetland and Orkney oil funds ref 1 Orkney ref 1, ref 2, ref 3 Osborne, George ref 1 Parliamentary Commission on Bank Standards ref 1 Peat, Jeremy xiv Pension Credit ref 1 Perman, Ray ref 1 Personal Independence Payment ref 1 Personal and nursing care, free ref 1, ref 2, ref 3, ref 4 Peston, Robert ref 1, ref 2 Poland ref 1 Population, growth of ref 1 Portugal ref 1 Poverty and inequality ref 1 trap 
  trap ref 1, ref 2
Public expenditure ref 1, ref 2, ref 3, ref 4, ref 5, ref 6, ref 7
Prescriptions, free ref 1, ref 2
Purvis, Jeremy ref 1
Purvis Group ref 1
Rating agencies ref 1
Reform Scotland ref 1
Regional development policy ref 1, ref 2, ref 3
Renewable energy ref 1, ref 2
Renewable Obligation (Scotland) ref 1, ref 2
Royal Society of Edinburgh
  Inquiry into Energy Issues for Scotland ref 1
  Inquiry into the Future of Scotland's Fishing Industry ref 1
Royal Bank of Scotland ref 1, ref 2
  rescue of ref 1
Santander ref 1, ref 2
Saudi Arabia, oil production of ref 1
Schengen Agreement ref 1, ref 2
Scotland Act 1998 ref 1
Scotland Act 2012 ref 1, ref 2, ref 3, ref 4
Scotland's electricity ref 1
  exports of ref 1
  hydro-electricity ref 1
  nuclear power ref 1
  power stations ref 1, ref 2
  pumped storage ref 1, ref 2
  wind power ref 1
Scottish and Southern Electricity ref 1, ref 2
Scottish Council of Voluntary Organisations ref 1
Scottish Development Agency/Scottish Enterprise ref 1, ref 2, ref 3
Scottish Environmental Protection Agency ref 1
Scottish Government Fiscal Commission ref 1
Scott, Tavish ref 1
Scottish Parliament ref 1, ref 2, ref 3, ref 4
Scottish Power ref 1
Scottish Social Attitudes Survey ref 1, ref 2
Shell ref 1
Shetland ref 1, ref 2
  Charitable Trust ref 1, ref 2
  PURE project ref 1
  under-sea cable from ref 1
Silk Commission ref 1
Single market ref 1, ref 2, ref 3
Slovakia ref 1, ref 2, ref 3
Social protection ref 1, ref 2, ref 3, ref 4, ref 5, ref 6, ref 7
Spain ref 1, ref 2, ref 3, ref 4, ref 5, ref 6
Stamp Duty Land Tax ref 1, ref 2
Standard Chartered Bank ref 1
State pension ref 1, ref 2, ref 3, ref 4
Sterling exchange rate ref 1
Sterling monetary union ref 1
Stern, Lord Nicholas – review ref 1
Stiglitz, Joe ref 1
Sullom Voe ref 1
Sweden ref 1, ref 2, ref 3, ref 4
Switzerland ref 1, ref 2, ref 3
Tebbit, Lord Norman ref 1
Thatcher, Lady Margaret ref 1, ref 2
Treaty of Accession ref 1, ref 2, ref 3
Treaty of Lisbon ref 1
Trench, Alan ref 1, ref 2
Trump, Donald ref 1
Unemployment – peaks at 14% ref 1
United States of America ref 1, ref 2, ref 3, ref 4, ref 5
Universal Credit ref 1, ref 2
University tuition fees ref 1, ref 2
Value Added Tax ref 1, ref 2, ref 3, ref 4
Vickers Commission ref 1
Viking Energy ref 1
Wales ref 1, ref 2, ref 3, ref 4, ref 5
Welfare, see social protection
Western Isles ref 1
Widow's Benefit ref 1
Windfarms ref 1, ref 2, ref 3
Wilson, Brian ref 1

#### **Endnotes**

- GDP relates to output, including sales taxes but not any subsidies, whereas GVA is output excluding indirect taxes but including any subsidies. As indirect taxes are more important than subsidies, GDP figures for Scotland are somewhat higher than GVA. Official statistics now most frequently use GVA whereas, in earlier years, only GDP was available. The reader may find this confusing – some of the comparisons are made in GDP and some in GVA but that is how they are published by the government statisticians.
- This effect was, of course, compounded by the very tight monetary policy adopted by the UK government in the early 1980s.
- GNI is net of income paid and received from abroad. This is important as, although it has never been calculated for Scotland, it would certainly be lower than GDP, if GDP included Scotland's geographical share of the North Sea and the profits of all overseas companies operating both offshore and onshore in Scotland.
- Iceland has applied for full membership of the EU. But in view of the importance to it of its fishing industry and its experience in the financial crisis, there are important issues to be considered.
- landfill gas, sewage gas, other bioenergy
- excluding pumped storage
- A term used in the industry, 'oil equivalent' means oil plus natural gas liquids all converted to the equivalent in oil.
- This is known as the 'government pension fund – global' – _Statens pensjonsfond – Utland_ or SPU.
- Identifiable expenditure excludes defence, foreign embassies, interest on the National Debt and other items that are costs for the UK as a whole and cannot be allocated to a particular part of the UK.
- excludes international services, defence and debt interest
- including enterprise and economic development, agriculture, forestry, fishing, employment policies, science and technology and transport
- excludes tax credits
{ "redpajama_set_name": "RedPajamaBook" }
6,187
{"url":"https:\/\/mathoverflow.net\/questions\/289560\/what-is-an-instanton-in-classical-gauge-theory-to-a-mathematician","text":"# What is an \u201cInstanton\u201d in classical gauge theory? (to a mathematician)\n\nThere's already a question about the same topic but I think its aim is different.\n\nClassical (non-quantum) gauge theory is a completely rigorous mathematical theory. It can be phrased in completely differential-geometric terms (where the main players are bundle with connections on a manifold).\n\nI think I have a basic understanding of what gauge theory is about and what various words mean in this context (yang mils, potential, energy, etc...). However I have still not managed to figure out what \"Instanton\" means in this context.\n\nWhat is an Instanton?\n\nIs it something special to Yang-Mils theory? Is it something special to Quantum Gauge theory? Are there any mathematical interpretations\/applications for Instantons?\n\nA linguistic remark: \"Instantons\" are the same mathematically to \"solitons\", particle-like solutions of classical field theories (explaining the suffix \"on\"). Unlike solitons, instantons are structures in time (explaining the prefix \"instant\").\n\nA mathematical remark (using Donaldson's book on Yang-Mills Floer homology, appendix C of section 2.8), which supplements Igor Khavkine's answer:\n\nConsider the Yang-Mills equations over the (3+1)-dimensional spacetime $Y\\times\\mathbb{R}$, using the Lorentzian metric $dy^2-dt^2$ (this is the \"real life\" physical picture). Yang-Mills solutions are solutions to the Euler-Lagrange equations $d_A^\\ast F_A=0$ of the Yang-Mills functional $\\int_{Y\\times\\mathbb{R}}(|E|^2-|B|^2)$, where we have decomposed $F_A=\\ast B+E\\wedge dt$. These solutions can be viewed as paths $[A_t]$ in the configuration space $\\mathcal{B}_P$ of (gauge equivalence classes of) connections on the principal bundle $P\\to Y$. In this viewpoint, $B$ is the curvature of $A_t$ (on $Y$), and $E$ is the velocity vector of the path $[A_t]\\subset\\mathcal{B}_P$, and the Yang-Mills functional is thus $\\int(||\\nabla_tA_t||^2-V(A_t))dt$ with $V(A_t)=\\int_Y|F_{A_t}|^2$. That means the 4-dimensional Lorentzian Yang-Mills solutions can be regarded as the motions of a particle moving on $\\mathcal{B}_P$ in the potential $\\int_Y|F_A|^2$.\n\nHowever, instantons are Yang-Mills solutions for the Euclidean metric. In the above picture, that means we need to reverse the sign of the potential, and we lose our physical description of particles. By the way, so far we have been describing the first paragraph of Igor Khavkine's answer. Let's move on to his second paragraph:\n\nIf we are to relate instantons with a physical description of particles, then we need pass to quantum mechanics on $\\mathcal{B}_P$. We look for wavefunctions that are energy eigenstates for the potential $V=\\int_Y|F_A|^2$, i.e. solutions to Schrodinger's equation on $\\mathcal{B}_P$. Instantons will approximate these solutions. If the energies are greater than $V$, we have our usual classical picture of a ball rolling over a hill, but if the energies are less than $V$ ($E_0<V$) then we have \"quantum tunnelling\". Clarifying, the \"leading order\" approximation of Schrodinger's equation (the instantons) in these classically inaccessible regions will be given by trajectories of particle motions on $\\mathcal{B}_P$ with energy $-E_0$ in the potential $-\\int_Y|F_A|^2$.\nThis is really cool... 
In the toy model of a double-well potential (see Wikipedia's article on instantons), instantons are the solutions which tunnel from well to well. Mathematically that means we have a path $[A_t]\\subset\\mathcal{B}_P$ of connections which are asymptotic to flat connections on both ends (as $t\\to\\pm\\infty$), and this is the setup for Yang-Mills Floer homology!\n\n\u2022 Do you know if it is possible to formulate all this in such way (possibly involving complexification of the whole setup, I have actually no idea since I am a complete diletant here) that there will be analogy between the self-duality condition for instantons and Yang-Mills Floer homology vs harmonicity condition for differential forms and de Rham cohomology? \u2013\u00a0\u10db\u10d0\u10db\u10e3\u10d9\u10d0 \u10ef\u10d8\u10d1\u10da\u10d0\u10eb\u10d4 Dec 31 '17 at 5:18\n\nBy itself, a (Yang-Mills) instanton is a classical concept. It is a solution of the classical Yang-Mills equations (considered on a manifold with a Riemannian, rather than a Lorentzian, metric), such that the classical Yang-Mills action functional evaluated on this solution is finite (not divergent). Also, the concept of instanton is not restricted to Yang-Mills type gauge theories and applies to other kinds of field theories as well.\n\nThough classical, instantons have applications in quantum theory, at least heuristically. When the path integral (still in Riemannian\/Euclidean signature) is formally considered in the saddle point approximation, instantons are precisely the saddle points which determine the asymptotics of the approximation.\n\n\u2022 See MR0598562 Atiyah, M. F.; Hitchin, N. J.; Drinfel\u02b9d, V. G.; Manin, Yu. I. Construction of instantons. Phys. Lett. A 65 (1978), no. 3, 185\u2013187. (for mathematician). \u2013\u00a0Alexandre Eremenko Dec 30 '17 at 14:00\n\nGenerally speaking, you could say they are a special type of solution to the field equations of gauge theories. More specifically, an instanton is a classical solution in a classical Euclidean field theory with finite non-zero action.\n\nThe name is due to the fact that they happen for an 'instant' (a point) of Euclidean time and so they are important in the path-integral formulation of a theory which uses Euclidean signature and as critical points. As other commentators have mentioned, in QFT we are generally talking about the Yang-Mills instanton (where a Yang-Mills theory is a QFT with a non-abelian gauge group). For a mathematical interpretation, the instanton solution of the Euclidean Yang-Mills equation leads an $SU(2)$ fibre bundle over $S^{4}$ but it can also be proved that any finite action solution of the Euclidean Yang-Mills equations leads to a fibre bundle over the four-sphere (see Uhlenbeck, 1979).\n\nIf we consider the best-known example, we take a pure Yang-Mills theory with symmetry group $SU(2)$ in $R^{4}$ with Euclidean signature. The equations of motion (ie. the Yang-Mills equations) are $D*F=0$ and $DF=0$. When we introduce the condition called the anti-self-duality equation, these equations reduce to ODEs for the gauge potential $A$. If we make an ansatz for the solution to the anti-self-duality equation which only differs from a pure gauge by a function of $r$ at infinite radius, we guarantee that our solutions have finite, non-zero action. 
Our ansatz for the gauge field is such that it becomes a pure gauge as $r$ tends to infinity and the associated field strength disappears, meaning that the action is finite.\n\nIf we choose the appropriate gauge transformation and use this to evaluate the field strength, we then obtain an equation whose full-solution is the 'instanton potential' which is regular on all of $R^{4}$. The action is equal to $-8 \\pi^{2}\/g^{2}$, so it is obviously finite. See Gockeler and Schucker (1987) for more on the fibre bundle interpretation: this looks at other related fibre bundle structures such as the Dirac monopole. The fibre bundle structure of the Dirac monopole is extremely similar (basically the same) as the Yang-Mills instanton fibre bundle. (I only add this as seems like you were after a more interesting mathematical interpretation).\n\nFollowing this logic we can attempt to construct a 'gravitational instanton' ie. we look for a metric with Euclidean signature described locally by an orthonormal frame and solve the Einstein field equations without matter. We end up with a solution such that $g$ and $f$ tend to $1$ as $r$ tends to infinity: this is similar to the Yang-Mills case where the Yang-Mills instanton potential becomes a pure gauge as $r$ tends to infinity. This is similar to the way that gravity ends up being 'analogous' to other gauge theories, rather than directly comparable, since gravity does not quantize well.","date":"2019-07-19 11:28:00","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 1, \"mathjax_display_tex\": 0, \"mathjax_asciimath\": 0, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.9306515455245972, \"perplexity\": 376.87416919862494}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2019-30\/segments\/1563195526210.32\/warc\/CC-MAIN-20190719095313-20190719121313-00479.warc.gz\"}"}
null
null
The very effective snap-off steering wheel . . . This is how I make the Spider impossible to steal and drive. And it's fun too when people start looking at your Alfa Romeo. No wheel, no steal! Here's an instructional video. Very smooth and elegant. The snap-on lock is sold by www.Snap-Off.com in Great Britain. It was invented in Sweden by Mr. Johan Rosenlund, www.snapoff.com, who still has them for sale. Johan is the inventor, with a passion for racing. There is also another Swedish vendor, www.janolisamotorsport.com, who ships all over the globe. The anti-theft Snap-Off system, combined with SWM hub kits and sport steering wheels, allows the easy, quick release and removal of the steering wheel when the car is parked. The wheel can then be easily and safely remounted when you drive your car again. What better deterrent can you have than totally removing your steering wheel, confusing and discouraging the thief and diverting him to a car without the so-called Snap-Off? The system is easy to assemble and needs no servicing, and Snap-Off also lets you choose a sport steering wheel from the wide range of SWM wheels. The anti-theft Snap-Off is covered by European and American patents. I just snap the steering wheel on when I want to drive. Once it is removed, it is not possible to drive away. It looks very strange in the parking lot and you will meet new friends all the time. And the Spider will remain where you left her. I use my Momo Super Indy wheel and carry it in a bag. No problems! This arrangement will fit any steering key shaft. BTW, if you have an air-bag, sorry, it won't work here. Just good old 105s will do.

Using a vice-grip will be very complicated for a would-be thief. First the villain has to spot your car, then figure out how to steal it. Then he has to go away to get a vice-grip. When he returns, you will probably be gone with your car. If your pearl is still there in the parking lot, the villain will have some real hardship driving away with the odd steering wheel. Remember, the vice-grip is not the nasty thing wrestlers use before the DSQ. It is a special tool, though not that common.

If you have a Burman recirculating box, it can be adjusted as well. There's an ovalish-shaped plate on top of the box, held on by two bolts. Underneath is a spring and a stack of shims. Play is adjusted by varying the spring tension via the shimmed height of the plate. Don't worry about the spring flying out; it only sticks out a bit. If your box is full of oil, you might want to suction out a few ounces, since the hole is "below sea level" when full. Take off the plate and start removing shims till you get a bit of tension, then add one back so there isn't load on the internals. This is best done with the front end up on jacks, so the wheels can move freely and you can feel the threshold of tightening up.
{ "redpajama_set_name": "RedPajamaC4" }
952
<?xml version="1.0" encoding="UTF-8"?> <monster name="Gang Member" nameDescription="a gang member" race="blood" experience="70" speed="200" manacost="450"> <health now="295" max="295"/> <look type="151" head="114" body="19" legs="42" feet="20" corpse="20403"/> <targetchange interval="4000" chance="5"/> <flags> <flag summonable="0"/> <flag attackable="1"/> <flag hostile="1"/> <flag illusionable="0"/> <flag convinceable="1"/> <flag pushable="1"/> <flag canpushitems="1"/> <flag canpushcreatures="0"/> <flag staticattack="90"/> <flag targetdistance="1"/> <flag runonhealth="35"/> </flags> <attacks> <attack name="melee" interval="2000" min="0" max="-70"/> <attack name="physical" interval="2000" chance="15" range="7" min="0" max="-25"> <attribute key="shootEffect" value="throwingknife"/> </attack> </attacks> <defenses armor="15" defense="15"/> <elements> <element deathPercent="-5"/> </elements> <voices interval="5000" chance="10"> <voice sentence="This is our territory!"/> <voice sentence="Help me guys!"/> <voice sentence="I don't like the way you look!"/> <voice sentence="You're wearing the wrong colours!"/> <voice sentence="Don't mess with us!"/> </voices> <loot> <item id="2148" countmax="30" chance="50000"/> <!-- gold coin --> <item id="2649" chance="15000"/> <!-- leather legs --> <item id="2389" chance="8700"/> <!-- mace --> <item id="2691" chance="5000"/> <!-- brown bread --> <item id="2468" chance="5000"/> <!-- studded legs --> <item id="2209" chance="790"/> <!-- club ring --> </loot> </monster>
{ "redpajama_set_name": "RedPajamaGithub" }
9,332
<HTML><HEAD> <TITLE>Review for Leaving Las Vegas (1995)</TITLE> <LINK REL="STYLESHEET" TYPE="text/css" HREF="/ramr.css"> </HEAD> <BODY BGCOLOR="#FFFFFF" TEXT="#000000"> <H1 ALIGN="CENTER" CLASS="title"><A HREF="/Title?0113627">Leaving Las Vegas (1995)</A></H1><H3 ALIGN=CENTER>reviewed by<BR><A HREF="/ReviewsBy?Andrew+Hicks">Andrew Hicks</A></H3><HR WIDTH="40%" SIZE="4"> <PRE> LEAVING LAS VEGAS A film review by Andrew Hicks Copyright 1997 Andrew Hicks</PRE> <PRE>(1995) ***1/2 (out of four)</PRE> <P> I don't think there's been a movie since SCHINDLER'S LIST, or at least DEAD MAN WALKING, that I've been so involved in that it was nearly impossible to get through. Adapted from the John O'Brien novel, LEAVING LAS VEGAS is a portrait of depression that wraps us up in two people, making us know, care and sympathize with them, then leaves us helpless in watching them destroy themselves. This uncompromising Mike Figgis film is an incredible emotional experience that leaves out all the usual Hollywood trappings, save its two stars.</P> <P> Nicolas Cage plays a writer who has been leaving reality behind, on an escalating basis, since his wife left him. It's to the point where, from the moment he gets up in the morning until he passes out at night, he's in an alcohol-induced stupor and can't allow himself to sober up. So, when he embarrasses himself in front of some colleagues (including one of his fellow DRUNKS, Richard Lewis) and is fired from his job, he sells or burns everything he owns and heads to Las Vegas. He may not have much left, but with the Vegas gamblers, mafiosos, drunks and hookers, at least he'll be in good company.</P> <P> Cage's plan is to just keep drinking until he dies and, indeed, after one painful bank scene, we never see him sober again. Most of the time, he's guzzling down vodka or tequila like it's Evian water. In Vegas, he meets Elisabeth Shue, a street hooker he takes back to his $29 motel room. She wants to blow him, he wants to talk and it turns out she's just as lonely as he is. And she has her share of problems with abusive pimp Julian Sands slashing her butt cheeks after she returns from a night of work with less cash than required, plus it seems people just don't respect her profession. Hey, people, she works hard for the money!</P> <P> They work out a beneficially mutual relationship. She has someone to keep her warm nights and he has someone to take care of him -- feed him, clean up after him, direct him to the nearest place he can puke -- and both accept each other without trying to change them. Unfortunately, this means Shue will just have to sit by and watch as Cage's liver gets closer and closer to that of David Crosby and that Cage will have to share Shue with anyone who can cough up a couple hundred bucks, or $175 on double coupon day. But, for each of them, the other is the only caring person they know.</P> <P> Cage won an Oscar for LEAVING LAS VEGAS, and a completely deserved one at that. He's always fun in action movies and light comedies, but it's almost inconceivable that this is the same guy who starred in AMOS AND ANDREW and CON-AIR. His performance, in conveying the depression and hopelessness of his character while still making him seem human and likeable, in various states of drunkenness no less, goes far beyond anything else Cage has achieved. Shue, as the damned-if-she-do-damned-if-she-don't hooker with a heart of gold, is amazing too. 
This is a career-reviving performance for the woman who seemed so innocent in fluff like ADVENTURES IN BABYSITTING.</P> <P> The relationship Cage and Shue have with each other is a Catch-22 in which disappointment and loss are inevitable, and once we as viewers realize this, we set ourselves up for the same frustration Shue's character must have, in caring but being unable to stop the path of destruction. LEAVING LAS VEGAS is utterly absorbing because the acting is so incredible, the characters compelling and the writing superior, but it's hard to keep watching because it seems so real and so sad. That was probably O'Brien's goal; he wasn't a happy man himself. He committed suicide shortly after selling the movie rights to LEAVING LAS VEGAS. There's probably a really sad movie in that story too.</P> <PRE>--</PRE> <P>Visit the Andrew Hicks: Movie Critic at LARGE homepage at <A HREF="http://www.missouri.edu/~c667778/movies.html">http://www.missouri.edu/~c667778/movies.html</A></P> <P>Serving The World For Nearly 1/25th of a Century!</P> <HR><P CLASS=flush><SMALL>The review above was posted to the <A HREF="news:rec.arts.movies.reviews">rec.arts.movies.reviews</A> newsgroup (<A HREF="news:de.rec.film.kritiken">de.rec.film.kritiken</A> for German reviews).<BR> The Internet Movie Database accepts no responsibility for the contents of the review and has no editorial control. Unless stated otherwise, the copyright belongs to the author.<BR> Please direct comments/criticisms of the review to relevant newsgroups.<BR> Broken URLs in the reviews are the responsibility of the author.<BR> The formatting of the review is likely to differ from the original due to ASCII to HTML conversion. </SMALL></P> <P ALIGN=CENTER>Related links: <A HREF="/Reviews/">index of all rec.arts.movies.reviews reviews</A></P> </BODY></HTML>
{ "redpajama_set_name": "RedPajamaGithub" }
655
using Cassandra.DataStax.Insights.Schema.StartupMessage;
using Cassandra.SessionManagement;

namespace Cassandra.DataStax.Insights.InfoProviders.StartupMessage
{
    /// <summary>
    /// Builds the auth-provider section of the Insights startup message by
    /// reporting the runtime type of the configured authentication provider
    /// (namespace and type name only; no credentials are included).
    /// </summary>
    internal class AuthProviderInfoProvider : IInsightsInfoProvider<AuthProviderInfo>
    {
        public AuthProviderInfo GetInformation(IInternalCluster cluster, IInternalSession session)
        {
            // Identify the configured auth provider via reflection on its concrete type.
            var type = cluster.Configuration.AuthProvider.GetType();
            return new AuthProviderInfo
            {
                Namespace = type.Namespace,
                Type = type.Name
            };
        }
    }
}
{ "redpajama_set_name": "RedPajamaGithub" }
9,085
Q: How does one reduce and mutate/change string entries of an array based on common substring patterns? I have an array of string items ... [ 'Mon : 9:00AM - 7:00PM', 'Tue : 9:00AM - 10:00PM', 'Wed : Closed', 'Thu : 9:00AM - 7:00PM', 'Fri : 9:00AM - 7:00PM', 'Sat : Closed', 'Sun : Closed', ] ... and I want to achieve a result like the one below ... [ 'Mon: 9:00AM - 7:00PM', 'Tue: 9:00AM - 10:00PM', 'Wed: Closed', 'Thu-Fri: 9:00AM - 7:00PM', 'Sat-Sun: Closed', ] Any help is really appreciated. A: * *Firstly one needs to separate the day value from the hours value part of a single opening hours string. * *This either can be achieved via indexOf, substring and trim ... function splitOpeningHoursEntry(entry) { // e.g.: 'Mon : 9:00AM - 7:00PM' const indexOfColon = entry.indexOf(':'); // e.g. 5 // entry.substring(0, 5) ... e.g.: 'Mon ' const day = entry.substring(0, indexOfColon); // entry.substring(6) ... e.g.: ' 9:00AM - 7:00PM' const hours = entry.substring(indexOfColon + 1); // e.g.: ['Mon', '9:00AM - 7:00PM'] return [day.trim(), hours.trim()]; } *... or it can be done by split-ting with a Regular Expression like ... /(^[^:\s]+)\s*:\s*/ ... and slice-ing the results array ... function regexSplitOpeningHoursEntry(entry) { // [https://regex101.com/r/vGRck7/3] // entry.split(':') // ["Mon ", " 9", "00AM - 7", "00PM"] // [https://regex101.com/r/vGRck7/2] // entry.split(/\s*:\s*/) // ["Mon", "9", "00AM - 7", "00PM"] // [https://regex101.com/r/vGRck7/1] // entry.split(/(^[^:\s]+)\s*:\s*/) // ["", "Mon", "9:00AM - 7:00PM"]; return entry.split(/(^[^:\s]+)\s*:\s*/).slice(1); } *Then one has to map an entire array of opening hours strings into an array of arrays, where each array-item contains the day value as first and the hours value as second array item ... either like this ... sampleList.map(splitOpeningHoursEntry); ... or like that ... sampleList.map(regexSplitOpeningHoursEntry); *On top one needs to reduce this array of splitted [<day>, <hours>] entries into its compact form ... *Finally one has to map each splitted [<day>, <hours>] entry with a concatenation task back into its human readable string form.... const sampleList = [ 'Mon : 9:00AM - 7:00PM', 'Tue : 9:00AM - 10:00PM', 'Wed : Closed', 'Thu : 9:00AM - 7:00PM', 'Fri : 9:00AM - 7:00PM', 'Sat : Closed', 'Sun : Closed', ]; function splitOpeningHoursEntry(entry) { // e.g.: 'Mon : 9:00AM - 7:00PM' const indexOfColon = entry.indexOf(':'); // e.g. 5 // entry.substring(0, 5) ... e.g.: 'Mon ' const day = entry.substring(0, indexOfColon); // entry.substring(6) ... e.g.: ' 9:00AM - 7:00PM' const hours = entry.substring(indexOfColon + 1); // e.g.: ['Mon', '9:00AM - 7:00PM'] return [day.trim(), hours.trim()]; } function regexSplitOpeningHoursEntry(entry) { // [https://regex101.com/r/vGRck7/3] // entry.split(':') // ["Mon ", " 9", "00AM - 7", "00PM"] // [https://regex101.com/r/vGRck7/2] // entry.split(/\s*:\s*/) // ["Mon", "9", "00AM - 7", "00PM"] // [https://regex101.com/r/vGRck7/1] // entry.split(/(^[^:\s]+)\s*:\s*/) // ["", "Mon", "9:00AM - 7:00PM"]; return entry.split(/(^[^:\s]+)\s*:\s*/).slice(1); } function compactOpeningHoursEntries(compactEntries, splitEntry, idx, arr) { // get the predecessor item of the currently // processed `splitEntry` item or default to []. const prevSplitEntry = arr[idx - 1] || []; // get the successor item of the currently // processed `splitEntry` item or default to []. 
const nextSplitEntry = arr[idx + 1] || []; if (prevSplitEntry[1] !== splitEntry[1]) { // in case the previous and current `hours` values do not match ... // ... push the current entry of splitted `day` and `hours` // values into `compactEntries` which is the accumulating // array of the compacted form of all opening hours entries. compactEntries.push(splitEntry); } else if (nextSplitEntry[1] !== splitEntry[1]) { // ... or in case the next and current `hours` values do not match ... const lastCompactEntry = compactEntries[compactEntries.length - 1]; // ...retrieve the first and the last day value // of a compactly written day-range format... const firstDayInRange = lastCompactEntry[0]; const lastDayInRange = splitEntry[0]; // ...and create and rewrite its compact form // as the compacted entry's final day value. lastCompactEntry[0] = firstDayInRange + '-' + lastDayInRange; } return compactEntries; } function concatOpeningHoursEntry([day, hours]) { return `${ day }: ${ hours }`; } // First one needs to separate the `day` from the // `hours` part of a single opening hours string console.log( "splitOpeningHoursEntry('Mon : 9:00AM - 7:00PM') ...", splitOpeningHoursEntry('Mon : 9:00AM - 7:00PM') ); console.log( "regexSplitOpeningHoursEntry('Mon : 9:00AM - 7:00PM') ...", regexSplitOpeningHoursEntry('Mon : 9:00AM - 7:00PM') ); // Then one does map an entire array of opening hours strings // into an array of arrays, where each array item contains the // `day` value as first and the `hours` value as second array item. console.log( '... list item `split` mapping ... ', sampleList .map(splitOpeningHoursEntry) //.map(regexSplitOpeningHoursEntry) ) // On top one has to `reduce` this array of splitted // `[<day>, <hours>]` entries into its compact form. console.log( '... list item `split` mapping and split entry reducing ... ', sampleList .map(splitOpeningHoursEntry) .reduce(compactOpeningHoursEntries, []) ); // Finally one needs to `map` each splitted `[<day>, <hours>]` entry // with a concatenation task back into its human readable string form. console.log( '... list item `split` mapping, reducing and a final concatenation mapping ... ', sampleList .map(splitOpeningHoursEntry) .reduce(compactOpeningHoursEntries, []) .map(concatOpeningHoursEntry) ); .as-console-wrapper { min-height: 100%!important; top: 0; } Another less talkative proof of concept ... 
function splitOpeningHoursEntry(entry) { return entry.split(/(^[^:\s]+)\s*:\s*/).slice(1); } function concatOpeningHoursEntry([day, hours]) { return `${ day }: ${ hours }`; } function compactOpeningHoursEntries(compactEntries, splitEntry, idx, arr) { const prevSplitEntry = arr[idx - 1] || []; const nextSplitEntry = arr[idx + 1] || []; if (prevSplitEntry[1] !== splitEntry[1]) { compactEntries.push(splitEntry); } else if (nextSplitEntry[1] !== splitEntry[1]) { const lastCompactEntry = compactEntries[compactEntries.length - 1]; const firstDayInRange = lastCompactEntry[0]; const lastDayInRange = splitEntry[0]; lastCompactEntry[0] = firstDayInRange + '-' + lastDayInRange; } return compactEntries; } console.log([ 'Mon : 08:00AM - 17:00PM', 'Tue : 08:00AM - 17:00PM', 'Wed : 08:00AM - 17:00PM', 'Thu : 10:00AM - 14:00PM', 'Fri : 10:00AM - 14:00PM', 'Sat : Closed', 'Sun : Closed', ], '=>', [ 'Mon : 08:00AM - 17:00PM', 'Tue : 08:00AM - 17:00PM', 'Wed : 08:00AM - 17:00PM', 'Thu : 10:00AM - 14:00PM', 'Fri : 10:00AM - 14:00PM', 'Sat : Closed', 'Sun : Closed', ] .map(splitOpeningHoursEntry) .reduce(compactOpeningHoursEntries, []) .map(concatOpeningHoursEntry) ); .as-console-wrapper { min-height: 100%!important; top: 0; }
{ "redpajama_set_name": "RedPajamaStackExchange" }
2,614
\section{Introduction} Color centers in diamond combine optical addressability with long spin coherence times, making them promising candidates for repeater-based quantum networks ~\cite{atature2018material}. Recent proof-of-principle demonstrations of key ingredients for quantum networks using color centers in diamond include on-demand remote entanglement generation ~\cite{humphreys2018deterministic}, coherent control of multiple nearby nuclear spin memories ~\cite{bradley2019ten}, entanglement distillation ~\cite{kalb2017entanglement}, and memory-enhanced quantum communication ~\cite{bhaskar2020experimental}. However, building larger quantum networks that incorporate more than two nodes and span longer distances will require new technological breakthroughs. Currently known platforms suffer from either degraded optical coherence when incorporating into nanophotonic devices ~\cite{chu2014coherent, ruf2019optically}, or limited electron spin coherence times of milliseconds combined with the requirement to work at millikelvin temperatures ~\cite{sukachev2017silicon}. The recently reported SiV$^0$ center in diamond has the potential to overcome many of these challenges ~\cite{rose2018observation,green2017neutral,rose2018strongly,zhang2020optically}. The unique combination of stable optical transitions and long spin coherence times at liquid helium temperature makes the SiV$^0$ center an attractive building block for nodes in quantum networks. One proposal for enhancing the entanglement generation rate in color-center-based quantum networks is to integrate color centers with nanophotonic devices. In particular, optical cavities greatly enhance atom-photon interaction, which improves spin readout and spin-photon entanglement fidelity. Recent progress in diamond nanofabrication techniques has enabled record single-emitter cooperativities up to 105 ~\cite{bhaskar2020experimental}, as well as interfaces with multiple color centers ~\cite{evans2018photon}. Furthermore, nanophotonic devices can enable other functionality such as on-chip quantum frequency conversion (QFC), which is key to achieving long-distance quantum communication. The small mode volume offers an opportunity for highly efficient nonlinear optical interactions even with low pump powers, which increases the signal-to-noise ratio for single photon level signals \cite{li2016efficient}. Monolithic fabrication techniques of diamond nanophotonic cavities require milling or etching bulk single crystal diamond, which leads to low yield ~\cite{nguyen2019integrated} and precludes the on-chip integration of photonic components with other functionality, such as QFC and active devices. Furthermore, there is currently no method for high purity, wafer-scale synthesis of single crystal diamond ~\cite{nelz2019toward}, limiting the scalability of this approach. By contrast, nanofabrication techniques in III-V semiconductor systems are fairly mature. Nanophotonic components with diverse functionalities have been fabricated in III-V semiconductors such as gallium arsenide (GaAs) \cite{dietrich2016gaas} and gallium phosphide (GaP) \cite{Wilson2020}. GaAs photonic crystal cavities with quality factors exceeding $10^4$ have been demonstrated at wavelengths close to the zero-phonon line of the SiV$^0$ center ($\sim$ 946 nm) \cite{hennessy2007quantum, englund2007controlling}. More recently, surface passivation schemes helped to push the quality factors of state-of-the-art cavities to be above $10^5$ \cite{kuruma2020surface}. 
Other recent efforts in GaP photonics have been spurred by its large optical nonlinearities and transparency over a wide wavelength range ~\cite{Wilson2020}. GaP cavities with high quality factors have been reported using microdisks ~\cite{mitchell2014cavity}, one-dimensional photonic crystal cavities ~\cite{schneider2019optomechanics} and ring resonators ~\cite{Wilson2020}. A promising method to mitigate the constraints imposed by diamond nanofabrication is heterogeneous integration of diamond and a separate device layer material ~\cite{wan2019large}. Instead of fabricating devices directly on single crystal diamond, the photonic device is fabricated in a high-index photonic layer on top of the diamond substrate such that photons can evanescently couple to color centers that are close to the diamond surface (Fig.\ref{fig:schematic}). This scheme has previously been used to demonstrate Purcell enhancement of optical emission from the negatively charged nitrogen vacancy (NV) center in diamond ~\cite{englund2010deterministic, barclay2009hybrid,gould2016efficient} and Er$^{3+}$ ions in yttrium orthosilicate ~\cite{dibos2018atomic, raha2020optical, chen2020parallel}. For NV centers, previous work has involved fabricating GaP waveguides or microdisks on top of diamond, and then subsequently etching into the diamond to realize a high quality factor structure ~\cite{barclay2009hybrid, barclay2011hybrid, gould2016efficient}. This subsequent etching step leads to significant spectral diffusion of the NV center optical transition \cite{ruf2019optically, schmidgall2018frequency}. \begin{figure}[ht!] \centering\includegraphics[width=\textwidth]{Figure1.pdf} \caption{Schematic illustration of the hybrid III-V diamond photonic platform. III-V material (pink) is patterned on top of the diamond substrate (grey). The SiV$^0$ center near the diamond surface is evanescently coupled to a nanobeam photonic crystal cavity with a width of $w_\text{y}$ and a thickness of $w_\text{z}$. The emitted photon is routed to an on-chip frequency conversion module, where FWM-BS scheme is used to translate the signal to single photon at the telecommunication C-band.} \label{fig:schematic} \end{figure} Here, we develop a design for heterogeneously integrated nanophotonic devices to build quantum nodes based on the SiV$^0$ center, using a hybrid III-V diamond platform. One-dimensional photonic crystal cavities enhance the optical emission from single SiV$^0$ centers, and the emission can then be routed on-chip to a microresonator-based frequency converter based on a four-wave mixing Bragg scattering (FWM-BS) scheme. In contrast to previous demonstrations, our design does not require etching into the diamond, avoiding deleterious effects on the color center. \section{Photonic crystal cavity design} Photonic crystal cavities provide mode confinement to a small volume on the order of a cubic wavelength, enabling enhanced coupling between a single color center and the cavity mode \cite{lodahl2015interfacing}. The key figure of merit is the Purcell enhancement factor of the spontaneous emission rate, $P$, which is given by $P = 4g(\vec{r})^2/\kappa \Gamma_0$. 
Here, the rate of interaction between the cavity mode and a single SiV$^0$ center at the position $\vec{r}$ is characterized by the single-photon Rabi frequency, $g(\vec{r}) = \vec{\mu}\cdot\vec{E}(\vec{r})/\hbar$, where $\vec{\mu}$ is the electric dipole moment of the optical transition for the SiV$^0$ center and $\vec{E}(\vec{r})$ is the electric field strength at the position of the SiV$^0$ center associated with a single photon in the cavity mode. The cavity decay rate, $\kappa$, is related to the quality factor of the cavity, Q, and the cavity resonance frequency, $\omega$, by $\kappa = \omega/Q$. The spontaneous emission rate of the optical transition of interest, $\Gamma_0$, is an intrinsic property of the SiV$^0$ center that can be characterized independently of the cavity. This points to two parameters to optimize to build an efficient spin-photon interface: high Q and concentrated electric field with a good overlap between the color center and the cavity mode to maximize $\vec{E}(\vec{r})$. Here, we optimize the design for a heterogeneously integrated GaP-on-diamond photonic crystal cavity. Suspended GaP photonic crystal cavities with high Q resonances at telecommunication wavelengths have been demonstrated ~\cite{schneider2019optomechanics}. The large bandgap of GaP minimizes absorption losses at near-infrared wavelengths and allows for cavities with high Q. The diamond substrate has a moderately high refractive index ($n_\text{Dia} \sim 2.4$), presenting a design challenge for achieving high Q. In contrast to suspended photonic crystal devices, a diamond substrate expands the light cone into which photons can radiate out of the cavity [Fig.\ref{fig:mpb_design}(c)]. We propose a one-dimensional photonic crystal nanobeam cavity, where high Q resonances can be supported. For realistic beam width and height, the proposed heterogeneously integrated structure supports resonant modes with quality factors $\sim 10^{5}$, on par with state-of-the-art freestanding GaP photonic crystal cavities. \begin{figure}[ht!] \centering\includegraphics[width=\textwidth]{Figure2.pdf} \caption{Design of a GaP nanobeam cavity on a diamond substrate. (a) Mirror strength for different values of $w_\text{y}$ and $w_\text{z}$ with $n_{\text{eff}} = 2.55$. (b) Maximum mirror strength at different $n_{\text{eff}}$ values. (c) Band diagram for a nanobeam waveguide with $(w_\text{y}, w_\text{z}, h_\text{x}, h_\text{y}, a) = (400, 380, 59, 128, 184)$ nm. TE modes and TM modes are indicated by solid black lines and dashed blue lines respectively. The region above the diamond light line is shaded in grey and the fundamental TE bandgap is indicated by the pink shaded area. The target frequency is marked by the dashed red line. (d) Normalized electric field $|E|^2$ distribution (left) in the yz-plane at x = 0 with a line cut of the electric field at y = 0 nm (right). (e) Normalized $E_\text{y}$ field profile of the fundamental cavity mode of the GaP nanobeam cavity. The xy slice is taken at the center of the cavity (i.e. z = 190 nm).} \label{fig:mpb_design} \end{figure} We first consider a GaP nanobeam with waveguide width $w_\text{y}$ and waveguide thickness $w_\text{z}$. As illustrated by Fig.\ref{fig:schematic}, the high refractive index of GaP ($n_\text{GaP} \sim 3.13$) provides the optical mode confinement along the y- and z-direction.
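As a brief numerical aside (a sketch added for orientation, not part of the original analysis), the Python snippet below evaluates $\kappa = \omega/Q$ for a representative quality factor of $10^5$ at the SiV$^0$ zero-phonon line and inverts $P = 4g(\vec{r})^2/\kappa\Gamma_0$ to show the single-photon Rabi frequency required for a given Purcell factor. The value of $\Gamma_0$ is the bulk SiV$^0$ decay rate quoted later in the text; the quality factor is an assumed round number.

\begin{verbatim}
import numpy as np

# Representative values (assumptions for illustration, not simulation output)
c = 2.99792458e8             # speed of light (m/s)
lam = 946e-9                 # SiV0 zero-phonon line wavelength (m)
Q = 1e5                      # assumed cavity quality factor
Gamma0 = 2 * np.pi * 88e6    # bulk SiV0 spontaneous emission rate (rad/s)

omega = 2 * np.pi * c / lam  # cavity resonance angular frequency (rad/s)
kappa = omega / Q            # cavity energy decay rate (rad/s)

def g_required(P):
    """Single-photon Rabi frequency g giving Purcell factor P = 4 g^2 / (kappa * Gamma0)."""
    return np.sqrt(P * kappa * Gamma0) / 2.0

for P in (1, 100, 743):
    print(f"P = {P:4d}  ->  g / 2pi = {g_required(P) / (2 * np.pi) / 1e9:.2f} GHz")
\end{verbatim}

With these assumed numbers, $g/2\pi$ of a few hundred MHz already gives $P>1$, and the Purcell factor of 743 quoted later in the text corresponds to $g/2\pi$ of roughly 7 GHz.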
The cavity is defined by a one-dimensional lattice of elliptical air holes with lattice constant $a$ and diameters $h_\text{x}$ and $h_\text{y}$ along the x- and y-directions respectively [Fig.\ref{fig:mpb_design}(e), inset]. Starting with the design that maximizes the bandgap, the nanobeam cavity is formed by locally perturbing the periodic array to form a defect \cite{quan2011deterministic}. An adiabatic shift of the band-edge frequencies moves the target frequency from the dielectric band-edge at the defect to the middle of bandgap in the Bragg mirror regions. This can be achieved by either tapering the size of the holes or the lattice constant. As an example, we employ the latter approach to design a symmetric cavity where the lattice constant is gradually increased from $a_\text{cav}$ in the middle of the cavity to $a_\text{mirr}$ in the mirror regions on both sides. To find the design with maximum mirror strength, we choose a lattice constant according to $a = \lambda_0/2n_{\text{eff}}$, where $\lambda_0$ is the target wavelength and $n_{\text{eff}}$ is the effective mode index of the cavity. In the case of the GaP-on-diamond system, the effective indices of the cavity modes have to lie in between the refractive index of diamond and that of GaP, i.e. $n_{\text{eff}} \in (2.4,3.13)$. Using MIT Photonic Bands ~\cite{johnson2001block}, we calculate the band structure of the GaP nanobeam on a diamond substrate for different structural parameters. As modes above the diamond light line are unconfined, we are only interested in those below the light line at $k_\text{x} = \pi / a$. We define the mirror strength as the separation between the target frequency and the nearest quasi-transverse-electric (TE) guided mode normalized by the target frequency. The TE modes and the quasi-transverse-magnetic (TM) modes are defined as modes with most of their electric field components along the y-axis and the z-axis respectively. Fig.\ref{fig:mpb_design}(a) shows the largest mirror strength at different combinations of $w_\text{y}$ and $w_\text{z}$ for $n_{\text{eff}} = 2.55$. At each point on the two-dimensional sweep, we optimize the mirror strength with different combinations of hole diameters (i.e. $h_\text{x}$ and $h_\text{y}$). By repeating the two-dimensional sweep for different $n_{\text{eff}}$ values, we could identify the cavity design with the maximum mirror strength at the target frequency. Fig.\ref{fig:mpb_design}(b) shows the largest mirror strength for different $n_{\text{eff}}$ values. As $n_{\text{eff}}$ gets closer to the refractive index of diamond ($n_\text{Dia} \sim 2.4$), the mirror strength is limited by leaky modes above the diamond light line. On the other hand as $n_{\text{eff}}$ increases, more modes are pushed beneath the light line, and the mirror strength decreases as the modes are pushed closer together. The optimal design (i.e. $n_{\text{eff}} = 2.55$) has an air band-edge frequency slightly below the diamond light line with structural parameters $(w_\text{y}, w_\text{z}, h_\text{x}, h_\text{y}, a)$ = (400, 380, 59, 128, 184) nm. For the cavity design with maximum mirror strength, we plot the band diagram for different modes in Fig.\ref{fig:mpb_design}(c). The desired resonant frequency is in the middle of the fundamental TE bandgap. We achieve a bandgap of $\sim 11\%$. In our cavity design, we keep structural parameters $w_\text{y}$, $w_\text{z}$, $h_\text{x}$ and $h_\text{y}$ to be the same across the entire nanobeam. 
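The design rule $a = \lambda_0/2n_{\text{eff}}$ used in this sweep can be sanity-checked in a few lines. The snippet below is an illustrative aside, not the band-structure calculation itself (which requires MIT Photonic Bands); it simply shows that $n_{\text{eff}}$ near 2.55 reproduces a mirror period close to the quoted 184 nm.

\begin{verbatim}
lam0 = 946e-9  # target wavelength (m)

def bragg_lattice_constant(n_eff):
    """Half-wavelength design rule a = lambda0 / (2 * n_eff)."""
    return lam0 / (2 * n_eff)

for n_eff in (2.45, 2.55, 2.65):
    print(f"n_eff = {n_eff:.2f}  ->  a = {bragg_lattice_constant(n_eff) * 1e9:.1f} nm")
\end{verbatim}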
By varying the lattice constant from $a_\text{cav}$ = 171 nm in the middle of the cavity to $a_\text{mirr}$ = 184 nm in the mirror region, we adiabatically move the dielectric band-edge such that the target resonance frequency is in the middle of the fundamental TE bandgap. The Purcell enhancement factor depends on the overlap between the transition dipole moment for the SiV$^0$ center and the resonant mode. To examine the efficacy of our cavity design approach, we study the cavity mode profile using three-dimensional finite-difference time-domain (FDTD) simulations. As an example, we perform FDTD simulations on a cavity where the lattice constant is parabolically tapered from $a_\text{mirr}$ to $a_\text{cav}$ across 10 holes on a single side (20 holes in total in the cavity region). In order to further increase the mirror strength, we add 20 Bragg mirror holes on both ends of the cavity. The normalized $E_\text{y}$ field components for the fundamental localized cavity mode are plotted in Fig.\ref{fig:mpb_design}(e). Most of the field strength is concentrated in the adiabatically tapered cavity region. The electric field strength at the position of the SiV$^0$ center varies depending on its location in the diamond substrate [Fig.\ref{fig:mpb_design}(d), left]. At around 50 nm into the diamond substrate, where we observed SiV$^0$ centers with stable optical properties and long spin coherence \cite{rose2018observation}, the electric field energy density, $|E(\vec{r})|^2$, is $9.6\%$ of its maximum value in the center of the cavity [Fig.\ref{fig:mpb_design}(d), right]. \begin{figure}[ht!] \centering\includegraphics[width=\textwidth]{Figure3.pdf} \caption{(a) Geometry of the symmetric photonic crystal cavity design. In the cavity region, the hole spacing is adiabatically tapered from $a_\text{cav}$ to $a_\text{mirr}$. In the mirror regions on both sides, the hole spacing is held constant at $a_\text{mirr}$. The schematic shows a design with 20 cavity holes and 12 mirror pairs. Dependence of the GaP nanobeam (b) and the GaAs nanobeam (c) cavity Q on the number of mirror pairs for 10 (purple), 12 (red), 16 (green), 20 (orange) and 30 (blue) holes in the cavity region.} \label{fig:FDTD_sim} \end{figure} We evaluate the scaling of Q with different design parameters using 3D FDTD. The nanobeam cavity can be divided into a cavity region and two Bragg mirror regions [Fig.\ref{fig:FDTD_sim}(a)]. In the cavity region, the lattice constant tapers parabolically from $a_\text{cav}$ in the middle to $a_\text{mirr}$ on both ends. As the cavity is symmetric with respect to x = 0, the total number of holes in the entire cavity region is twice the number of holes on a single side. For simplicity, we will refer to different designs based on the total number of holes in the cavity region in the following discussions. The mirror segments consist of periodic arrays of holes where the lattice constant is fixed at $a_\text{mirr}$. Since our cavity design is symmetric, an additional mirror hole on one end of the cavity implies an extra mirror hole on the opposite side. We define the addition of a pair of Bragg mirror holes as an additional "mirror pair". First we calculate the Q for different cavity designs. Fig.\ref{fig:FDTD_sim}(b) shows the scaling of Q with the number of mirror pairs. For small numbers of holes in the mirror regions, the Q of the cavity increases with the mirror strength. At higher numbers of mirror holes, Q is limited by radiation loss in the cavity region. 
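A minimal sketch of one way to generate the tapered hole spacings described above is shown below; the quadratic interpolation between $a_\text{cav}$ and $a_\text{mirr}$ is an assumption consistent with the phrase ``parabolically tapered'' and is not the authors' actual layout script.

\begin{verbatim}
def half_cavity_spacings(a_cav=171e-9, a_mirr=184e-9, n_taper=10, n_mirror=20):
    """Hole-to-hole spacings for one half of the symmetric nanobeam.

    The spacing grows quadratically from a_cav at the cavity centre to a_mirr
    at the end of the taper (10 holes per side here), then stays at a_mirr
    through the Bragg mirror section.
    """
    taper = [a_cav + (a_mirr - a_cav) * (i / (n_taper - 1)) ** 2
             for i in range(n_taper)]
    return taper + [a_mirr] * n_mirror

print([round(a * 1e9, 1) for a in half_cavity_spacings()[:10]])  # taper values in nm
\end{verbatim}

Mirroring this list about the cavity centre would give a full hole pattern of the kind described above.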
In order to further push up the quality factor, we taper the lattice constant over more holes in the cavity region. For 30 cavity holes, the Q is saturated over $10^6$ with 20 mirror pairs, and for 20 cavity holes, the saturated Q exceeds $10^5$, which is the current demonstrated state of the art in suspended GaP cavities ~\cite{schneider2019optomechanics}. Our method for designing hybrid photonic platforms can be applied to other material systems with a higher refractive index than diamond. Using the aforementioned design principles, we demonstrate that a similar evanescent coupling scheme can be realized using GaAs-on-diamond. Epitaxially grown GaAs has a higher refractive index ($n_\text{GaAs} \sim 3.55$) compared to GaP. The optimized GaAs nanobeam cavity has structural parameters $(w_\text{y}, w_\text{z}, h_\text{x}, h_\text{y}, a_\text{cav}, a_\text{mirr})$ = (350, 220, 70, 136, 162, 180) nm with a bandgap of $\sim 15.7\%$ in the mirror region. Using 3D FDTD simulations, we can similarly evaluate the scaling of Q for different GaAs photonic crystal cavity designs [Fig.\ref{fig:FDTD_sim}(c)]. We find that for the optimized structure, a smaller number of mirror pairs is required to saturate the Q because the mirror strength is larger. For the optimized cavity designs, we can evaluate the expected Purcell factor for SiV$^0$ located at different locations. After taking local-field correction of the spontaneous emission rate into account \cite{dung2006local}, we can extract the dipole moment for the SiV$^0$ center ($\vec{\mu}$) using the following expression: \begin{equation} \label{spontaneous_decay} \Gamma_0 = \frac{1}{\beta} \cdot \left ( \frac{3n_{\text{Dia}}^2}{2n_{\text{Dia}}^2+1} \right )^2 n_{\text{Dia}} \cdot \frac{|\vec{\mu}|^2 \omega^3}{3\pi \epsilon_0 \hbar c^3} \end{equation} \noindent where $\beta$ is the fraction of the decay rate caused by spontaneous emission at the transition coupled to the cavity. Here, we make the assumptions that the SiV$^0$ center can be treated as a perfect two-level atom and $\beta = 1$. Using the bulk spontaneous emission rate for SiV$^0$ centers $\Gamma_0 = 2\pi \times 88$ MHz \cite{rose2018observation}, we arrive at the calculated $|\vec{\mu}| = 6.027 \times 10^{-29}$ C-m. If we further assume that the dipole moment is parallel to the electric field polarization of the cavity resonant mode, we can estimate the Purcell factor. The inset of Fig.\ref{fig:Purcell_variation}(a) shows the distribution of expected Purcell factor inside the diamond substrate for GaP-on-diamond. At a depth of 50 nm, the maximum Purcell factor is 743 for a GaP nanobeam cavity with 20 cavity holes and a saturated quality factor of $\sim 10^5$. We also study the sensitivity of Purcell factor to displacements in the xy-plane. The maximum Purcell factor can be achieved when the SiV$^0$ is placed directly under the electric field energy density maximum. As Fig.\ref{fig:Purcell_variation}(a) shows, the Purcell factor is more sensitive to misalignments along the nanobeam direction (i.e. x-direction). Similar results can be obtained for a GaAs nanobeam cavity with 20 cavity holes and a saturated quality factor of $\sim 3.5 \times 10^5$ [Fig.\ref{fig:Purcell_variation}(b)]. As we will discuss in Section \ref{section:discussion}, a Purcell factor above 200 for GaP-on-diamond and a Purcell factor above 1000 for GaAs-on-diamond are practical and attainable using widely available fabrication techniques. \begin{figure}[ht!] 
\centering\includegraphics[width=\textwidth]{Figure4.pdf} \caption{Variations of expected Purcell factors for GaP cavity design with 20 cavity holes and 20 mirror pairs (a) and GaAs cavity design with 20 cavity holes and 12 mirror pairs (b). At a depth of 50 nm in the diamond substrate, the maximum attainable Purcell factor for GaP design is 743 and that for GaAs design is 3921. The displacements, in the x-direction (blue) and y-direction (orange), are defined relative to the optimal position. (Inset) Distribution of Purcell factor inside the diamond substrate, y-z plane at x = 0.} \label{fig:Purcell_variation} \end{figure} \section{On-chip quantum frequency conversion} \label{section:QFC} Low-loss optical fibers provide a scalable way to connect nodes in a long-distance quantum network. To utilize the fiber network, the emission wavelengths of color centers in diamond need to be shifted to overlap with the transparency window of silica fibers. QFC of single photons requires simultaneously achieving high conversion efficiency and low noise. Despite these challenges, recent demonstrations were able to show that the spin-photon entanglement is preserved after the QFC process for NV centers in diamond ~\cite{Dreau2018,Tchebotareva2019}. These experiments used free-space optics in combination with centimeter-long, second-order optically nonlinear ($\chi^{(2)}$) waveguides to perform spectral translations of single photons from the NV emission wavelength (637 nm) to the telecommunication L-band (1588 nm). In addition to leveraging the benefits of low-loss optical fibers, QFC presents a solution to inhomogeneous broadening of color center emission. By detuning the pump laser, the disparate color center emission wavelengths can be shifted to the same telecommunication wavelength, and therefore identical telecom photons can be generated from multiple color-center-based quantum nodes. We propose a compact and power-efficient QFC scheme that can be realized on the hybrid III-V diamond platform. Leveraging the large third order optical nonlinearities ($\chi^{(3)}$) of III-V semiconductors such as GaAs and GaP, we show that a microresonator-based FWM-BS scheme can efficiently translate the emitted photons from SiV$^0$ centers to the telecommunication C-band (1550 nm). In the FWM-BS scheme, two pump fields, $\omega_\text{p1}$ and $\omega_\text{p2}$, create an effective modulation of $\chi^{(3)}$ and scatter an input signal photon ($\omega_\text{s}$) to the idler frequency $\omega_\text{i}$ [Fig.\ref{fig:QFC_intro}(a)]. In principle, this enables noise-free frequency translation of the signal photon by an amount set by the difference between the two pump frequencies \cite{mckinstrie2005translation}. QFC based on FWM-BS has recently been demonstrated to convert the emission of InAs/GaAs quantum dots using Si$_\text{3}$N$_\text{4}$ micro-ring resonators \cite{Singh2018}. Owing to their large $\chi^{(3)}$, III-V semiconductors are promising material platforms in realizing efficient frequency conversion with low pump powers \cite{chang2020ultra, Wilson2020}. Here, we extend the functionalities of the hybrid III-V diamond system and outline our design of an integrated QFC module based on GaP. Because nonlinear optical effects are generally quite weak, appreciable conversion efficiency requires long interaction times between the pumps and the signal, as well as high pump powers \cite{Uesaka2003, Marhic2008}. 
By employing a resonant optical structure such as a whispering gallery mode (WGM) microring resonator, strong interactions can be achieved with small device footprints, allowing for high conversion efficiencies in a scalable platform \cite{Chang2019, Lu2019, Elshaari2020}. These resonant structures additionally allow for smaller mode areas than bulk crystals, resulting in enhanced nonlinearities at low pump powers \cite{Absil:00, Rodriguez:07}. We consider here a 25 $\mu$m radius ring resonator with coupling of 946 nm and 1550 nm TE modes using separate point waveguide couplers as shown in Fig.\ref{fig:QFC_intro}(b). To design a ring resonator geometry for QFC of SiV$^{0}$ emission using FWM-BS, we consider the case of a weak continuous-wave field with frequency $\omega_\text{s}$ as the input signal. When combined with pumps at frequencies $\omega_\text{p1, p2}$ the fields interact to scatter the signal into two new signals, the "idlers", at frequencies $\omega_\text{i$\pm$} = \omega_\text{p2} \pm |\omega_\text{p1} - \omega_\text{s}|$ where $\omega_\text{p1}$ is the pump in the 946 nm band, and $\omega_\text{p2}$ is the pump in the 1550 nm band. In a WGM resonator there are discrete resonances, so it is critical that the converted idlers lie on cavity resonances to achieve appreciable conversion efficiency. Within a given frequency band, the cavity resonances can be described by a Taylor series expansion in orders of the integer mode number $\mu$ relative to some central cavity resonance frequency $\hat{\omega}_\text{0}$ as described by Eq. \eqref{CME_Resonant_Freq_Expansion} \cite{Fujii2020}. \begin{equation} \hat{\omega}_{\mu} = \hat{\omega}_{0} + D_\text{1}\mu + \frac{1}{2}D_\text{2}\mu^{2} + \frac{1}{6}D_\text{3}\mu^{3} + ... \label{CME_Resonant_Freq_Expansion} \end{equation} \noindent where the coefficients $D_\text{n} \equiv (1/R^{n}) (\partial^{n} \omega / \partial \beta^{n})|_{\omega_0}$ describe the dispersion of the device at the central resonance. The propagation constant is given by $\beta = 2 \pi n_{\text{eff}}(\lambda)/\lambda$, and R is the ring radius. Following the treatment and notation in \cite{li2016efficient}, we denote the cavity resonances nearest the signal, pump, and idler frequencies by $\hat{\omega}_\text{s,p1,p2,i$\pm$}$. For a given pump-signal separation $|\mu|$ in the 946 nm band, the detuning of the converted idlers from their nearest cavity resonance in the 1550 nm band is given by \begin{equation} \delta \hat{\omega}_{i\pm,|\mu|} = \pm(D_{1}^{1550} - D_{1}^{946})|\mu| + \frac{1}{2} (D_\text{2}^{1550} \mp D_\text{2}^{946}) |\mu|^{2} \pm \frac{1}{6} (D_\text{3}^{1550} - D_\text{3}^{946}) |\mu|^{3} + ... \label{Idler_Detuning} \end{equation} \noindent The superscripts denote the dispersion coefficients evaluated at the central resonances in the 946 nm and 1550 nm bands respectively. To minimize the idler detuning we seek a device geometry with similar dispersion in both bands. Since $D_\text{1} = (1/R)c/n_\text{g}$, where $n_\text{g}$ is the group index, this can be achieved to first order in $|\mu|$ by designing a ring resonator cross-section for which $n_\text{g}$ at 946 nm and 1550 nm is matched. To find such a design, we perform 2D eigenmode simulations of a rectangular GaP waveguide on diamond. The waveguide is defined by its width, $w_\text{y}$, and thickness, $w_\text{z}$, as defined in the previous section. 
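As a concrete illustration, once the dispersion coefficients of a candidate cross-section are known, the residual idler detuning of Eq. \eqref{Idler_Detuning} reduces to a few lines of arithmetic. The following Python sketch uses the coefficient values quoted later in Table \ref{table:QFC_constants}; the helper name and the example pump-signal separation are ours.
\begin{verbatim}
import numpy as np

TWO_PI = 2 * np.pi
# Dispersion coefficients (rad/s per mode); values as listed in the
# constants table for the selected ring cross-section.
D1 = {'946': TWO_PI * 531e6,    '1550': TWO_PI * 531e6}
D2 = {'946': TWO_PI * -122e6,   '1550': TWO_PI * -44.2e6}
D3 = {'946': TWO_PI * -0.964e6, '1550': TWO_PI * 13.1e6}

def idler_detunings(mu):
    """Detuning of the i+ / i- idlers from their nearest cavity resonance
    for a pump-signal separation of |mu| modes (expression above)."""
    mu = abs(mu)
    odd = (D1['1550'] - D1['946']) * mu + (D3['1550'] - D3['946']) * mu**3 / 6.0
    d_plus  =  odd + 0.5 * (D2['1550'] - D2['946']) * mu**2
    d_minus = -odd + 0.5 * (D2['1550'] + D2['946']) * mu**2
    return d_plus, d_minus

# Signal one free spectral range away from the 946 nm pump:
dp, dm = idler_detunings(1)
print(dp / TWO_PI / 1e6, dm / TWO_PI / 1e6)   # detunings in MHz
\end{verbatim}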
We perform a parameter sweep over the waveguide aspect ratio (AR), defined as $AR = w_\text{y}/w_\text{z}$, and the total cross-sectional area, $A = w_\text{y} w_\text{z}$. In general, we observe that at smaller ARs and smaller areas we achieve greater group index dispersion and better matching between the fundamental TE modes at 946 nm and 1550 nm as shown in Fig.\ref{fig:QFC_intro}(c). The AR and area cannot be reduced arbitrarily, however, because the telecom mode is pushed into cut-off for very small areas and extreme ARs. To avoid cut-off, smaller ARs can be compensated with a larger device area, and smaller device areas can be similarly compensated by a larger AR. As such, there is a tradeoff between AR and area which can be optimized to achieve group index matching without cut-off. In Fig.\ref{fig:QFC_intro}(d) we present simulations for a 25 $\mu$m radius ring resonator, sweeping the resonator width for different thicknesses, from which we select a cross-section geometry with $w_\text{z}$ = 640 nm, $w_\text{y}$ = 500 nm. The group velocity dispersion of the device, shown in Fig.\ref{fig:QFC_intro}(e), exhibits normal dispersion at both 946 nm and 1550 nm. \begin{figure}[ht!] \centering\includegraphics[width=\textwidth]{Figure5.pdf} \caption{(a) Four-wave-mixing Bragg-scattering frequency conversion scheme. The two pump fields, $\omega_\text{p1}$ at 946 nm, and $\omega_\text{p2}$ at 1550nm, combine in a $\chi^{(3)}$ medium to scatter the input SiV$^0$ photon ($\omega_\text{s}$) to idlers $\omega_\text{i$\pm$}$ in the 1550 nm band. (b) The QFC module consists of a microring resonator with two point-couplers to separately couple 946 nm and 1550 nm TE modes. A high-Q resonator allows for prolonged interactions between the fields in a small mode volume, leading to enhanced nonlinear interactions. (c) Group index of the TE modes as a function of wavelength for different waveguide areas and ARs. Solid lines represent a total cross-sectional area of $A=0.35\mu m^2$, dashed lines are for $A=0.55\mu m^2$. For each cross-sectional area, ARs of 0.75 (blue), 1 (orange) and 1.5 (red) are plotted. (d) Simulated group index mismatch $\Delta n_\text{g} = n_\text{g}^{946} - n_\text{g}^{1550}$ as a function of resonator width, $w_\text{y}$ and thickness, $w_\text{z}$. We achieve $\Delta n_\text{g}=0$ for several cross-sections, and the geometries are relatively insensitive to small variations in width and thickness. (e) Dispersion of the selected ring resonator cross-section ($w_\text{z} = 640$ nm, $w_\text{y} = 500$ nm) with zero-crossings, i.e. $\text{D}=0$, at 1155 nm and 1536 nm. (f) Coupling quality factor as a function of wavelength for two separate point couplers. The red line represents a 640 nm $\times$ 500nm coupling waveguide at a gap of 210 nm. The blue line represents a 380 nm $\times$ 300 nm coupling waveguide at a gap of 10 nm.} \label{fig:QFC_intro} \end{figure} To couple light into the ring resonator, we design two separate point couplers as described in Fig.\ref{fig:QFC_intro}(f) \cite{Bogaerts2012}. A coupler waveguide with cross-section 640 nm $\times$ 500 nm and 210 nm gap is utilized to couple the mode at 1550 nm with coupling quality factor $Q_\text{C,2}(1550 \text{ nm})=7.71\times10^4$. As the 946 nm mode is much more strongly confined than the telecom mode, there is minimal mode overlap between the 946 nm mode in the coupler and ring resonator, resulting in a severely undercoupled quality factor $Q_\text{C,2}(946\text{ nm})=3.52\times10^8$. 
To couple the 946 nm signal we utilize a point coupler with cross-section 380 nm $\times$ 300 nm with a 10 nm gap from the resonator, resulting in a coupling quality factor $Q_\text{C,1}(946\text{ nm})=7.64\times10^4$. The thickness of the 946 nm coupler is chosen to match that of the photonic crystal cavity design from the previous section, and a tapering of the width is introduced to achieve the desired coupling Q. Due to the small cross-sectional area, the telecom mode is in cut-off for this coupler \cite{Bi2012}, and as such there is very weak coupling of the telecom mode from the resonator into the waveguide, leading to a coupling quality factor $Q_\text{C,1}(1550\text{ nm}) = 2.70\times10^7$. For both modes we calculate an effective coupling quality factor defined as $1/Q_\text{C} = 1/Q_\text{C,1} + 1/Q_\text{C,2}$ leading to $Q_\text{C}(946\text{ nm})=7.64\times10^4$ and $Q_\text{C}(1550\text{ nm})=7.69\times10^4$. To evaluate the FWM-BS conversion efficiency in our designed coupler-resonator system, we study analytic coupled mode equations \eqref{CME_Signal} - \eqref{CME_i_minus} for the fields inside a resonator with a $\chi^{(3)}$ as described in detail in \cite{li2016efficient}. In addition to our pumps, signal, and idlers, we consider mixing between the signal and the 946 nm band pump which scatters the signal into an auxiliary signal $\omega_\text{s'} = 2\omega_\text{p1} - \omega_\text{s}$ as shown in Fig.\ref{fig:QFC_CME_Results}(a). \begin{equation} t_\text{R} \frac{dE_\text{s}}{dt} = - (\alpha_\text{p1} + i \Delta \phi_\text{s})E_\text{s} + i \gamma_\text{p1}LE^{2}_\text{p1}E^{*}_\text{s'} + i2\gamma_\text{p1}LE_\text{p1}(E_\text{p2}E^{*}_\text{i-} + E_\text{p2}^{*}E_\text{i+}) + i \sqrt{\theta_\text{1}P_\text{s}} \label{CME_Signal} \end{equation} \begin{equation} t_\text{R} \frac{dE_\text{s'}}{dt} = - (\alpha_\text{p1} + i \Delta \phi_\text{s'})E_\text{s'} + i \gamma_\text{p1}LE^{2}_\text{p1}E^{*}_\text{s} + i2\gamma_\text{p1}LE_\text{p1}(E_\text{p2}E^{*}_\text{i+} + E_\text{p2}^{*}E_\text{i-}) \label{CME_Aux} \end{equation} \begin{equation} t_\text{R} \frac{dE_\text{i+}}{dt} = - (\alpha_\text{p2} + i \Delta \phi_\text{i+})E_\text{i+} + i \gamma_\text{p2}LE^{2}_\text{p2}E^{*}_\text{i-} + i2\gamma_\text{p2}LE_\text{p2}(E_\text{p1}E^{*}_\text{s'} + E_\text{p1}^{*}E_\text{s}) \label{CME_i_plus} \end{equation} \begin{equation} t_\text{R} \frac{dE_\text{i-}}{dt} = - (\alpha_\text{p2} + i \Delta \phi_\text{i-})E_\text{i-} + i \gamma_\text{p2}LE^{2}_\text{p2}E^{*}_\text{i+} + i2\gamma_\text{p2}LE_\text{p2}(E_\text{p1}E^{*}_\text{s} + E_\text{p1}^{*}E_\text{s'}) \label{CME_i_minus} \end{equation} \noindent Where $\text{p}1$ and $\text{p}2$ denote the pumps at 946 nm and 1550 nm respectively, $L=2 \pi R$ is the cavity round-trip length, $t_\text{R}=2 \pi / D_\text{1}$ is the round-trip time, and $\alpha_m=\hat{\omega}_\text{m} t_\text{R} / 2Q_\text{L,m}$ is the cavity loss for a resonance $m$ with loaded quality factor $Q_\text{L,m}$. The effective nonlinear parameter is given by $\gamma=n_2\omega/cA_\text{eff}$. $P_\text{s}$ is the power of the signal at the waveguide input which is coupled into the resonator with power coupling ratio $\theta_\text{1}=\hat{\omega}_\text{p1}t_\text{R}/Q_\text{C}(946 \text{ nm})$. Here we neglect nonlinear effects arising from the signal and auxiliary signal. The round trip loss, nonlinear parameters, and power coupling ratios are assumed to be constant within the 946 nm and 1550 nm bands. 
The $\Delta \phi_\text{m}$ terms describe the effective detuning of the modes from their nearest cavity resonance, including the effects of Kerr nonlinear frequency shifts of the cavity resonances caused by the strong pumps, given by: \begin{equation} \Delta \phi_\text{s,s'} = (\hat{\omega}_\text{s,s'} - \omega_\text{s,s'})t_\text{R} - 2\gamma_\text{p1}L(|E_\text{p1}|^{2} + |E_\text{p2}|^{2}) \label{CME_Detuning_signals} \end{equation} \begin{equation} \Delta \phi_\text{i$\pm$} = (\hat{\omega}_\text{i$\pm$} - \omega_\text{i$\pm$})t_\text{R} - 2\gamma_\text{p2}L(|E_\text{p1}|^{2} + |E_\text{p2}|^{2}) \label{CME_Detuning_idlers} \end{equation} To calculate $\gamma_\text{p1,p2}$ we first perform 2D eigenmode simulations of the ring cross-section to extract the TE field profiles at 946 nm and 1550 nm. We then calculate the effective mode area, A$_\text{eff}$, as defined in Eq. \eqref{CME_Aeff} where the integral in the denominator is evaluated over the resonator cross-section only: \begin{equation} A_\text{eff} = \frac{\left( \int \int \epsilon_\text{r}(x,y) |E(x,y)|^{2} dxdy \right)^{2}} {\int \int_\text{Ring} \epsilon_\text{r}^{2}(x,y) |E(x,y)|^{4} dxdy} \label{CME_Aeff} \end{equation} We determine the central resonance in the 946 nm and 1550 nm bands via 2D eigenmode simulations of the effective index of the TE modes, yielding resonances at $\hat{\omega}_\text{p1} = 2 \pi c / 946.6 \text{ nm}$ and $\hat{\omega}_\text{p2} = 2 \pi c /1547.8 \text{ nm}$ respectively. We solve Eq. \eqref{CME_Signal} - \eqref{CME_i_minus} in the steady state and compute the conversion efficiency, $\eta_\text{\text{conv}}$, in terms of idler photon flux at the output of the coupling waveguide relative to the signal photon flux at the input: \begin{equation} \eta_\text{conv} = \frac{\theta_\text{2} |E_\text{i$\pm$}|^{2}/\hbar \omega_\text{i$\pm$}}{P_\text{s}/ \hbar \omega_\text{s}} \label{CME_Conversion_Efficiency} \end{equation} \noindent Where $\theta_\text{2}=\hat{\omega}_\text{p2}t_\text{R}/Q_\text{C}(1550 \text{ nm})$. Table \ref{table:QFC_constants} summarizes the constants used in the analysis. \begin{table}[ht!] \centering \resizebox{\textwidth}{!} {\begin{tabular}{ c c c c c c c c } \hline Wavelength Band & $D_1/2\pi$ (MHz) & $D_2/2\pi$ (MHz) & $D_3/2\pi$ (MHz) & $Q_i$ ($10^4$) & $Q_\text{C}$ ($10^4$) & $n_2$ ($10^{-17} m^2 W^{-1}$) & $A_\text{eff}$ ($\mu m^2$)\\ \hline 946 nm & 531 & -122 & -0.964 & 7.5 & 7.64 & 0.85 \cite{Grinblat2019} & 0.196 \\ 1550 nm & 531 & -44.2 & 13.1 & 7.5 & 7.69 & 1.1 \cite{Wilson2020} & 0.292 \\ \hline \end{tabular}} \caption{Constants used for solving the coupled mode equations} \label{table:QFC_constants} \end{table} \begin{figure}[ht!] \centering\includegraphics[width=\textwidth]{Figure6.pdf} \caption{(a) Signal, pump, and idler frequencies are shown as arrows, relative to their nearest cavity resonances. Mixing between the signal, telecom pump, and 946 nm pump leads to conversion into idlers which are offset from the telecom pump by an amount equal to the frequency difference between the signal and 946 nm pump. The frequency shift can occur either to the red (i-) or to the blue (i+). 
In addition to the idlers, FWM-BS between the signal and 946 nm pump leads to conversion into an auxiliary signal s' with frequency $\omega_\text{s'} = \omega_\text{p1} + (\omega_\text{p1} - \omega_\text{s})$ (b) Conversion efficiency of the i+ idler as a function of signal detuning, $\delta\lambda_\text{s}$, normalized by FWHM of the signal resonance, $\Delta\lambda_\text{s}$, for different total pump power values. The ratio of total power in each pump is kept equal. The signal wavelength is swept near the cavity resonance $\mu=1$. (c) Conversion efficiency bandwidth of the i$\pm$ idlers. As the pump-signal separation $\mu$ is increased, the converted idler frequencies are correspondingly shifted further from the 1550 nm pump. The top (bottom) x-axis indicates the conversion efficiency of the i- (i+) idler at a given output wavelength. Conversion efficiency drops for larger $\mu$ as the idlers become increasingly detuned from their nearest cavity resonance. The conversion efficiency for i$\pm$ is non-symmetric because the resonance spacing is not identical at wavelengths greater than 1550 nm and wavelengths smaller than 1550 nm. (d) Conversion efficiency for the i+ idler as a function of the ring total quality factor $Q_\text{L}$ and total pump power, under the assumption of critical coupling at both 946 nm and 1550 nm for a pump-signal separation $\mu=1$. The vertical dashed line indicates a loaded quality factor $Q_\text{L}=3.75 \times 10^{4}$.} \label{fig:QFC_CME_Results} \end{figure} In Fig.\ref{fig:QFC_CME_Results}(b) we analyze the conversion efficiency for the i+ idler as a function of normalized signal detuning from the $\mu=1$ cavity resonance for different total pump powers. We assume a ring with intrinsic quality factor $Q_\text{i}=7.5 \times 10^{4}$, yielding loaded quality factors $Q_\text{L}^{946}=3.78 \times 10^{4}$ and $Q_\text{L}^{1550}=3.80 \times 10^{4}$ using the coupler design from Fig.\ref{fig:QFC_intro}(f). As GaP microring resonators with loaded quality factors in excess of $10^{5}$ have been experimentally demonstrated in the telecom band \cite{Wilson2018, Wilson2020} we believe this to be a conservative estimate. We achieve a near-unity conversion efficiency of $-0.21$ dB for i+ at a total pump power of $50$ mW and signal detuning of $0.95\Delta\lambda_s$, where $\Delta\lambda_{s}$ is the full width at half maximum (FWHM) of the signal resonance. At higher pump powers the signal cavity resonance experiences a nonlinear Kerr shift which requires the signal frequency to be correspondingly shifted to achieve maximal conversion efficiency. Fig.\ref{fig:QFC_CME_Results}(c) shows the conversion efficiency into both idlers as a function of signal-pump separation $\mu$. At larger values of $\mu$ conversion efficiency is reduced due to dispersion in the resonator: the resonances are not equally spaced with the same magnitude for both wavelength bands, and this difference increases with larger detuning from the central pump resonance. The efficiency for i$\pm$ is not symmetric because the dispersion differs at wavelengths below the central resonance as compared to wavelengths above the central resonance. As such, the detuning for i- is not the same as the detuning for i+. In Fig.\ref{fig:QFC_CME_Results}(d) we analyze conversion efficiency into i+ as a function of the ring loaded quality factor $Q_\text{L}$ for different total pump powers under the assumption of critical coupling $Q_\text{C}=Q_\text{i}$. 
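The steady-state behaviour behind these curves can be reproduced with a compact numerical sketch. The following Python illustration is a simplified version of the coupled-mode model (the intracavity pump amplitudes are treated as fixed, undepleted inputs, the effective detunings $\Delta\phi$ are supplied by hand, and all function and variable names are ours), not the full treatment of \cite{li2016efficient}.
\begin{verbatim}
import numpy as np

def fwm_bs_conversion(Ep1, Ep2, Ps, alpha1, alpha2, dphi,
                      gamma1L, gamma2L, theta1, theta2, w_s, w_ip):
    """Steady state of the coupled-mode equations above (undepleted pumps).

    Ep1, Ep2   : fixed intracavity pump amplitudes (946 nm / 1550 nm bands)
    Ps         : signal power at the waveguide input
    alpha1/2   : round-trip loss in the 946 nm / 1550 nm bands
    dphi       : effective detunings {'s','sp','ip','im'}, Kerr shifts included
    gamma1L/2L : nonlinear parameter times round-trip length, per band
    theta1/2   : power coupling ratios of the two point couplers
    w_s, w_ip  : angular frequencies of the signal and the i+ idler
    """
    # With the pumps held fixed, the equations are linear in the vector
    # [E_s, conj(E_s'), E_i+, conj(E_i-)], so the steady state is one solve.
    M = np.array([
      [-(alpha1 + 1j*dphi['s']),             1j*gamma1L*Ep1**2,
        2j*gamma1L*Ep1*np.conj(Ep2),          2j*gamma1L*Ep1*Ep2],
      [-1j*gamma1L*np.conj(Ep1)**2,          -(alpha1 - 1j*dphi['sp']),
       -2j*gamma1L*np.conj(Ep1)*np.conj(Ep2), -2j*gamma1L*np.conj(Ep1)*Ep2],
      [ 2j*gamma2L*Ep2*np.conj(Ep1),          2j*gamma2L*Ep2*Ep1,
        -(alpha2 + 1j*dphi['ip']),            1j*gamma2L*Ep2**2],
      [-2j*gamma2L*np.conj(Ep2)*np.conj(Ep1), -2j*gamma2L*np.conj(Ep2)*Ep1,
       -1j*gamma2L*np.conj(Ep2)**2,          -(alpha2 - 1j*dphi['im'])]],
      dtype=complex)
    drive = np.array([1j*np.sqrt(theta1*Ps), 0, 0, 0], dtype=complex)
    E_s, E_sp_c, E_ip, E_im_c = np.linalg.solve(M, -drive)
    # Conversion efficiency into i+: output idler photon flux over input
    # signal photon flux (hbar cancels in the ratio).
    return (theta2*abs(E_ip)**2/w_ip) / (Ps/w_s)
\end{verbatim}
Sweeping the pump amplitudes, detunings, or loss rates in this sketch can be used to explore the qualitative trends of Fig.\ref{fig:QFC_CME_Results}(b)-(d).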
For a total pump power of 30 mW, we predict unity conversion efficiency for $Q_\text{L} = 4.9 \times 10^4$ at the optimal signal detuning, and for 10 mW we require $Q_\text{L} = 8.6 \times 10^4$. Moving to lower pump powers is favorable for reducing the effects of nonlinear and thermal frequency shifts of the cavity resonances which can make it difficult for the pumps and signal to be simultaneously on-resonance with the cavity modes \cite{li2016efficient}. In addition to the assumptions of critical coupling, we neglect pump depletion and parasitic nonlinear processes such as higher order idler generations in the 1550 nm band. These processes serve to deplete the primary idlers and thereby reduce the conversion efficiency \cite{li2016efficient}. \section{Discussion} \label{section:discussion} In this paper, we have demonstrated the potential of hybrid III-V diamond platforms for quantum networks based on SiV$^0$ centers in diamond. We show that a heterogeneously integrated nanobeam cavity can enhance the photon emission from a single SiV$^0$ center, and the emitted photon can be subsequently converted to the telecommunication C-band on chip via a FWM-BS scheme, providing an efficient interface between the SiV$^0$ center and photons at telecommunication wavelengths. In a fully integrated platform, the system will require joint optimization between Purcell enhancement and frequency conversion. Specifically, there will be a trade-off between the photon collection efficiency into the cavity (i.e. $P/(P+1)$) and the maximum conversion efficiency achievable with the frequency converter because the optical linewidth will radiatively broaden beyond the bandwidth of the microresonator-based frequency converter \cite{Singh2018}. For the system presented here, the linewidth of SiV$^0$ emission (88 MHz) is narrow compared to the loaded linewidth of the ring resonator in Section \ref{section:QFC} ($\sim 8.4$ GHz), which allows for a Purcell factor of around 100 before the broadened optical linewidth reaches the bandwidth of the ring resonator. An example of such a device would utilize a large number of cavity holes (greater than 20) to suppress far-field scattering and a small number of mirror segments (less than 5) to allow for decay into the waveguide mode, with a total Purcell factor of less than 100 and a collection efficiency over 90$\%$. The proposed platform can be experimentally realized using widely available existing fabrication techniques. Here, we comment on practical implementation of the hybrid platform. Heterogeneous integration of quantum photonic elements with different functionalities has been demonstrated in a wide variety of material platforms \cite{Elshaari2020}. Among different fabrication approaches to assemble the elements, epitaxial lift-off, in which the III-V membrane layer is released from the heteroepitaxially-grown substrate through selective wet etching and subsequently transferred onto diamond \cite{gould2016efficient}, and transfer printing, where suspended photonic devices can be picked up and transferred using a rubber stamp \cite{katsumi2018transfer, dibos2018atomic}, are promising routes towards large-scale integration. The two techniques differ in the method for aligning photonic elements to the color centers. With the epitaxial lift-off approach, the nanobeam cavity and the micro-resonator for QFC can be defined by lithography with sub-50 nm alignment accuracy. 
In principle, this would enable large-scale photonic circuit elements with more complex functionality \cite{gould2016large}. However, the applicability of this technique may be limited by the inhomogeneous distribution of SiV$^0$ centers and the spectral variation of nanobeam cavities resulting from fabrication imperfections. On the other hand, transfer printing allows the integration of pre-selected color centers in diamond with pre-characterized photonic devices. Previous work has demonstrated alignment accuracy similar to that of the epitaxial lift-off approach \cite{katsumi2018transfer}. Despite the challenges in achieving large-scale photonic device integration, this method promises a robust and reproducible way to couple individual SiV$^0$ centers to high-performance photonic devices. While our hybrid platform design targets SiV$^0$ centers in diamond with an emission wavelength of 946 nm, our general approach to designing the nanobeam cavity bandgap and the dispersion of the microresonator-based FWM-BS module can be applied to a wide variety of color centers in diamond and other host materials. For example, the GaP-on-diamond platform could be applied to the negatively charged silicon vacancy center in diamond owing to the wide transparency window of GaP. Furthermore, while the proposed FWM-BS QFC scheme is not feasible for SiV$^0$ in the GaAs-on-diamond platform because the band-edge absorption feature dominates the index dispersion at 946 nm, GaAs could be used for color centers in silicon carbide that emit in the near-infrared wavelength range \cite{diler2020coherent, wolfowicz2020vanadium}. \section*{Acknowledgements} The authors would like to thank Kartik Srinivasan, Alejandro Rodriguez, Zi-Huai Zhang, and Paul Stevenson for helpful discussions about the photonic crystal cavity design and FWM-BS scheme. This work was primarily supported by DARPA under a Young Faculty Award (award number D18AP00047), and work on the neutral silicon vacancy center was supported by the NSF under the EFRI ACQUIRE program (grant 1640959) and the Air Force Office of Scientific Research under award number FA9550-17-0158. M. R. and J. D. T. were additionally supported by the Air Force Office of Scientific Research (grant FA9550-18-1-0081). D. H. was supported by a National Science Scholarship from A*STAR, Singapore. A. A. was supported by a Post Graduate Scholarship from the Natural Sciences and Engineering Research Council of Canada. \section*{Disclosures} The authors declare no conflicts of interest.
{ "redpajama_set_name": "RedPajamaArXiv" }
6,811
\section{Introduction} Language modeling is a fundamental task in artificial intelligence and natural language processing (NLP), with applications in speech recognition, text generation, and machine translation. A language model is formalized as a probability distribution over a sequence of strings (words), and traditional methods usually involve making an $n$-th order Markov assumption and estimating $n$-gram probabilities via counting and subsequent smoothing \cite{Chen1998}. The count-based models are simple to train, but probabilities of rare $n$-grams can be poorly estimated due to data sparsity (despite smoothing techniques). Neural Language Models (NLM) address the $n$-gram data sparsity issue through parameterization of words as vectors (word embeddings) and using them as inputs to a neural network \cite{Bengio2003,Mikolov2010}. The parameters are learned as part of the training process. Word embeddings obtained through NLMs exhibit the property whereby semantically close words are likewise close in the induced vector space (as is the case with non-neural techniques such as Latent Semantic Analysis \cite{Deerwester1990}). While NLMs have been shown to outperform count-based $n$-gram language models \cite{Mikolov2011}, they are blind to subword information (e.g. morphemes). For example, they do not know, a priori, that \emph{eventful}, \emph{eventfully}, \emph{uneventful}, and \emph{uneventfully} should have structurally related embeddings in the vector space. Embeddings of rare words can thus be poorly estimated, leading to high perplexities for rare words (and words surrounding them). This is especially problematic in morphologically rich languages with long-tailed frequency distributions or domains with dynamic vocabularies (e.g. social media). In this work, we propose a language model that leverages subword information through a character-level convolutional neural network (CNN), whose output is used as an input to a recurrent neural network language model (RNN-LM). Unlike previous works that utilize subword information via morphemes \cite{Botha2014,Luong2013}, our model does not require morphological tagging as a pre-processing step. And, unlike the recent line of work which combines input word embeddings with features from a character-level model \cite{Santos2014a,Santos2015}, our model does not utilize word embeddings at all in the input layer. Given that most of the parameters in NLMs are from the word embeddings, the proposed model has significantly fewer parameters than previous NLMs, making it attractive for applications where model size may be an issue (e.g. cell phones). To summarize, our contributions are as follows: \begin{itemize} \item on English, we achieve results on par with the existing state-of-the-art on the Penn Treebank (PTB), despite having approximately $60\%$ fewer parameters, and \item on morphologically rich languages (Arabic, Czech, French, German, Spanish, and Russian), our model outperforms various baselines (Kneser-Ney, word-level/morpheme-level LSTM), again with fewer parameters. \end{itemize} We have released all the code for the models described in this paper.\footnote{\url{https://github.com/yoonkim/lstm-char-cnn}} \section{Model} The architecture of our model, shown in Figure~\ref{fig:network}, is straightforward. Whereas a conventional NLM takes word embeddings as inputs, our model instead takes the output from a single-layer character-level convolutional neural network with max-over-time pooling. 
For notation, we denote vectors with bold lower-case (e.g. $\mathbf{x}_t,\mathbf{b}$), matrices with bold upper-case (e.g. $\mathbf{W}, \mathbf{U}^o$), scalars with italic lower-case (e.g. $x,b$), and sets with cursive upper-case (e.g. $\mathcal{V}, \mathcal{C}$) letters. For notational convenience we assume that words and characters have already been converted into indices. \subsection{Recurrent Neural Network} A recurrent neural network (RNN) is a type of neural network architecture particularly suited for modeling sequential phenomena. At each time step $t$, an RNN takes the input vector $\mathbf{x}_t \in \mathbb{R}^n$ and the hidden state vector $\mathbf{h}_{t-1} \in \mathbb{R}^m$ and produces the next hidden state $\mathbf{h}_t$ by applying the following recursive operation: \begin{equation} \mathbf{h}_t = f(\mathbf{W} \mathbf{x}_t + \mathbf{U} \mathbf{h}_{t-1} + \mathbf{b}) \end{equation} Here $\mathbf{W} \in \mathbb{R}^{m \times n}, \mathbf{U} \in \mathbb{R}^{m \times m}, \mathbf{b} \in \mathbb{R}^{m}$ are parameters of an affine transformation and $f$ is an element-wise nonlinearity. In theory the RNN can summarize all historical information up to time $t$ with the hidden state $\mathbf{h}_t$. In practice however, learning long-range dependencies with a vanilla RNN is difficult due to vanishing/exploding gradients \cite{Bengio1994}, which occurs as a result of the Jacobian's multiplicativity with respect to time. Long short-term memory (LSTM) \cite{Hochreiter1997} addresses the problem of learning long range dependencies by augmenting the RNN with a memory cell vector $\mathbf{c}_t \in \mathbb{R}^n$ at each time step. Concretely, one step of an LSTM takes as input $\mathbf{x}_t, \mathbf{h}_{t-1}, \mathbf{c}_{t-1}$ and produces $\mathbf{h}_t$, $\mathbf{c}_t$ via the following intermediate calculations: \begin{equation} \begin{split} \mathbf{i}_t &= \sigma (\mathbf{W}^i \mathbf{x}_t + \mathbf{U}^i \mathbf{h}_{t-1} + \mathbf{b}^i) \\ \mathbf{f}_t &= \sigma (\mathbf{W}^f \mathbf{x}_t + \mathbf{U}^f \mathbf{h}_{t-1} + \mathbf{b}^f) \\ \mathbf{o}_t &= \sigma (\mathbf{W}^o \mathbf{x}_t + \mathbf{U}^o \mathbf{h}_{t-1} + \mathbf{b}^o) \\ \mathbf{g}_t &= \mbox{tanh} (\mathbf{W}^g \mathbf{x}_t + \mathbf{U}^g \mathbf{h}_{t-1} + \mathbf{b}^g) \\ \mathbf{c}_t &= \mathbf{f}_t \odot \mathbf{c}_{t-1} + \mathbf{i}_t \odot \mathbf{g}_t \\ \mathbf{h}_t &= \mathbf{o}_t \odot \mbox{tanh} (\mathbf{c}_t) \end{split} \end{equation} Here $\sigma(\cdot)$ and $\mbox{tanh}(\cdot)$ are the element-wise sigmoid and hyperbolic tangent functions, $\odot$ is the element-wise multiplication operator, and $\mathbf{i}_t$, $\mathbf{f}_t$, $\mathbf{o}_t$ are referred to as {\em input}, {\em forget}, and {\em output} gates. At $t=1$, $\mathbf{h}_0$ and $\mathbf{c}_0$ are initialized to zero vectors. Parameters of the LSTM are $\mathbf{W}^j, \mathbf{U}^j, \mathbf{b}^j$ for $j \in \{i, f, o, g\}$. Memory cells in the LSTM are additive with respect to time, alleviating the gradient vanishing problem. Gradient exploding is still an issue, though in practice simple optimization strategies (such as gradient clipping) work well. LSTMs have been shown to outperform vanilla RNNs on many tasks, including on language modeling \cite{Sundermeyer2012}. It is easy to extend the RNN/LSTM to two (or more) layers by having another network whose input at $t$ is $\mathbf{h}_t$ (from the first network). Indeed, having multiple layers is often crucial for obtaining competitive performance on various tasks \cite{Pascanu2013}. 
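For concreteness, a single LSTM step amounts to a handful of matrix-vector operations. The following NumPy sketch mirrors the update above; it is an illustration only (parameter containers and names are ours), not the implementation released with this paper.
\begin{verbatim}
import numpy as np

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

def lstm_step(x_t, h_prev, c_prev, W, U, b):
    """One LSTM step: gates, memory cell update, and new hidden state.

    W[j], U[j], b[j] for j in {'i', 'f', 'o', 'g'} are the input-to-hidden,
    hidden-to-hidden, and bias parameters of the corresponding gate.
    """
    i = sigmoid(W['i'] @ x_t + U['i'] @ h_prev + b['i'])   # input gate
    f = sigmoid(W['f'] @ x_t + U['f'] @ h_prev + b['f'])   # forget gate
    o = sigmoid(W['o'] @ x_t + U['o'] @ h_prev + b['o'])   # output gate
    g = np.tanh(W['g'] @ x_t + U['g'] @ h_prev + b['g'])
    c_t = f * c_prev + i * g          # additive memory cell update
    h_t = o * np.tanh(c_t)
    return h_t, c_t
\end{verbatim}
Stacking layers simply feeds the $\mathbf{h}_t$ of one layer in as the $\mathbf{x}_t$ of the next.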
\begin{figure}[!t] \center \includegraphics[scale=0.50]{network.png} \caption{Architecture of our language model applied to an example sentence. Best viewed in color. Here the model takes {\em absurdity} as the current input and combines it with the history (as represented by the hidden state) to predict the next word, {\em is}. First layer performs a lookup of character embeddings (of dimension four) and stacks them to form the matrix $\mathbf{C}^k$. Then convolution operations are applied between $\mathbf{C}^k$ and multiple filter matrices. Note that in the above example we have twelve filters---three filters of width two (blue), four filters of width three (yellow), and five filters of width four (red). A max-over-time pooling operation is applied to obtain a fixed-dimensional representation of the word, which is given to the highway network. The highway network's output is used as the input to a multi-layer LSTM. Finally, an affine transformation followed by a softmax is applied over the hidden representation of the LSTM to obtain the distribution over the next word. Cross entropy loss between the (predicted) distribution over next word and the actual next word is minimized. Element-wise addition, multiplication, and sigmoid operators are depicted in circles, and affine transformations (plus nonlinearities where appropriate) are represented by solid arrows.} \label{fig:network} \end{figure} \subsection{Recurrent Neural Network Language Model} Let $\mathcal{V}$ be the fixed size vocabulary of words. A language model specifies a distribution over $w_{t+1}$ (whose support is $\mathcal{V}$) given the historical sequence $w_{1:t} = [w_1, \dots , w_t]$. A recurrent neural network language model (RNN-LM) does this by applying an affine transformation to the hidden layer followed by a softmax: \begin{equation} \mbox{Pr}(w_{t+1} = j|w_{1:t}) = \frac{\mbox{exp}(\mathbf{h}_t \cdot \mathbf{p}^j + q^j)}{\sum_{j' \in \mathcal{V}} \mbox{exp}(\mathbf{h}_t \cdot \mathbf{p}^{j'} + q^{j'})} \end{equation} where $\mathbf{p}^j$ is the $j$-th column of $\mathbf{P} \in \mathbb{R}^{m \times |\mathcal{V}|}$ (also referred to as the {\em output embedding}),\footnote{In our work, predictions are at the word-level, and hence we still utilize word embeddings in the output layer.} and $q^j$ is a bias term. Similarly, for a conventional RNN-LM which usually takes words as inputs, if $w_t = k$, then the input to the RNN-LM at $t$ is the {\em input embedding} $\mathbf{x}^k$, the $k$-th column of the embedding matrix $\mathbf{X} \in \mathbb{R}^{n \times |\mathcal{V}|}$. Our model simply replaces the input embeddings $\mathbf{X}$ with the output from a character-level convolutional neural network, to be described below. If we denote $w_{1:T} = [w_1, \cdots, w_T]$ to be the sequence of words in the training corpus, training involves minimizing the negative log-likelihood ($NLL$) of the sequence \begin{equation} NLL = -\sum_{t=1}^T \mbox{log } \mbox{Pr}(w_{t} | w_{1:t-1}) \end{equation} which is typically done by truncated backpropagation through time \cite{Werbos1990,Graves2013}. \subsection{Character-level Convolutional Neural Network} In our model, the input at time $t$ is an output from a character-level convolutional neural network (CharCNN), which we describe in this section. CNNs \cite{LeCun1989} have achieved state-of-the-art results on computer vision \cite{Krizhevsky2012} and have also been shown to be effective for various NLP tasks \cite{Collobert2011}. 
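Before detailing the character-level model, we note that the word-level prediction layer of the previous subsection is a standard softmax with a cross-entropy objective; a minimal sketch (the hidden states, output embedding matrix $\mathbf{P}$, and biases are assumed given, and the function names are ours):
\begin{verbatim}
import numpy as np

def next_word_distribution(h_t, P, q):
    """Distribution over the vocabulary given the hidden state h_t.
    P holds one output-embedding column per word; q is the bias vector."""
    scores = h_t @ P + q
    scores = scores - scores.max()        # for numerical stability
    probs = np.exp(scores)
    return probs / probs.sum()

def corpus_nll(hidden_states, next_words, P, q):
    """Negative log-likelihood summed over a word sequence."""
    return -sum(np.log(next_word_distribution(h, P, q)[w])
                for h, w in zip(hidden_states, next_words))
\end{verbatim}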
Architectures employed for NLP applications differ in that they typically involve temporal rather than spatial convolutions. Let $\mathcal{C}$ be the vocabulary of characters, $d$ be the dimensionality of character embeddings,\footnote{Given that $|\mathcal{C}|$ is usually small, some authors work with one-hot representations of characters. However we found that using lower dimensional representations of characters (i.e. $d < |\mathcal{C}|$) performed slightly better.} and $\mathbf{Q} \in \mathbb{R}^{d \times |\mathcal{C}|}$ be the matrix character embeddings. Suppose that word $k \in \mathcal{V}$ is made up of a sequence of characters $[c_1, \dots, c_{l}]$, where $l$ is the length of word $k$. Then the character-level representation of $k$ is given by the matrix $\mathbf{C}^k \in \mathbb{R}^{d \times l}$, where the $j$-th column corresponds to the character embedding for $c_j$ (i.e. the $c_j$-th column of $\mathbf{Q}$).\footnote{Two technical details warrant mention here: (1) we append start-of-word and end-of-word characters to each word to better represent prefixes and suffixes and hence $\mathbf{C}^k$ actually has $l + 2$ columns; (2) for batch processing, we zero-pad $\mathbf{C}^k$ so that the number of columns is constant (equal to the max word length) for all words in $\mathcal{V}$.} We apply a narrow convolution between $\mathbf{C}^k$ and a {\em filter} (or {\em kernel}) $\mathbf{H} \in \mathbb{R}^{d \times w}$ of width $w$, after which we add a bias and apply a nonlinearity to obtain a {\em feature map} $\mathbf{f}^k \in \mathbb{R}^{l - w + 1}$. Specifically, the $i$-th element of $\mathbf{f}^k$ is given by: \begin{equation} \mathbf{f}^k[i] = \mbox{tanh}(\langle \mathbf{C}^k[\ast, i:i+w-1] , \mathbf{H} \rangle + b) \end{equation} where $\mathbf{C}^k[\ast, i:i+w-1]$ is the $i$-to-$(i+w-1)$-th column of $\mathbf{C}^k$ and $\langle \mathbf{A}, \mathbf{B} \rangle = \Tr(\mathbf{A}\mathbf{B}^T)$ is the Frobenius inner product. Finally, we take the {\em max-over-time} \begin{equation} y^k = \max_{i} \mathbf{f}^k[i] \end{equation} as the feature corresponding to the filter $\mathbf{H}$ (when applied to word $k$). The idea is to capture the most important feature---the one with the highest value---for a given filter. A filter is essentially picking out a character $n$-gram, where the size of the $n$-gram corresponds to the filter width. We have described the process by which {\em one} feature is obtained from {\em one} filter matrix. Our CharCNN uses multiple filters of varying widths to obtain the feature vector for $k$. So if we have a total of $h$ filters $\mathbf{H}_1, \dots, \mathbf{H}_h$, then $\mathbf{y}^k = [y^k_1, \dots, y^k_h]$ is the input representation of $k$. For many NLP applications $h$ is typically chosen to be in $[100,1000]$. \subsection{Highway Network} We could simply replace $\mathbf{x}^k$ (the word embedding) with $\mathbf{y}^k$ at each $t$ in the RNN-LM, and as we show later, this simple model performs well on its own (Table~\ref{tab:highway}). One could also have a multilayer perceptron (MLP) over $\mathbf{y}^k$ to model interactions between the character $n$-grams picked up by the filters, but we found that this resulted in worse performance. Instead we obtained improvements by running $\mathbf{y}^k$ through a {\em highway network}, recently proposed by Srivastava et al. \shortcite{Srivastava2015}. 
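For concreteness, the character-level feature extraction of the previous subsection, together with the highway transformation described next, involves only a few operations per word. The following NumPy sketch is an illustration under assumed parameter names (padding and the start-/end-of-word markers are omitted), not the released implementation.
\begin{verbatim}
import numpy as np

def char_cnn_features(char_ids, Q, filters, biases):
    """CharCNN feature vector y^k for one word.

    char_ids : indices of the word's characters c_1 ... c_l
    Q        : d x |C| matrix of character embeddings
    filters  : list of d x w filter matrices H (widths may vary)
    biases   : one scalar bias per filter
    """
    C_k = Q[:, char_ids]                         # d x l matrix C^k
    feats = []
    for H, b in zip(filters, biases):
        w, l = H.shape[1], C_k.shape[1]
        # narrow convolution + tanh, then max-over-time pooling
        f = [np.tanh(np.sum(C_k[:, i:i + w] * H) + b)
             for i in range(l - w + 1)]
        feats.append(max(f))
    return np.array(feats)                       # one feature per filter

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

def highway_layer(y, W_H, b_H, W_T, b_T):
    """One highway layer: t * g(W_H y + b_H) + (1 - t) * y, with g = ReLU."""
    t = sigmoid(W_T @ y + b_T)                   # transform gate
    return t * np.maximum(W_H @ y + b_H, 0.0) + (1.0 - t) * y
\end{verbatim}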
Whereas one layer of an MLP applies an affine transformation followed by a nonlinearity to obtain a new set of features, \begin{equation} \mathbf{z} = g(\mathbf{W}\mathbf{y} + \mathbf{b}) \end{equation} one layer of a highway network does the following: \begin{equation} \mathbf{z} = \mathbf{t} \odot g(\mathbf{W}_H\mathbf{y} + \mathbf{b}_H) + (\mathbf{1} - \mathbf{t}) \odot \mathbf{y} \end{equation} where $g$ is a nonlinearity, $\mathbf{t} = \sigma(\mathbf{W}_T\mathbf{y} + \mathbf{b}_T)$ is called the {\em transform} gate, and $(\mathbf{1} - \mathbf{t})$ is called the {\em carry} gate. Similar to the memory cells in LSTM networks, highway layers allow for training of deep networks by adaptively {\em carrying} some dimensions of the input directly to the output.\footnote{Srivastava et al. \shortcite{Srivastava2015} recommend initializing $\mathbf{b}_T$ to a negative value, in order to militate the initial behavior towards {\em carry}. We initialized $\mathbf{b}_T$ to a small interval around $-2$.} By construction the dimensions of $\mathbf{y}$ and $\mathbf{z}$ have to match, and hence $\mathbf{W}_T$ and $\mathbf{W}_H$ are square matrices. \section{Experimental Setup} As is standard in language modeling, we use perplexity ($PPL$) to evaluate the performance of our models. Perplexity of a model over a sequence $ [w_1, \dots, w_T]$ is given by \begin{equation} PPL = \mbox{exp} \Big( \frac{NLL}{T} \Big) \end{equation} where $NLL$ is calculated over the test set. We test the model on corpora of varying languages and sizes (statistics available in Table~\ref{tab:corpus}). We conduct hyperparameter search, model introspection, and ablation studies on the English Penn Treebank (PTB) \cite{Marcus1993}, utilizing the standard training (0-20), validation (21-22), and test (23-24) splits along with pre-processing by \citeauthor{Mikolov2010} \shortcite{Mikolov2010}. With approximately $1$m tokens and $|\mathcal{V}|=10$k, this version has been extensively used by the language modeling community and is publicly available.\footnote{\url{http://www.fit.vutbr.cz/~imikolov/rnnlm/}} \begin{table}[!t] \center \tabcolsep 6.6pt \begin{tabular}{@{}lrrcrrc@{}} \toprule & \multicolumn{3}{c}{\textsc{Data-s}} & \multicolumn{3}{c}{\textsc{Data-l}} \\ \addlinespace & $|\mathcal{V}|$ & $|\mathcal{C}|$ & $T$ & $|\mathcal{V}|$ & $|\mathcal{C}|$ & $T$ \\ \midrule English (\textsc{En}) & $10$ k & $51$ & $1$ m & $60$ k & $197$ & $20$ m \\ Czech (\textsc{Cs}) & $46$ k & $101$ & $1$ m & $206$ k & $195$ & $17$ m \\ German (\textsc{De}) & $37$ k & $74$ & $1$ m & $339$ k & $260$ & $51$ m \\ Spanish (\textsc{Es}) & $27$ k & $72$ & $1$ m & $152$ k & $222$ & $56$ m \\ French (\textsc{Fr}) & $25$ k & $76$ & $1$ m & $137$ k & $225$ & $57$ m \\ Russian (\textsc{Ru}) & $62$ k & $62$ & $1$ m & $497$ k & $111$ & $25$ m \\ Arabic (\textsc{Ar}) & $86$ k & $132$ & $4$ m & -- \hspace{2.5mm} & -- \hspace{1mm} & -- \\ \bottomrule \end{tabular} \caption{Corpus statistics. $|\mathcal{V}| =$ word vocabulary size; $|\mathcal{C}| =$ character vocabulary size; $T = $ number of tokens in training set. The small English data is from the Penn Treebank and the Arabic data is from the News-Commentary corpus. The rest are from the 2013 ACL Workshop on Machine Translation. $|\mathcal{C}|$ is large because of (rarely occurring) special characters.} \label{tab:corpus} \end{table} With the optimal hyperparameters tuned on PTB, we apply the model to various morphologically rich languages: Czech, German, French, Spanish, Russian, and Arabic. 
Non-Arabic data comes from the 2013 ACL Workshop on Machine Translation,\footnote{\url{http://www.statmt.org/wmt13/translation-task.html}} and we use the same train/validation/test splits as in \citeauthor{Botha2014} \shortcite{Botha2014}. While the raw data are publicly available, we obtained the preprocessed versions from the authors,\footnote{\url{http://bothameister.github.io/}} whose morphological NLM serves as a baseline for our work. We train on both the small datasets (\textsc{Data-s}) with $1$m tokens per language, and the large datasets (\textsc{Data-l}) including the large English data which has a much bigger $|\mathcal{V}|$ than the PTB. Arabic data comes from the News-Commentary corpus,\footnote{\url{http://opus.lingfil.uu.se/News-Commentary.php}} and we perform our own preprocessing and train/validation/test splits. In these datasets only singleton words were replaced with \texttt{\small <}\textsf{\small unk}\texttt{\small >} and hence we effectively use the full vocabulary. It is worth noting that the character model can utilize surface forms of OOV tokens (which were replaced with \texttt{\small <}\textsf{\small unk}\texttt{\small >}), but we do not do this and stick to the preprocessed versions (despite disadvantaging the character models) for exact comparison against prior work. \subsection{Optimization} The models are trained by truncated backpropagation through time \cite{Werbos1990,Graves2013}. We backpropagate for $35$ time steps using stochastic gradient descent where the learning rate is initially set to $1.0$ and halved if the perplexity does not decrease by more than $1.0$ on the validation set after an epoch. On \textsc{Data-s} we use a batch size of $20$ and on \textsc{Data-l} we use a batch size of $100$ (for greater efficiency). Gradients are averaged over each batch. We train for $25$ epochs on non-Arabic and $30$ epochs on Arabic data (which was sufficient for convergence), picking the best performing model on the validation set. Parameters of the model are randomly initialized over a uniform distribution with support $[-0.05, 0.05]$. For regularization we use dropout \cite{Hinton2012} with probability $0.5$ on the LSTM input-to-hidden layers (except on the initial Highway to LSTM layer) and the hidden-to-output softmax layer. We further constrain the norm of the gradients to be below $5$, so that if the $L_2$ norm of the gradient exceeds $5$ then we renormalize it to have $||\cdot|| = 5$ before updating. The gradient norm constraint was crucial in training the model. These choices were largely guided by previous work of Zaremba et al. \shortcite{Zaremba2014} on word-level language modeling with LSTMs. \begin{table}[!t] \center \begin{tabular}{llll} \toprule \multicolumn{2}{c}{} & \multicolumn{1}{c}{Small} & \multicolumn{1}{c}{Large} \\ \midrule \multirow{4}{*}{CNN} & $d$ & $15$ & $15$ \\ & $w$ & $[1,2,3,4,5,6]$ & $[1,2,3,4,5,6,7]$ \\ & $h$ & $[25\cdot w]$ & $[\mbox{min} \{200, 50\cdot w\}]$ \\ & $f$ & tanh & tanh \\ \midrule \multirow{2}{*}{Highway} & $l$ & 1 & 2 \\ & $g$ & ReLU & ReLU \\ \midrule \multirow{2}{*}{LSTM} & $l$ & 2 & 2 \\ & $m$ & 300 & 650 \\ \bottomrule \end{tabular} \caption{Architecture of the small and large models. 
$d = $ dimensionality of character embeddings; $w =$ filter widths; $h = $ number of filter matrices, as a function of filter width (so the large model has filters of width $[1,2,3,4,5,6,7]$ of size $[50,100,150,200,200,200,200]$ for a total of $1100$ filters); $f,g = $ nonlinearity functions; $l =$ number of layers; $m = $ number of hidden units.} \label{tab:hyper} \end{table} Finally, in order to speed up training on \textsc{Data-l} we employ a hierarchical softmax \cite{Morin2005}---a common strategy for training language models with very large $|\mathcal{V}|$---instead of the usual softmax. We pick the number of clusters $c = \lceil \sqrt{|\mathcal{V}|} \rceil$ and randomly split $\mathcal{V}$ into mutually exclusive and collectively exhaustive subsets $\mathcal{V}_1, \dots, \mathcal{V}_c$ of (approximately) equal size.\footnote{While Brown clustering/frequency-based clustering is commonly used in the literature (e.g. \citeauthor{Botha2014} \shortcite{Botha2014} use Brown clusering), we used random clusters as our implementation enjoys the best speed-up when the number of words in each cluster is approximately equal. We found random clustering to work surprisingly well.} Then $\mbox{Pr}(w_{t+1} = j|w_{1:t})$ becomes, \begin{equation} \begin{split} \mbox{Pr}(w_{t+1} = j|w_{1:t}) &= \frac{\mbox{exp}(\mathbf{h}_t \cdot \mathbf{s}^r + t^r)}{\sum_{r'=1}^c \mbox{exp}(\mathbf{h}_t \cdot \mathbf{s}^{r'} + t^{r'})} \\ &\times \frac{\mbox{exp}(\mathbf{h}_t \cdot \mathbf{p}^j_r + q^j_r)}{\sum_{j' \in \mathcal{V}_r} \mbox{exp}(\mathbf{h}_t \cdot \mathbf{p}^{j'}_r + q^{j'}_r)} \end{split} \end{equation} where $r$ is the cluster index such that $j \in \mathcal{V}_r$. The first term is simply the probability of picking cluster $r$, and the second term is the probability of picking word $j$ given that cluster $r$ is picked. We found that hierarchical softmax was not necessary for models trained on \textsc{Data-s}. \section{Results} \subsection{English Penn Treebank} \begin{table}[!t] \center \begin{tabular}{lrr} \toprule & $PPL$ & Size \\ \midrule LSTM-Word-Small & $97.6$ & $5$ m\\ LSTM-Char-Small & $92.3$ & $5$ m\\ LSTM-Word-Large & $85.4$ & $20$ m\\ LSTM-Char-Large & $78.9$ & $19$ m\\ \midrule KN-$5$ (Mikolov et al. 2012) & $141.2$ & $2$ m\\ RNN$^\dagger$ (Mikolov et al. 2012)& $124.7$ & $6$ m \\ RNN-LDA$^\dagger$ (Mikolov et al. 2012) & $113.7$ & $7$ m\\ genCNN$^\dagger$ \cite{Wang2015} & $116.4$ & $8$ m \\ FOFE-FNNLM$^\dagger$ \cite{Shang2015} & $108.0$ & $6$ m\\ Deep RNN \cite{Pascanu2013} & $107.5$ & $6$ m\\ Sum-Prod Net$^\dagger$ \cite{Cheng2014} & $100.0$ & $5$ m\\ LSTM-1$^\dagger$ (Zaremba et al. 2014) & $82.7$ & $20$ m\\ LSTM-2$^\dagger$ (Zaremba et al. 2014) & $78.4$ & $52$ m \\ \bottomrule \end{tabular} \caption{Performance of our model versus other neural language models on the English Penn Treebank test set. $PPL$ refers to perplexity (lower is better) and size refers to the approximate number of parameters in the model. KN-$5$ is a Kneser-Ney $5$-gram language model which serves as a non-neural baseline. $^\dagger$For these models the authors did not explicitly state the number of parameters, and hence sizes shown here are estimates based on our understanding of their papers or private correspondence with the respective authors.} \label{tab:ptb} \end{table} We train two versions of our model to assess the trade-off between performance and size. Architecture of the small (LSTM-Char-Small) and large (LSTM-Char-Large) models is summarized in Table~\ref{tab:hyper}. 
As another baseline, we also train two comparable LSTM models that use word embeddings only (LSTM-Word-Small, LSTM-Word-Large). LSTM-Word-Small uses $200$ hidden units and LSTM-Word-Large uses $650$ hidden units. Word embedding sizes are also $200$ and $650$ respectively. These were chosen to keep the number of parameters similar to the corresponding character-level model. As can be seen from Table~\ref{tab:ptb}, our large model is on par with the existing state-of-the-art (Zaremba et al. 2014), despite having approximately $60\%$ fewer parameters. Our small model significantly outperforms other NLMs of similar size, even though it is penalized by the fact that the dataset already has OOV words replaced with \texttt{\small <}\textsf{\small unk}\texttt{\small >} (other models are purely word-level models). While lower perplexities have been reported with model ensembles \cite{Mikolov2012a}, we do not include them here as they are not comparable to the current work. \subsection{Other Languages} The model's performance on the English PTB is informative to the extent that it facilitates comparison against the large body of existing work. However, English is relatively simple from a morphological standpoint, and thus our next set of results (and arguably the main contribution of this paper) is focused on languages with richer morphology (Table~\ref{tab:others}, Table~\ref{tab:others2}). \begin{table}[!t] \center \tabcolsep 6pt \begin{tabular}{@{}llcccccc@{}} \toprule & & \multicolumn{6}{c}{\textsc{Data-s}} \\ \addlinespace && \textsc{Cs} & \textsc{De} & \textsc{Es} & \textsc{Fr} & \textsc{Ru} & \textsc{Ar} \\ \midrule \multirow{2}{*}{Botha} & KN-$4$ & $545$ & $366$ & $241$ & $274$ & $396$ & $323$ \\ & MLBL & $465$ & $296$ & $200$ & $225$ & $304$ & -- \\ \midrule \multirow{3}{*}{Small}& Word & $503$ & $305$ & $212$ & $229$ & $352$ & $216$ \\ & Morph & $414$ & $278$ & $197$ & $216$ & $290$ & $230$ \\ & Char & $401$ & $260$ & $182$ & $189$ & $278$ & $196$ \\ \addlinespace \multirow{3}{*}{Large}& Word & $493$ & $286$ & $200$ & $222$ & $357$ & $172$ \\ & Morph & $398$ & $263$ & $177$ & $196$ & $271$ & $\mathbf{148}$ \\ & Char &$\mathbf{371}$ & $\mathbf{239}$ &$\mathbf{165}$& $\mathbf{184}$ & $\mathbf{261}$ & $\mathbf{148}$ \\ \bottomrule \end{tabular} \caption{Test set perplexities for \textsc{Data-s}. First two rows are from \citeauthor{Botha2014b} \shortcite{Botha2014b} (except on Arabic where we trained our own KN-$4$ model) while the last six are from this paper. KN-$4$ is a Kneser-Ney $4$-gram language model, and MLBL is the best performing morphological logbilinear model from Botha \shortcite{Botha2014b}. Small/Large refer to model size (see Table~\ref{tab:hyper}), and Word/Morph/Char are models with words/morphemes/characters as inputs respectively. } \label{tab:others} \end{table} We compare our results against the morphological log-bilinear (MLBL) model from \citeauthor{Botha2014} \shortcite{Botha2014}, whose model also takes into account subword information through morpheme embeddings that are summed at the input and output layers. As comparison against the MLBL models is confounded by our use of LSTMs---widely known to outperform their feed-forward/log-bilinear cousins---we also train an LSTM version of the morphological NLM, where the input representation of a word given to the LSTM is a summation of the word's morpheme embeddings. 
Concretely, suppose that $\mathcal{M}$ is the set of morphemes in a language, $\mathbf{M} \in\mathbb{R}^{n \times |\mathcal{M}|}$ is the matrix of morpheme embeddings, and $\mathbf{m}^j$ is the $j$-th column of $\mathbf{M}$ (i.e. a morpheme embedding). Given the input word $k$, we feed the following representation to the LSTM: \begin{equation} \mathbf{x}^k + \sum_{j \in \mathcal{M}_k} \mathbf{m}^j \end{equation} where $\mathbf{x}^k$ is the word embedding (as in a word-level model) and $\mathcal{M}_k \subset \mathcal{M}$ is the set of morphemes for word $k$. The morphemes are obtained by running an unsupervised morphological tagger as a preprocessing step.\footnote{We use {\em Morfessor Cat-MAP} \cite{Creutz2007}, as in \citeauthor{Botha2014} \shortcite{Botha2014}.} We emphasize that the word embedding itself (i.e. $\mathbf{x}^k$) is added on top of the morpheme embeddings, as was done in Botha and Blunsom \shortcite{Botha2014}. The morpheme embeddings are of size $200$/$650$ for the small/large models respectively. We further train word-level LSTM models as another baseline. On \textsc{Data-s} it is clear from Table~\ref{tab:others} that the character-level models outperform their word-level counterparts despite, again, being smaller.\footnote{The difference in parameters is greater for non-PTB corpora as the size of the word model scales faster with $|\mathcal{V}|$. For example, on Arabic the small/large word models have $35$m/$121$m parameters while the corresponding character models have $29$m/$69$m parameters respectively.} The character models also outperform their morphological counterparts (both MLBL and LSTM architectures), although improvements over the morphological LSTMs are more measured. Note that the morpheme models have strictly more parameters than the word models because word embeddings are used as part of the input. Due to memory constraints\footnote{All models were trained on GPUs with 2GB memory.} we only train the small models on \textsc{Data-l} (Table~\ref{tab:others2}). Interestingly we do not observe significant differences going from word to morpheme LSTMs on Spanish, French, and English. The character models again outperform the word/morpheme models. We also observe significant perplexity reductions even on English when $\mathcal{V}$ is large. We conclude this section by noting that we used the same architecture for all languages and did not perform any language-specific tuning of hyperparameters. \begin{table}[!t] \center \tabcolsep 6pt \begin{tabular}{@{}llcccccc@{}} \toprule & & \multicolumn{6}{c}{\textsc{Data-l}} \\ \addlinespace & & \textsc{Cs} & \textsc{De} & \textsc{Es} & \textsc{Fr} & \textsc{Ru} & \textsc{En} \\ \midrule \multirow{2}{*}{Botha} & KN-$4$ & $862$ & $463$ & $219$ & $243$ & $390$ & $291$ \\ & MLBL & $643$ & $404$ & $203$ & $227$ & $\mathbf{300}$ & $273$ \\ \midrule \multirow{3}{*}{Small} & Word & $701$ & $347$ & $186$ & $202$ & $353$ & $236$ \\ & Morph & $615$ & $331$ & $189$ & $209$ & $331$ & $233$ \\ & Char & $\mathbf{578}$ & $\mathbf{305}$ & $\mathbf{169}$ & $\mathbf{190}$ & $313$ & $\mathbf{216}$ \\ \bottomrule \end{tabular} \caption{Test set perplexities on \textsc{Data-l}. First two rows are from \citeauthor{Botha2014b} \shortcite{Botha2014b}, while the last three rows are from the small LSTM models described in the paper. KN-$4$ is a Kneser-Ney $4$-gram language model, and MLBL is the best performing morphological logbilinear model from \citeauthor{Botha2014b} \shortcite{Botha2014b}. 
Word/Morph/Char are models with words/morphemes/characters as inputs respectively.} \label{tab:others2} \end{table} \begin{table*}[!t] \center \small \em \begin{tabular}{ccccccccc} \toprule & \multicolumn{5}{c}{{\em {\normalsize In Vocabulary}}} & \multicolumn{3}{c}{{\em {\normalsize Out-of-Vocabulary}}} \\ \addlinespace & while & his & you & richard & trading & computer-aided & misinformed & looooook \\ \cmidrule(lr){2-6} \cmidrule(lr){7-9} \multirow{4}{*}{{\em {\normalsize LSTM-Word}}} & although & your & conservatives & jonathan & advertised & -- & -- & --\\ & letting & her & we & robert & advertising & -- & -- & -- \\ & though & my & guys & neil & turnover & -- & -- & --\\ & minute & their & i & nancy & turnover & -- & -- & --\\ \addlinespace \addlinespace & chile & this & your & hard & heading & computer-guided & informed & look \\ {\em {\normalsize LSTM-Char}} & whole & hhs & young & rich& training & computerized & performed & cook\\ {\em {\normalsize (before highway)}} & meanwhile & is & four & richer & reading & disk-drive & transformed & looks\\ & white & has & youth & richter & leading & computer & inform & shook \\ \addlinespace \addlinespace & meanwhile & hhs & we & eduard & trade & computer-guided & informed & look\\ {\em {\normalsize LSTM-Char}} & whole & this & your & gerard & training & computer-driven & performed & looks \\ {\em {\normalsize (after highway)}} & though & their & doug & edward & traded & computerized& outperformed & looked \\ & nevertheless & your & i & carl & trader & computer & transformed & looking \\ \bottomrule \end{tabular} \caption{Nearest neighbor words (based on cosine similarity) of word representations from the large word-level and character-level (before and after highway layers) models trained on the PTB. Last three words are OOV words, and therefore they do not have representations in the word-level model.} \label{tab:nn} \end{table*} \section{Discussion} \subsection{Learned Word Representations} We explore the word representations learned by the models on the PTB. Table~\ref{tab:nn} has the nearest neighbors of word representations learned from both the word-level and character-level models. For the character models we compare the representations obtained before and after highway layers. Before the highway layers the representations seem to solely rely on surface forms---for example the nearest neighbors of {\em you} are {\em your, young, four, youth}, which are close to {\em you} in terms of edit distance. The highway layers however, seem to enable encoding of semantic features that are not discernable from orthography alone. After highway layers the nearest neighbor of {\em you} is {\em we}, which is orthographically distinct from {\em you}. Another example is {\em while} and {\em though}---these words are far apart edit distance-wise yet the composition model is able to place them near each other. The model also makes some clear mistakes (e.g. {\em his} and {\em hhs}), highlighting the limits of our approach, although this could be due to the small dataset. The learned representations of OOV words ({\em computer-aided}, {\em misinformed}) are positioned near words with the same part-of-speech. The model is also able to correct for incorrect/non-standard spelling ({\em looooook}), indicating potential applications for text normalization in noisy domains. \subsection{Learned Character $N$-gram Representations} As discussed previously, each filter of the CharCNN is essentially learning to detect particular character $n$-grams. 
Our initial expectation was that each filter would learn to activate on different morphemes and then build up semantic representations of words from the identified morphemes. However, upon reviewing the character $n$-grams picked up by the filters (i.e. those that maximized the value of the filter), we found that they did not (in general) correspond to valid morphemes. To get a better intuition for what the character composition model is learning, we plot the learned representations of all character $n$-grams (that occurred as part of at least two words in $\mathcal{V}$) via principal components analysis (Figure~\ref{fig:pca}). We feed each character $n$-gram into the CharCNN and use the CharCNN's output as the fixed dimensional representation for the corresponding character $n$-gram. As is apparent from Figure~\ref{fig:pca}, the model learns to differentiate between prefixes (red), suffixes (blue), and others (grey). We also find that the representations are particularly sensitive to character $n$-grams containing hyphens (orange), presumably because this is a strong signal of a word's part-of-speech. \begin{figure}[!t] \center \includegraphics[scale=0.45]{pca.png} \caption{Plot of character $n$-gram representations via PCA for English. Colors correspond to: prefixes (red), suffixes (blue), hyphenated (orange), and all others (grey). Prefixes refer to character $n$-grams which start with the start-of-word character. Suffixes likewise refer to character $n$-grams which end with the end-of-word character.} \label{fig:pca} \end{figure} \subsection{Highway Layers} We quantitatively investigate the effect of highway network layers via ablation studies (Table~\ref{tab:highway}). We train a model without any highway layers, and find that performance decreases significantly. As the difference in performance could be due to the decrease in model size, we also train a model that feeds $\mathbf{y}^k$ (i.e. word representation from the CharCNN) through a one-layer multilayer perceptron (MLP) to use as input into the LSTM. We find that the MLP does poorly, although this could be due to optimization issues. We hypothesize that highway networks are especially well-suited to work with CNNs, adaptively combining local features detected by the individual filters. CNNs have already proven to be successful for many NLP tasks \cite{Collobert2011,Shen2014,Kalchbrenner2014,Kim2014,Zhang2015,Lei2015}, and we posit that further gains could be achieved by employing highway layers on top of existing CNN architectures. We also anecdotally note that (1) having one to two highway layers was important, but more highway layers generally resulted in similar performance (though this may depend on the size of the datasets), (2) having more convolutional layers before max-pooling did not help, and (3) highway layers did not improve models that only used word embeddings as inputs. \begin{table}[!t] \center \begin{tabular}{lrc} \toprule & \multicolumn{2}{c}{LSTM-Char} \\ \addlinespace & Small & Large \\ \midrule No Highway Layers & $100.3$ & $84.6$ \\ One Highway Layer & $92.3$ & $79.7$\\ Two Highway Layers & $90.1$ & $78.9$\\ One MLP Layer & $111.2$ &$92.6$ \\ \bottomrule \end{tabular} \caption{Perplexity on the Penn Treebank for small/large models trained with/without highway layers.} \label{tab:highway} \end{table} \subsection{Effect of Corpus/Vocab Sizes} We next study the effect of training corpus/vocabulary sizes on the relative performance between the different models.
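The vocabulary truncation used in this comparison simply maps all but the $k$ most frequent word types to \texttt{<unk>}; a minimal illustrative sketch of this preprocessing (ours; the toy corpus and cutoff are arbitrary) is:
\begin{verbatim}
from collections import Counter

def truncate_vocab(tokens, k):
    """Keep the k most frequent word types; map the rest to <unk>."""
    keep = {w for w, _ in Counter(tokens).most_common(k)}
    return [w if w in keep else "<unk>" for w in tokens]

corpus = "the cat sat on the mat the dog sat".split()
print(truncate_vocab(corpus, k=3))   # only the top-3 types survive
\end{verbatim}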
We take the German (\textsc{De}) dataset from \textsc{Data-l} and vary the training corpus/vocabulary sizes, calculating the perplexity reductions as a result of going from a small word-level model to a small character-level model. To vary the vocabulary size we take the most frequent $k$ words and replace the rest with \texttt{\small <}\textsf{\small unk}\texttt{\small >}. As with previous experiments the character model does not utilize surface forms of \texttt{\small <}\textsf{\small unk}\texttt{\small >} and simply treats it as another token. Although Table~\ref{tab:relperf} suggests that the perplexity reductions become less pronounced as the corpus size increases, we nonetheless find that the character-level model outperforms the word-level model in all scenarios. \begin{table}[!t] \center \begin{tabular}{crrrrc} \toprule && \multicolumn{4}{c}{$|\mathcal{V}|$} \\ \addlinespace && $10$ k & $25$ k &$50$ k & $100$ k \\ \midrule \multirow{4}{*}{$T$} &$1$ m & $17\%$ & $16\%$ &$21\%$ & --\\ &$5$ m & $8\%$ & $14\%$ & $16\%$ & $21\%$\\ &$10$ m & $9\%$ & $9\%$ & $12\%$& $15\%$\\ &$25$ m & $9\%$ & $8\%$ & $9\%$& $10\%$\\ \bottomrule \end{tabular} \caption{Perplexity reductions by going from small word-level to character-level models based on different corpus/vocabulary sizes on German (\textsc{De}). $|\mathcal{V}|$ is the vocabulary size and $T$ is the number of tokens in the training set. The full vocabulary of the $1$m dataset was less than $100$k and hence that scenario is unavailable.} \label{tab:relperf} \end{table} \subsection{Further Observations} We report on some further experiments and observations: \begin{itemize} \item Combining word embeddings with the CharCNN's output to form a combined representation of a word (to be used as input to the LSTM) resulted in slightly worse performance ($81$ on PTB with a large model). This was surprising, as improvements have been reported on part-of-speech tagging \cite{Santos2014a} and named entity recognition \cite{Santos2015} by concatenating word embeddings with the output from a character-level CNN. While this could be due to insufficient experimentation on our part,\footnote{We experimented with (1) concatenation, (2) tensor products, (3) averaging, and (4) adaptive weighting schemes whereby the model learns a convex combination of word embeddings and the CharCNN outputs.} it suggests that for some tasks, word embeddings are superfluous---character inputs are good enough. \item While our model requires additional convolution operations over characters and is thus slower than a comparable word-level model which can perform a simple lookup at the input layer, we found that the difference was manageable with optimized GPU implementations---for example on PTB the large character-level model trained at $1500$ tokens/sec compared to the word-level model which trained at $3000$ tokens/sec. For scoring, our model can have the same running time as a pure word-level model, as the CharCNN's outputs can be pre-computed for all words in $\mathcal{V}$. This would, however, be at the expense of increased model size, and thus a trade-off can be made between run-time speed and memory (e.g. one could restrict the pre-computation to the most frequent words). \end{itemize} \section{Related Work} Neural Language Models (NLM) encompass a rich family of neural network architectures for language modeling. 
Some example architectures include feed-forward \cite{Bengio2003}, recurrent \cite{Mikolov2010}, sum-product \cite{Cheng2014}, log-bilinear \cite{Mnih2007}, and convolutional \cite{Wang2015} networks. In order to address the rare word problem, \citeauthor{Alexandrescu2006} \shortcite{Alexandrescu2006}---building on analogous work on count-based $n$-gram language models by Bilmes and Kirchhoff \shortcite{Bilmes2003}---represent a word as a set of shared factor embeddings. Their Factored Neural Language Model (FNLM) can incorporate morphemes, word shape information (e.g. capitalization) or any other annotation (e.g. part-of-speech tags) to represent words. A specific class of FNLMs leverages morphemic information by viewing a word as a function of its (learned) morpheme embeddings \cite{Luong2013,Botha2014,Qui2014}. For example \citeauthor{Luong2013} \shortcite{Luong2013} apply a recursive neural network over morpheme embeddings to obtain the embedding for a single word. While such models have proved useful, they require morphological tagging as a preprocessing step. Another direction of work has involved purely character-level NLMs, wherein both input and output are characters \cite{Sutskever2011,Graves2013}. Character-level models obviate the need for morphological tagging or manual feature engineering, and have the attractive property of being able to generate novel words. However they are generally outperformed by word-level models \cite{Mikolov2012b}. Outside of language modeling, improvements have been reported on part-of-speech tagging \cite{Santos2014a} and named entity recognition \cite{Santos2015} by representing a word as a concatenation of its word embedding and an output from a character-level CNN, and using the combined representation as features in a Conditional Random Field (CRF). \citeauthor{Zhang2015} \shortcite{Zhang2015} do away with word embeddings completely and show that for text classification, a deep CNN over characters performs well. \citeauthor{Ballesteros2015} \shortcite{Ballesteros2015} use an RNN over characters only to train a transition-based parser, obtaining improvements on many morphologically rich languages. Finally, \citeauthor{Ling2015} \shortcite{Ling2015} apply a bi-directional LSTM over characters to use as inputs for language modeling and part-of-speech tagging. They show improvements on various languages (English, Portuguese, Catalan, German, Turkish). It remains open as to which character composition model (i.e. CNN or LSTM) performs better. \section{Conclusion} We have introduced a neural language model that utilizes only character-level inputs. Predictions are still made at the word-level. Despite having fewer parameters, our model outperforms baseline models that utilize word/morpheme embeddings in the input layer. Our work questions the necessity of word embeddings (as inputs) for neural language modeling. Analysis of word representations obtained from the character composition part of the model further indicates that the model is able to encode, from characters only, rich semantic and orthographic features. Using the CharCNN and highway layers for representation learning (e.g. as input into \textsf{\small word2vec} \cite{Mikolov2013a}) remains an avenue for future work. 
Insofar as sequential processing of words as inputs is ubiquitous in natural language processing, it would be interesting to see if the architecture introduced in this paper is viable for other tasks---for example, as an encoder/decoder in neural machine translation \cite{Cho2014,Sutskever2014}. \section*{Acknowledgments} We are especially grateful to Jan Botha for providing the preprocessed datasets and the model results. \bibliographystyle{aaai} \small
{ "redpajama_set_name": "RedPajamaArXiv" }
7,600
Siebenhandl is the surname of the following people: Jörg Siebenhandl (born 1990), Austrian football goalkeeper; Udo Siebenhandl (born 1987), Austrian football goalkeeper.
{ "redpajama_set_name": "RedPajamaWikipedia" }
8,222
Trio of Columbian Wyandottes, a rare variety. Heritage breed, dual purpose, great free-range birds with a very good temperament. These are a fourth-generation outcross to Blue Wyandottes, so there is a possibility of a very small number of Blue, Black or Blue Columbians cropping up in some hatches. Those are beautiful OC!!! It might be a good thing you live so far away!
{ "redpajama_set_name": "RedPajamaC4" }
3,015
NIST is one of the top safety training providers across Asia, offering safety courses such as NEBOSH, IOSH, CIEH and food safety. Bangalore is amongst the top ten preferred entrepreneurial locations in the world, making it a great city in which to hold a NEBOSH qualification and access worldwide opportunities. Register before 20th January 2016 to get a special offer on NEBOSH IGC – Bangalore. C.M.H Road, Indiranagar, Bangalore – 560038.
{ "redpajama_set_name": "RedPajamaC4" }
4,091
Dschahrom () is a city in the province of Fars in Iran. It lies 190 km southeast of Schiras. The city is first mentioned, under the form Zarham, in the Middle Persian verse romance Kārnāmak-i Artaxšer-i Pāpākān, which deals with the reign of the ruler Ardaschir I, and it is also named several times in the Schāhnāme of Firdausi. The name probably means "Green Land". The mainly agricultural city, with its tropical and subtropical vegetation, today produces above all dates, citrus fruits and wheat. Carpets are also made here. South of Dschahrom lies the Sang-Schekan cave. Colleges and universities: the city has three universities: the Medical University of Dschahrom, the Islamic University of Dschahrom and Payām-e Nūr Dschahrom. Notable people: Bārbad, Sasanian musician and poet at the court of Chosrou Parwiz; Kāmel Dschahromi, poet; Abdol Dschabar Farami, poet; Abu Tāleb-e Tofān, poet; Dschalāl Tofān, poet; Afschin Ghotbi, football coach; Ali Mohammad Bescharati (born 1945), politician. See also: List of large cities in Iran. References. Categories: Place in Fars; College or university town.
{ "redpajama_set_name": "RedPajamaWikipedia" }
5,384
Computer viruses stop your computer from functioning normally and require a cure to get rid of them. There are lots of different types of computer viruses, and most of them are designed to harm your computer or steal private information. Computer Logic offers a complete virus removal service to get your system back up and running. Computer Logic will scan your computer for viruses, trojans, spyware and any other malicious programs and remove them so that your computer can operate correctly and at full capacity again. Computer Logic understands the value of your data and the stress you might feel when things go wrong. We can help you recover your lost data and transfer the files to a device of your choice. Want to brush up on your computer skills? Computer Logic is offering basic computer lessons for all ages at affordable prices. Like our Facebook page for more updates.
{ "redpajama_set_name": "RedPajamaC4" }
2,210
<?php namespace VehicleTest\TestingAdvice\Action; use Dvsa\Mot\ApiClient\Resource\Item\DvsaVehicle; use Dvsa\Mot\ApiClient\Resource\Item\VehicleTestingData\TestingAdvice; use Dvsa\Mot\ApiClient\Resource\Item\VehicleTestingData\TestingAdviceCategory; use Dvsa\Mot\ApiClient\Service\VehicleService; use Dvsa\Mot\ApiClient\Service\MotTestService; use DvsaCommonTest\Builder\DvsaVehicleBuilder; use DvsaCommonTest\TestUtils\XMock; use Vehicle\TestingAdvice\Action\DisplayAdviceAction; use Vehicle\TestingAdvice\ViewModel\DisplayAdviseViewModel; use Core\Action\ViewActionResult; use DvsaCommon\Configuration\MotConfig; class DisplayAdviceActionTest extends \Dvsa\Mot\Frontend\Test\TestCase { const BACK_LINK_URL = 'www.back-link.com'; const BACK_LINK_LABEL = 'Back to home'; const FEEDBACK_LINK = 'www.survey.com'; /* @var DisplayAdviceAction */ private $displayAction; private $testingAdvice; public function setUp() { $dvsaVehicleBuilder = new DvsaVehicleBuilder(); $vehicleStd = $dvsaVehicleBuilder->getEmptyVehicleStdClass(); $vehicleStd->id = 15; $vehicleStd->vehicleClass = new \stdClass(); $category = new TestingAdviceCategory(); $category->setName('The Automotive Engine'); $category->setContents(['Most common engines have 4, 6, or 8 pistons', 'The crankshaft is connected to the pistons']); $advice = new TestingAdvice(); $advice->setCategories([$category]); $this->testingAdvice = $advice; $vehicleService = XMock::of(VehicleService::class); $vehicleService->method('getDvsaVehicleById')->willReturn(new DvsaVehicle($vehicleStd)); $vehicleService->method('getTestingAdvice')->willReturn($advice); $motConfig = new MotConfig(['testing_advice_survey_link' => self::FEEDBACK_LINK]); $this->displayAction = new DisplayAdviceAction( $vehicleService, XMock::of(MotTestService::class), $motConfig ); } public function test_execute_returnsActionResult() { $breadcrumbs = ['bread' => '', 'crumbs' => '']; $actionResult = $this->displayAction->execute(1, self::BACK_LINK_URL, self::BACK_LINK_LABEL, $breadcrumbs); $this->assertInstanceOf(ViewActionResult::class, $actionResult); /** @var DisplayAdviseViewModel $viewModel */ $viewModel = $actionResult->getViewModel(); $this->assertEquals(self::BACK_LINK_LABEL, $viewModel->getBackLinkLabel()); $this->assertEquals(self::BACK_LINK_URL, $viewModel->getBackLinkUrl()); $this->assertEquals(self::FEEDBACK_LINK, $viewModel->getFeedbackLink()); $this->assertEquals($this->testingAdvice, $viewModel->getTestingAdvice()); $this->assertEquals($breadcrumbs, $actionResult->layout()->getBreadcrumbs()); } }
{ "redpajama_set_name": "RedPajamaGithub" }
9,447
var xhrRequest = function(url, type, callback) { var xhr = new XMLHttpRequest(); xhr.onload = function() { callback(this.responseText); }; xhr.open(type, url); xhr.send(); }; function getSensorData(pos) { // Sensor End point // @TODO add to config? var url = "http://www.4now.net/sasp/clavel/last/?uid=1,6,10,8,0,5,2,3"; // ALl sensors //var url = "http://www.4now.net//sasp/clavel/last/?uid=1,6,10,8,2,3,4,7,0,5,11,9"; // Send request to 4now net xhrRequest(url, 'GET', function(responseText) { // responseText contains JSON object var data = JSON.parse(responseText); console.log("Received sensor data from 4now"); var location = ""; var value = ""; var stype = ""; var timestamp = Math.floor(Date.now() / 1000); var nb_sensors = data.length; for (var i = 0; i < nb_sensors; i++) { location += data[i].location_label.split(" ", 1)[0] + "|"; value += Math.round(data[i].value) + "|"; stype += data[i].type.charAt(0) + "|"; } console.log("Sending " + nb_sensors + " sensors data to the watch"); // Assemble dictionary using our keys var dictionary = { "KEY_TIMESTAMP": timestamp, "KEY_SENSOR_VALUE": value, "KEY_SENSOR_TYPE": stype, "KEY_SENSOR_LOCATION": location, }; // Send to Pebble Pebble.sendAppMessage(dictionary, function(e) { console.log("Sensor info sent to Pebble successfully!"); }, function(e) { console.log("Error sending sensor info to Pebble!"); } ); } ); } // Listen for when the watchface is opened Pebble.addEventListener('ready', function(e) { console.log("PebbleKit JS ready!"); // Get the initial data getSensorData(); } ); // Listen for when an AppMessage is received Pebble.addEventListener('appmessage', function(e) { console.log("AppMessage received!"); getSensorData(); } );
{ "redpajama_set_name": "RedPajamaGithub" }
1,527
{"url":"https:\/\/rpg.meta.stackexchange.com\/questions\/7681\/can-our-tag-prompt-nudge-toward-including-system","text":"Can our tag-prompt nudge toward including system?\n\nWe get questions every day that need to be put on hold while we wait for a new querent to specify what game\/edition they're playing. As of this writing I've seen three so far today--they get quickly closed, get a comment asking about system\/edition, and reopened if OP specifies.\n\nCurrently when asking a question one finds, below the text of the question, the following field:\n\nIt's clear from this message that one must provide at least one tag, and the system rejects a submission without any tag. It's also clear that the suggestions have come from our list of tags. (There's a question on meta.se about where the example tags come from; it's not clear to me the answer's very authoritative.)\n\nCan we make one of the suggested (greyed-out) tag suggestions be a system tag? I've got to assume it would help nudge people in the right direction if submnitting a question included the subtle hint that \"tell me what game you're playing\" might be helpful.\n\nPart 2: the ugly truth. It's usually D&D\/PF that's the problem. Should the \"suggested\" tag be one of the D&Ds? Is it 5e that's the worst offender, and should that be one of the provided suggestions? I ask because\n\n\u2022 I'm not good enough with SEDE to figure out which system tag--not that \"system tag\" is actually a thing in our software--tends to generate the most close-comment-edit-reopen cycles, and\n\u2022 I don't know user psychology enough to know if prompting toward the \"worst offender\" is most helpful--perhaps a near-neighbor is better?\n\u2022 Did we ever make any progress on this idea? Is it possible, from a technical standpoint, to customize our ask box to nudge users toward including the system their question is about? \u2013\u00a0Quadratic Wizard Jul 27 '18 at 16:35\n\u2022 I'm marking this as status-deferred. We'd like to do this but we can't reasonably do this right now. More information here: rpg.meta.stackexchange.com\/a\/9591 \u2013\u00a0doppelgreener Nov 13 '19 at 15:22\n\nSomething like this tag placeholder might work well:\n\ntag which game you're playing, if any (such as: dnd-5e, world-of-darkness), max 5 tags\n\nThis requests the single most important thing to know: the game they're playing. The format has changed from the original since we're not just suggesting a bundle of random tags, we're suggesting a list from which they might pick just one.\n\nIn this format, two tags should be picked from two different groups:\n\n\u2022 the first tag is one of [dnd-5e], [pathfinder], [dnd-3.5e]\n\u2022 the second tag is one of [world-of-darkness], [savage-worlds], [fate]\n\nThese represent the three most popular game tags inside and outside the D&D family.\n\nThis setup conveys how we specifically tag D&D games (two thirds of the time, at least) and it conveys that we service a range of RPGs, both within and outside the D&D family of games.\n\n\u2022 I would replace 'if any' with 'if relevant', I think. \u2013\u00a0Please stop being evil Jan 15 '18 at 23:05\n\u2022 @thedarkwanderer I'd strongly prefer \"if any\": almost always (like >90% of the time) if a game's being played it's substantially relevant, and an over-correction here is preferable to an under-correction. When a game is specified but not ultimately relevant, it's harmless and fine. When a game isn't specified, it impairs answer quality or is a total show-stopper. 
In the scope of this question and considering the potential impact, I'd be absolutely fine with the over-correction. \u2013\u00a0doppelgreener Jan 15 '18 at 23:20\n\u2022 Also, until 2015-ish we had lots of people asking questions & deciding their game wasn't relevant & not mentioning it at all or burying it. (Revision 1 of this Q from 2014 is a great example: spoilers, the game was utterly important.) Nowadays burying that info is widely seen as counterproductive and I'm concerned \"if relevant\" would prompt that behaviour again. Picture: Q: \"Hey how do I calculate damage for my sword? Game's not relevant I guess.\" \/\/ A: \"It's on page X of the PHB.\" \/\/ Comment: \"Oh, which Savage Worlds book is that?\" \u2013\u00a0doppelgreener Jan 15 '18 at 23:21\n\u2022 >.< ok, yeah, let's avoid people doing that. \u2013\u00a0Please stop being evil Jan 16 '18 at 6:40\n\u2022 I'd even drop the \"if any,\" and generically demand a system. I suspect it would be easier to handle the exceptions where it's misleading than the ones where it's omitted. \u2013\u00a0fectin Jan 17 '18 at 1:15\n\u2022 @fectin that's too far for me. We already have people who think every question needs a system tag (see the downvoted answer on this question). I don't want to encourage that sort of thinking. \u2013\u00a0Please stop being evil Jan 17 '18 at 3:50\n\u2022 @thedarkwanderer I wouldn't necessarily recommend requiring a system mechanically, or enforcing that requirement through moderation (user or diamond). But that is the first question on just about every post which isn't tagged with a system. And as we are collapsing down to a single-sentence instruction on tagging, my best guess is that including \"if any\" will produce initial tags that need work more often than omitting it. \u2013\u00a0fectin Jan 17 '18 at 4:02\n\u2022 @fectin It's true that that's among the first comments on any question not tagged with a system. That's a problem, and, among questions that aren't a user's first question on the site, it's not usually a problem with the question. The culture that leads to that question being asked even when it makes absolutely no sense with respect to the question, sometimes as a tounge-in-cheek reprimand for asking something other than how to handle a specific in-play situation (along the lines of 'what problem are you trying to solve?' on a history question) , is not something I want to encourage. \u2013\u00a0Please stop being evil Jan 17 '18 at 8:06\n\u2022 Given the existing no-guess policy, might want to use an all caps \"REQUIRED\" in the placeholder text about the system tag. \u2013\u00a0GcL Sep 24 '18 at 14:39\n\nAnother option is to provide guidance in the sidebar help box. Currently, it looks like this:\n\nWe prefer questions that can be answered, not just discussed.\n\n visit the help center \u00bb\n\n\nCould a line be added to this box to the effect of\n\nIf your question is about a specific system or edition, be sure to tag it as such (dnd-5e, dungeons-and-dragons, pathfinder).\n\nThis could even be in the \"How to Tag\" sidebar box that appears when editing the Tags box. Or in both, with \"How to Ask\" indicating that you should choose tags - \"Be sure to properly tag your question\" and \"How to Tag\" having the plea for system and edition info.\n\n\u2022 If this isn't possible to change, we could make a community ad that's a PSA reminding people to say or tag which game they're asking about. 
Not everyone will be shown it since they rotate, and some who are shown it won't notice it, but it'll catch a percentage. \u2013\u00a0SevenSidedDie Jul 13 '18 at 17:59\n\nNB: I no longer think that this is a good idea after more experience on this site\n\nI think this has its heart in the right place. But I'm also not in favour of forcing every question to have a system tag of some kind (even if it were technically viable): that just makes it a tag tax, and unnecessarily puts redundant tags on questions that don't need them. The idea is that not every question is about a system, so not every question needs a tag about system. \u2013 SevenSidedDie \u21b5 Jan 10 '18 at 14:44\n\nAlternate solution: require a system or system agnostic tag\n\nIt seems like a decent solution would be to require a system tag or the system agnostic tag to any new post before allowing it to be submitted.\n\nIt seems that as long as we were able to have a group of tags of which at least one was required to be present to make a new post then it would largely solve the issue (albeit in a very blunt fashion).\n\nHowever, I have no idea if such a thing is technically possible or viable in SE. Now that it is pointed out however, I do realize that this meta has a system just like I am talking about here.\n\nA downside I just realized would be that, because tags would have to be manually assigned as system (and thus required), first questions about a particular system would be difficult to handle.\n\n\u2022 We have something resembling this in the system: meta requires one of the four primary meta tags (support, discussion, feature-request, or bug). However a lot of our questions are not about systems, and I don't want to force them to pick a \"not-a-system\" tag. But then, I am against the necessity of using system-agnostic to begin with. \u2013\u00a0doppelgreener Jan 10 '18 at 14:01\n\u2022 Your downside's not a huge deal, IMO: it's really questions being asked about dnd5e, dnd3.5e, and PF that are the lion's share of the problem. \u2013\u00a0nitsua60 Jan 10 '18 at 14:21\n\u2022 I was asked in chat about the issue I had with system-agnostic. My reasoning is here -- it starts out as an explanation of my personal issues with the existence of the system-agnostic tag, and is then followed with my concerns of the problem that comes out of making a system tag or system-agnostic mandatory for every question. \u2013\u00a0doppelgreener Jan 10 '18 at 14:27\n\u2022 I think this has its heart in the right place. But I'm also not in favour of forcing every question to have a system tag of some kind (even if it were technically viable): that just makes it a tag tax, and unnecessarily puts redundant tags on questions that don't need them. The idea is that not every question is about a system, so not every question needs a tag about system. 
\u2013\u00a0SevenSidedDie Jan 10 '18 at 19:44\n\u2022 @SevenSidedDie your last sentence there is key \u2013\u00a0Wibbs Jan 11 '18 at 16:05","date":"2020-07-04 22:03:30","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 0, \"mathjax_display_tex\": 0, \"mathjax_asciimath\": 1, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.3233627378940582, \"perplexity\": 1563.2527333383046}, \"config\": {\"markdown_headings\": false, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2020-29\/segments\/1593655886706.29\/warc\/CC-MAIN-20200704201650-20200704231650-00403.warc.gz\"}"}
International Encyclopedia of the Social & Behavioral Sciences, James D. Wright. Hardback book in English – 02 Apr 2015.
Fully revised and updated, the second edition of the International Encyclopedia of the Social and Behavioral Sciences, first published in 2001, offers a source of social and behavioral sciences reference material that is broader and deeper than any other. Available in both print and online editions, it comprises over 3,900 articles, commissioned by 71 Section Editors, and includes 90,000 bibliographic references as well as comprehensive name and subject indexes. Provides authoritative, foundational, interdisciplinary knowledge across the wide range of behavioral and social sciences fields. Discusses history, current trends and future directions. Topics are cross-referenced with related topics and each article highlights further reading.
Price: 56063.72 lei. Old price: 62992.94 lei. Express points: 84096. 11278.84€ • 12673.93$ • 10180.37£. Pages: 23185. Dimensions: 1219 x 1016 x 368 mm. Weight: 95.71 kg. Edition: Revised. Publisher: ELSEVIER SCIENCE.
You might also be interested in: The Limits to Growth: The 30-Year Update, Donella H. Meadows; Networks of Outrage and Hope: Social Movements in the Internet Age, Manuel Castells; Raising Kids in the 21st Century: The Science of Psychological Health for Children, Sharon K. Hall; Women, Family, and Work: Writings on the Economics of Gender, Karine Moe; Shorter Views, Samuel R. Delany; The Consumer Society, Juliet Schor; Peasants Against Globalization: Rural Social Movements in Costa Rica, Marc Edelman; Feminism and Criminology, Ngaire Naffine; Alexis de Tocqueville on Democracy, Revolution, and Society, Alexis de Tocqueville; Smart Mobs: The Next Social Revolution, Howard Rheingold; Sketch for a Self-Analysis, Pierre Bourdieu; The Blackwell Companion to Social Movements, David A. Snow; Social Movements: An Introduction, Donatella Della Porta; Asking Questions: The Definitive Guide to Questionnaire Design – For Market Research, Political Polls, and Social and Health Questionnaires, Norman M. Bradburn; The Lonely Crowd – A Study of the Changing American Character, David Riesman; The Brave New World of Work, Ulrich Beck; Is Multiculturalism Bad for Women?, Susan Moller Okin; Writing at the Margin – Discourse Between Anthropology & Medicine (Paper), Arthur Kleinman; The Imaginary Institution of Society: Creativity and Autonomy in the Social-historical World, Cornelius Castoriadis.
Target audience: University and research libraries worldwide, especially those with collections in the social sciences.
Contents: Overarching Topics. Institutions and infrastructure of the Social and Behavioral Sciences (H. Anheier). History of the Social and Behavioral Sciences (C. Fleck). Ethics of Research (D. Feinholf, I. Callies, T. Bikson (deceased)). Biographies (J.-C. Marcel, A. Hess, D. Barrett). Methodology. Statistics (Q. Fu, X. Guo). Mathematics and Computer Sciences Applications (P. Bonacich). Logic of Inquiry, Databases and Research Design (S.L. Morgan). Disciplines. Anthropology (U. Hannerz, D. Boyer). Archaeology (S. Barber, T. Dupras). Demography (I.T. Elo, A.D. Foster). Economics (T. Nechyba, H. Yeung). Education (C. McBride-Chang, D. Muijs). Geography (S. Hanson, J.D. Sidaway). History (R. Whatmore, R. Hammersley). Law (R. Greenspan, K. Levine). Linguistics A (W.S.Y. Wang). Linguistics B (G. Jarema). Philosophy (C. Mantzavinos). Political Science (H.D. Clarke, M.C. Stewart). Clinical Psychology (W. Miltner). Cognitive Psychology (H. Cohen). Developmental Psychology (H. Keller, D. Crafa). Social Psychology (X. Chryssochoou). Personality Psychology (R.D. Roberts). Motivational Psychology (J. Eccles, K. Salmela-Aro). Sociology (D. S. Massey, Y. Sato). Criminology (K. Parker). Memory: Cognitive and Neuroscientific Aspects (S.D. Sala). Neuroscience of Language (H. Whitaker). Culture and the Arts (K. van Rees). Intersecting Fields. Evolutionary Sciences (A. Mesoudi). Genetics, Behaviour, History and Society (G. Cabana). Behavioral Neuroscience (B. Kolb). Cognitive Neuroscience (S. Cappa). Psychiatry (D. Bhugra). Health (J. Siegrist, C. Vogele). Gay, Lesbian, Bisexual and Trans-sexual Studies (D.C. Barrett). Religious Studies (D. Sherkat). Environmental and Ecological Sciences (M. Fischer-Kowalski, H. Rau, K. Zimmerer). Science and Technology Studies (M. Lynch). Area, Development and International Studies (J. Hass). Studies of the Life Course (M. Mills). Sexuality (L. Waite). Labor Studies (K. Zimmermann). War, Peace, Violence and Conflict (S. Malesevic). Applications. Management, Organizations, Business, Marketing and Finance (P.T. Bryant, J.A. Mathews). Media Studies and Mass Communication (G. Mesch). Urban Studies and Planning (F. Wu). Public Policy (A. Hassel). Contemporary Cultural Concerns (B. Miller). Applied, Industrial and Organizational Psychology (N.M. Ashkanasy). Social Work (L. Dominelli). Applied Social and Behavioral Sciences (D.J. Rog).
Reviews: "...provides a 14-year update to a reference work whose first edition was called by its reviewers, 'the largest corpus of knowledge about the social and behavioral sciences in existence' and 'the atomic bomb of reference works.' ...like its predecessor, a mammoth undertaking..." --PennLibNews. "While there are a few 'legacy articles' in the second edition, most of the articles are updated or totally rewritten." --CommPilings. "...a fully searchable second edition of the encyclopedia comprised of 26 volumes created by 39 editors and over 4000 authors." --Northwestern University Library. "...a 14-year update to a reference work whose first edition was called by its reviewers, 'the largest corpus of knowledge about the social and behavioral sciences in existence'...like its predecessor, a mammoth undertaking..." --Penn Libraries News Center. "...represents a grand…gesture of accomplishment, fulfilling the encyclopedic 20th-century ideal of an all-encompassing corpus of work. Summing Up: Recommended." --Choice. "Like the pyramids, the work is monumental in scope, will prove to be enduring in its contribution, and is surely one of the great wonders of the scientific world." --Contemporary Psychology, APA Review of Books, 2004. "IESBS, a major social science reference work with a user-friendly interface...Well worth the expense, even to libraries holding the print version..." --Choice
{ "redpajama_set_name": "RedPajamaCommonCrawl" }
274
What function do you tend to use most on your iPhone? My most used function depends on the time of day: when at work it's my iPod, on lunch the internet browser, on the drive home the phone/iPod, and at home SMS/internet browser. iPod, closely followed by email. This favorite-features question gets asked about every day. I just figured I would be nice, this time around. I wasn't trying to be mean about it; I just noticed it gets asked quite a bit. I guess people that don't have an iPhone yet are just asking so they'll know the coolest features when they get theirs. As far as time spent goes, I use the Podcast section in the iPod feature. I'm actually using the calendar more than I use it on my PC (or the net, since I pull in feeds to iCal from Google Calendar). I've also been using the stopwatch feature to time myself for personal productivity, e.g. seeing how long it takes me to take a shower and get dressed in the morning, and also for setting myself deadlines while I'm working.
{ "redpajama_set_name": "RedPajamaC4" }
265
\section{#1}} \renewcommand{\thesection.\arabic{equation}}}{\thesection.\arabic{equation}} \def\titlepage{\@restonecolfalse\if@twocolumn\@restonecoltrue\onecolumn \else \newpage \fi \thispagestyle{empty}\cdot@page\z@ \def\arabic{footnote}{\fnsymbol{footnote}} } \def\endtitlepage{\if@restonecol\twocolumn \else \fi \def\arabic{footnote}{\arabic{footnote}} \setcounter{footnote}{0}} \relax \hybrid \parskip=0.4em \makeatletter \newdimen\normalarrayskip \newdimen\minarrayskip \normalarrayskip\baselineskip \minarrayskip\jot \newif\ifold \oldtrue \def\oldfalse{\oldfalse} \def\arraymode{\ifold\relax\else\displaystyle\fi \def\eqnumphantom{\phantom{(\thesection.\arabic{equation}})}} \def\@arrayskip{\ifold\baselineskip\z@\lineskip\z@ \else \baselineskip\minarrayskip\lineskip1\baselineskip\fi} \def\@arrayclassz{\ifcase \@lastchclass \@acolampacol \or \@ampacol \or \or \or \@addamp \or \@acolampacol \or \@firstampfalse \@acol \fi \edef\@preamble{\@preamble \ifcase \@chnum \hfil$\relax\arraymode\@sharp$\hfil \or $\relax\arraymode\@sharp$\hfil \or \hfil$\relax\arraymode\@sharp$\fi}} \def\@array[#1]#2{\setbox\@arstrutbox=\hbox{\vrule height\arraystretch \ht\strutbox depth\arraystretch \dp\strutbox width\z@}\@mkpream{#2}\edef\@preamble{\halign \noexpand\@halignto \bgroup \tabskip\z@ \@arstrut \@preamble \tabskip\z@ \cr}% \let\@startpbox\@@startpbox \let\@endpbox\@@endpbox \if #1t\vtop \else \if#1b\vbox \else \vcenter \fi\fi \bgroup \let\par\relax \let\@sharp##\let\protect\relax \@arrayskip\@preamble} \def\eqnarray{\stepcounter{equation}% \let\@currentlabel=\thesection.\arabic{equation}} \global\@eqnswtrue \global\@eqcnt\z@ \tabskip\@centering \let\\=\@eqncr $$% \halign to \displaywidth \bgroup \eqnumphantom \@eqnsel \hskip\@centering $\displaystyle \tabskip\z@ {##}$% &\global\@eqcnt\@ne \hskip 2\arraycolsep $ \displaystyle \arraymode{##}$\hfil &\global\@eqcnt\tw@ \hskip 2\arraycolsep $\displaystyle\tabskip\z@{##}$\hfil \tabskip\@centering &{##}\tabskip\z@\cr} \makeatother \newcommand{{\mathbb{R}}}{{\mathbb{R}}} \newcommand{\mathcal{C}}{{\mathbb{C}}} \newcommand{\NN}{{\mathbb{N}}} \newcommand{{\mathbb{H}}}{{\mathbb{H}}} \newcommand{{\mathbb{Z}}}{{\mathbb{Z}}} \newcommand{{\mathbb{T}}}{{\mathbb{T}}} \newcommand{\mathbb{R}}{\mathbb{R}} \newcommand{\mathbb{C}}{\mathbb{C}} \newcommand{\mathbb{Q}}{\mathbb{Q}} \newcommand{\mathbb{F}}{\mathbb{F}} \newcommand{\overline{\mathbb{C}}}{\overline{\mathbb{C}}} \newcommand{\mathbb{Z}}{\mathbb{Z}} \def\mathbb{A}{\mathbb{A}} \def\mathbb{B}{\mathbb{B}} \def\mathbb{C}{\mathbb{C}} \def\mathbb{D}{\mathbb{D}} \def\mathbb{E}{\mathbb{E}} \def\mathbb{F}{\mathbb{F}} \def\mathbb{G}{\mathbb{G}} \def\mathbb{H}{\mathbb{H}} \def\mathbb{K}{\mathbb{K}} \def\mathbb{L}{\mathbb{L}} \def\mathbb{P}{\mathbb{P}} \def\mathbb{Q}{\mathbb{Q}} \def\mathbb{R}{\mathbb{R}} \def\mathbb{Z}{\mathbb{Z}} \def\mathcal{A} {\mathcal{A}} \def\mathcal{B} {\mathcal{B}} \def\mathcal{C} {\mathcal{C}} \def\mathcal{D} {\mathcal{D}} \def\mathcal{E} {\mathcal{E}} \def\mathcal{F} {\mathcal{F}} \def\mathcal{G} {\mathcal{G}} \def\mathcal{H} {\mathcal{H}} \def\mathcal{I} {\mathcal{I}} \def\mathcal{J} {\mathcal{J}} \def\mathcal{K} {\mathcal{K}} \def\mathcal{L} {\mathcal{L}} \def\mathcal{M} {\mathcal{M}} \def\mathcal{N} {\mathcal{N}} \def{\cal O} {\mathcal{O}} \def\mathcal{P} {\mathcal{P}} \def{\cal Q} {\mathcal{Q}} \def\mathcal{R} {\mathcal{R}} \def\mathcal{S} {\mathcal{S}} \def\mathcal{T} {\mathcal{T}} \def{\cal U} {\mathcal{U}} \def\mathcal{V} {\mathcal{V}} \def\mathcal{W} {\mathcal{W}} \def\mathcal{X} {\mathcal{X}} 
\def\mathcal{Y} {\mathcal{Y}} \def{\cal Z} {\mathcal{Z}} \def\mathcal{T}_+ {\mathcal{T}_+} \def\mathcal{T}_- {\mathcal{T}_-} \def{\cal A}{{\cal A}} \def{\cal B}{{\cal B}} \def{\cal C}{{\cal C}} \def{\cal D}{{\cal D}} \def{\cal H}{{\cal H}} \def{\cal M}{{\cal M}} \def{\cal L}{{\cal L}} \def{\cal O}{{\cal O}} \def{\cal R}{{\cal R}} \def{\cal S}{{\cal S}} \def{\theta} {{\theta}} \def{\Theta} {{\Theta}} \def{\omega} {{\omega}} \def\overline {{\overline}} \def{\alpha} {{\alpha}} \def{\beta} {{\beta}} \def{\gamma} {{\gamma}} \def{\sigma} {{\sigma}} \def\lambda {{\lambda}} \def{\Sigma}{{\Sigma}} \def\alpha{\alpha} \def\lambda{\lambda} \def\lambda{\lambda} \def\varepsilon{\varepsilon} \def\epsilon{\epsilon} \def\partial {\partial} \def\overline {\partial } {\overline {\partial }} \def\bar{i}{\bar{i}} \def\bar{j}{\bar{j}} \def{\bar{u}}{{\bar{u}}} \def\bar{w} {\bar{w}} \def\bar{z} {\bar{z}} \def\overline{k} {\overline{k}} \def\overline{A} {\overline{A}} \def{\widetilde{\omega}} {{\widetilde{\omega}}} \def{\widetilde{\rho}} {{\widetilde{\rho}}} \def\widetilde {{\widetilde}} \def{\mathop{\rm codim}}{{\mathop{\rm codim}}} \def{\rm cok}{{\rm cok}} \def{\mathop {\rm coker}}{{\mathop {\rm coker}}} \def{\rm Ch}{{\rm Ch}} \def{\cal H}{{\rm ch}} \def{\rm Det}{{\rm Det}} \def{\rm DET}{{\rm DET}} \def{\rm diff}{{\rm diff}} \def{\rm Diff}{{\rm Diff}} \def{\rm Id}{{\rm Id}} \def\cdot{\cdot} \def\nabla_{\partial}{\nabla_{\partial}} \def\nabla_{\bar {\partial}}{\nabla_{\bar {\partial}}} \def{\rm Lie}{{\rm Lie}} \def\noindent{\noindent} \def\nonumber{\nonumber} \def{\rm pt}{{\rm pt}} \def{\rm rank}{{\rm rank}} \def{\mathop{\rm Res}}{{\mathop{\rm Res}}} \def{\mathop{\rm Sym}}{{\mathop{\rm Sym}}} \def{\rm Tr}\,{{\rm Tr}} \def{\rm Td}{{\rm Td}} \def{\rm vol}{{\rm vol}} \def{\rm Vol}{{\rm Vol}} \def\mathfrak{\mathfrak} \def{\frak g}{{\mathfrak g}} \def{\frak h}{{\mathfrak h}} \newtheorem{te}{Theorem}[section \newtheorem{de}{Definition}[section] \newtheorem{prop}{Proposition}[section] \newtheorem{cor}{Corollary}[section] \newtheorem{lem}{Lemma}[section] \newtheorem{ex}{Example}[section] \newtheorem{rem}{Remark}[section] \newtheorem{conj}{Conjecture}[section] \newcommand\bqa{\begin{eqnarray}} \newcommand\eqa{\end{eqnarray}} \def\begin{eqnarray}\new\begin{array}{cc}{\begin{eqnarray}\oldfalse\begin{array}{cc}} \def\end{array}\end{eqnarray}{\end{array}\end{eqnarray}} \def\nonumber{\nonumber} \defg{g} \def\begin{equation}{\begin{equation}} \def\end{equation}{\end{equation}} \def\bse{\begin{subequations}} \def\end{subequations}{\end{subequations}} \def\begin{pmatrix}{\begin{pmatrix}} \def\end{pmatrix}{\end{pmatrix}} \def\be\label{\begin{eqnarray}\new\begin{array}{cc}\label} \def\hbar{\hbar} \def\imath{\imath} \defQ\!\!\!\! Q{Q\!\!\!\! Q} \def\noindent {\it Proof}. {\noindent {\it Proof}. 
} \newcommand\cod{\operatorname{codim}} \newcommand\im{\operatorname{im}} \newcommand\id{\operatorname{id}} \newcommand\coim{\operatorname{coim}} \newcommand\rk{\operatorname{rank}} \newcommand\ann{\operatorname{ann}} \newcommand{{\bf g}}{{\bf g}} \def\square{\hfill{\vrule height6pt width6pt depth1pt} \break \vspace{.01cm}} \def\ws{\hfill{$\square$}} \def\stack#1#2{\raise0.7pt\hbox{$\mathrel{\mathop{#2}\limits^{#1}}$}} \def{\rm tr}\,{\triangleright} \def\triangleleft{\triangleleft} \def\sem{\mathsurround=0pt \raise1pt \hbox{$\scriptscriptstyle>\!\!$}\:\!\!\triangleleft} \def\mes{\mathsurround=0pt {\rm tr}\,\!\:\!\raise0.8pt \hbox{$\scriptscriptstyle\!\!<$}\,} \def\]{\mathsurround=0pt ]\raise-2pt\hbox{$_\ast$}} \def{\bfit\alpha}{{\bfit\alpha}} \def{\bfit\beta}{{\bfit\beta}} \def{\bfit\gamma}{{\bfit\gamma}} \def\bnu{{\bfit\nu}} \def{\bfit\mu}{{\bfit\mu}} \def{\bfit\omega}{{\bfit\omega}} \def{\bfit\phi}{{\bfit\phi}} \def{\bfit\lambda}{{\bfit\lambda}} \def{\bfit\rho}{{\bfit\rho}} \def\<{\langle} \def\>{\rangle} \def\overline{\overline} \def\widetilde{\widetilde} \def\widehat{\widehat} \def\varkappa{\varkappa} \def{\cal Q}{{\cal Q}} \def\mathfrak{\mathfrak} \def{\cal A}{{\cal A}} \def{\cal B}{{\cal B}} \def{\cal C}{{\cal C}} \def{\cal D}{{\cal D}} \def{\cal H}{{\cal H}} \def{\cal M}{{\cal M}} \def{\cal L}{{\cal L}} \def{\cal O}{{\cal O}} \def{\cal O}{{\cal O}} \def{\cal U}{{\cal U}} \def{\cal Z}{{\cal Z}} \def{\cal R}{{\cal R}} \def{\cal S}{{\cal S}} \def\mathcal{H}{\mathcal{H}} \def{\scriptscriptstyle N}{{\scriptscriptstyle N}} \def\ts#1#2{{\textstyle\frac{#1}{#2}}} \def\frak k{\mathfrak k} \def\raise-1pt\hbox{$\,\stackrel{\wedge}{,}\,$}{\raise-1pt\hbox{$\,\stackrel{\wedge}{,}\,$}} \def{\rm tr}\,{{\rm tr}\,} \def{\rm Tr}\,{{\rm Tr}\,} \def\partial {\partial} \defA{A} \defB{B} \defC{C} \defD{D} \defE{E} \defF{F} \defH{H} \def\varphi{\varphi} \newcounter{pac}[section] \newcommand{\npa}{\addtocounter{pac}{1} \noindent {\bf \arabic{section}.\arabic{pac}}\,\,\,} \newcounter{pacc}[subsection] \newcommand{\npaa}{\addtocounter{pacc}{1} \noindent {\bf \arabic{section}.\arabic{subsection}.\arabic{pacc}}\,\,\,} \setcounter{pac}{0} \setcounter{footnote}0 \begin{document} \setcounter{pac}{0} \setcounter{footnote}0 \begin{center} \phantom. \bigski {\Large\bf On modular double of semisimple quantum groups} \vspace{1cm} \bigskip\bigskip {\large Pavel Sultanich \footnote {E-mail: sultanichp@gmail.com}},\\ \bigskip {\it Moscow Center for Continuous Mathematical Education, 119002, Bolshoy Vlasyevsky Pereulok 11, Moscow, Russia }\\ \bigskip \end{center} \begin{abstract} \noindent In this note we propose a construction of the Hopf algebra of a complex analog of devided powers of the Weyl generators of a semisimple simply-laced quantum group. Here we consider the generators as positive, self-adjoint operators. In particular, we generalize the Lusztig relations \cite{Lu 1} on the usual divided powers of generators of a quantum group to the case of complex devided powers of generators. These relations, some of which were known \cite{Ip} present the complete set of defining relations. As a by-product result, the pure algebraic definition of the Faddeev modular double in the case of semisimple simply-laced quantum groups is formulated. Finally, we introduce an infinite dimension version of the Gelfand-Zetlin finite-dimensional representation of the modular double $M_{q,\tilde{q}}(\mathfrak{gl}(N))$. 
\end{abstract} \vspace{5 mm} \section{Introduction} The notion of the modular double of a quantum group was originally introduced by Faddeev \cite{F2}. In the case of $U_{q}(\mathfrak{sl}(2,\mathbb{R}))$, $q = e^{\pi\imath b^{2}}$, $b^{2}\in \mathbb{R}\setminus\mathbb{Q}$, he noticed that the class of representations of the modular double analogous to the principal series representations of $SL_2(\mathbb{R})$ has a remarkable duality, similar to the duality of non-commutative tori discovered in \cite{Rif}. More generally, the modular double plays an important role in Liouville theory \cite{PT}, \cite{FKV}, in the relativistic Toda model \cite{KLSTS} and in some other problems of mathematical physics. Recently, considerable progress has been made in generalizing the modular double to the case of $U_{q}(\mathfrak{g})$, where $\mathfrak{g}$ is a semisimple Lie algebra of rank $r$ \cite{FrIp}, \cite{Ip}, \cite{Ip2}. In these papers certain powers of the generators have been used in order to construct the dual quantum group. This idea was first introduced in the case of $U_{q}(\mathfrak{sl}(2,\mathbb{R}))$ in \cite{BT}. Let $U_{q}(\mathfrak{g})$ be the quantum group corresponding to a semisimple simply-laced Lie algebra $\mathfrak{g}$ of rank $r$ with Cartan matrix $a_{ij} = a_{ji} \in\{0, -1\}$, for $i\ne j$, $1\le i,j\le r$. If we assume that the generators of $U_{q}(\mathfrak{g})$ are positive self-adjoint operators, then their complex powers can be defined \cite{Sh book}. In Theorem 4.1 the Hopf algebra of arbitrary complex analogs of divided powers of the generators is constructed, which is a generalization of the algebra of divided powers $\frac{X^{N}}{[N]_{q}!}$ studied by Lusztig \cite{Lu 1}.
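For orientation we recall the standard notation (this explicit recall is ours; the same integer divided powers reappear in Section 3): \[ X^{(N)} \,=\, \frac{X^{N}}{[N]_{q}!}, \qquad [N]_{q}! \,=\, \prod_{k=1}^{N}\frac{q^{k}-q^{-k}}{q-q^{-1}}, \] while the complex divided powers $A^{(\imath s)}$ considered below are defined in Section 2 via the non-compact quantum dilogarithm $G_{b}$.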
In the present paper we propose a natural derivation of full set of relations of modular double from the integral relations of the algebra of complex powers (see Theorem 5.1). Next we study an algebraic definition of modular double. The modular double is a Hopf algebra with the two sets of generators: generators $K_{j}$, $\mathcal{E}_{j}$, $\mathcal{F}_{j}$, $1\le j\le r$ subjected to standard relations of $U_{q}(\mathfrak{g})$, and additional generators $\tilde{K}_{j}$, $\tilde{\mathcal{E}}_{j}$, $\tilde{\mathcal{F}}_{j}$, $1\le j\le r$ subjected the standard relations of $U_{\tilde{q}}(\mathfrak{g})$, together with the same cross relations but free of the constraints $\tilde{K}_{j} = K_{j}^{b^{-2}}$, $\tilde{\mathcal{E}}_{j} = \mathcal{E}_{j}^{b^{-2}}$, $\tilde{\mathcal{F}}_{j} = \mathcal{F}_{j}^{b^{-2}}$, $1\le j\le r$. The algebraic definition is motivated by the fact that these Hopf algebras have two types of realizations of the principal series representations. In Theorem 5.2 a realization of representation of the modular double of $U_{q}(\mathfrak{gl}(N))$ of the first type is given. This realization of the principal series representation \cite{GKL1}, \cite{GKL2}, \cite{GKLO} of the modular double can be viewed as an infinite-dimensional analog of the Gelfand-Zetlin finite-dimensional representation for classical groups \cite{GZ}. The realizations of second type \cite{FrIp}, \cite{Ip2} are the principal series representations in Lusztig parametrization. These representations are $q$-deformed versions of representations of universal enveloping algebra $U(\mathfrak{g})$ by differential operators in Lusztig parametrization introduced in \cite{GLO}(sections 2.4.1-2.4.4) for classical series Lie algebras. The non-trivial properties of the representations are transcendental relations $\tilde{K}_{j} = K_{j}^{b^{-2}}$, $\tilde{\mathcal{E}}_{j} = \mathcal{E}_{j}^{b^{-2}}$, $\tilde{\mathcal{F}}_{j} = \mathcal{F}_{j}^{b^{-2}}$, $1\le j\le r$. Using the classical limit of this realization leads to construction of Whittaker function in terms of the stationary phase integral, generalizing Givental's formula, see \cite{GLO}. {\bf Acknowledgements:} The research was supported by RSF (project № 16-11-10075). I am grateful to D.R. Lebedev for the statement of the problem and his interest in this work. \section{Preliminaries} We start with the definition of quantum groups following \cite{ChPr},\cite{Lu book}. Let $(a_{ij})_{1\le i,j\le r}$ be Cartan matrix of semisimple Lie algebra $\mathfrak{g}$ of rank $r$. Let $\mathfrak{b}_{\pm}\subset \mathfrak{g}$ be opposite Borel subalgebras. For simplicity let us restrict ourselves to the simply-laced case $a_{ii} = 2$, $a_{ij} = a_{ji} = \{0,-1\}$, $i\ne j$. Let $U_{q}(\mathfrak{g})$ $(q = e^{\pi\imath b^{2}}$, $b^{2}\in \mathbb{R}\setminus \mathbb{Q})$ be the quantum group with generators $E_{j}$, $F_{j}$, $K_{j} = q^{H_{j}}$, $1\le j \le r$ and relations \begin{equation} K_{i}K_{j} = K_{j}K_{i}, \end{equation} \begin{equation} K_{i}E_{j} = q^{a_{ij}}E_{j}K_{i}, \end{equation} \begin{equation} K_{i}F_{j} = q^{-a_{ij}}F_{j}K_{i}, \end{equation} \begin{equation} E_{i}F_{j} - F_{j}E_{i} = \delta_{ij}\frac{K_{i} - K_{i}^{-1}}{q-q^{-1}}. \end{equation} For $a_{ij} = 0$ we have \begin{equation} E_{i}E_{j} = E_{j}E_{i}, \end{equation} \begin{equation} F_{i}F_{j} = F_{j}F_{i}. 
\end{equation} For $a_{ij} = 0$ we have \begin{equation} E_{i}E_{j} = E_{j}E_{i}, \end{equation} \begin{equation} F_{i}F_{j} = F_{j}F_{i}. \end{equation} For $a_{ij} = -1$ we have \begin{equation} E_{i}^{2}E_{j} - (q+q^{-1})E_{i}E_{j}E_{i} + E_{j}E_{i}^{2} = 0, \end{equation} \begin{equation} F_{i}^{2}F_{j} - (q+q^{-1})F_{i}F_{j}F_{i} + F_{j}F_{i}^{2} = 0. \end{equation} The coproduct is given by \begin{equation} \Delta E_{j} = E_{j}\otimes 1 + K_{j}^{-1}\otimes E_{j}, \end{equation} \begin{equation} \Delta F_{j} = 1\otimes F_{j} + F_{j}\otimes K_{j}, \end{equation} \begin{equation} \Delta K_{j} = K_{j}\otimes K_{j}. \end{equation} The non-compact quantum dilogarithm $G_{b}(z)$ is a special function introduced in \cite{F1} (see also \cite{F0}, \cite{FKV}, \cite{V}, \cite{Ka}, \cite{KLSTS}, \cite{BT}). It is defined as follows: \begin{equation} \log G_{b}(z) = \log\bar{\zeta}_{b} - \int\limits_{\mathbb{R}+\imath 0} \frac{dt}{t}\frac{e^{zt}}{(1-e^{bt})(1-e^{b^{-1}t})}, \end{equation} where $Q = b+b^{-1}$ and $\zeta_{b} = e^{\frac{\pi\imath}{4} + \frac{\pi\imath(b^{2}+b^{-2})}{12}}$. Note that $G_{b}(z)$ is closely related to the double sine function $S_{2}(z|\omega_{1},\omega_{2})$; see eq.~(A.22) in \cite{KLSTS}. Below we outline some properties of $G_{b}(z)$; for details see the appendix.\\* 1. The function $G_{b}(z)$ has simple poles and zeros at the points \begin{equation} z = -n_{1}b -n_{2}b^{-1}, \end{equation} \begin{equation} z = Q +n_{1}b + n_{2}b^{-1}, \end{equation} respectively, where $n_{1}$, $n_{2}$ are nonnegative integers.\\* 2. $G_{b}(z)$ has the following asymptotic behavior: \begin{equation} G_{b}(z) \sim \begin{cases} \bar{\zeta}_{b}, Im z \rightarrow +\infty ,\\ \zeta_{b} e^{\pi\imath z(z-Q)}, Im z \rightarrow -\infty . \end{cases} \end{equation} 3. Functional equation: \begin{equation} G_{b}(z +b^{\pm 1}) = (1-e^{2\pi\imath b^{\pm 1}z})G_{b}(z). \end{equation} 4. Reflection formula: \begin{equation} G_{b}(z)G_{b}(Q-z) = e^{\pi\imath z(z-Q)}. \end{equation} For $1\le j \le r$ let us introduce the following rescaled generators: \begin{equation}\label{rescaled E} \mathcal{E}_{j} = -\imath (q-q^{-1})E_{j}, \end{equation} \begin{equation}\label{rescaled F} \mathcal{F}_{j} = -\imath (q-q^{-1})F_{j}. \end{equation} We will assume the elements $\mathcal{E}_{j}$, $\mathcal{F}_{j}$, $K_{j}$, $1 \le j \le r$, to be represented by positive self-adjoint operators. For such an operator $A$ one can define its arbitrary complex powers \cite{Sh book} so that the following properties are satisfied:\\* 1. $A^{\imath s}$ is holomorphic in the half-plane $Im(s)> -k$ for any $k\in \mathbb{Z}$. \\* 2. Group property: \begin{equation} A^{\imath s_{1}}A^{\imath s_{2}} = A^{\imath s_{1} + \imath s_{2}}. \end{equation} 3. For $\imath s = k$, $k\in\mathbb{Z}$, $A^{k}$ is the ordinary power of $A$.\\* Next, we define the arbitrary divided powers of $A$ by \begin{equation}\label{complex devided power} A^{(\imath s)} = G_{b}(-\imath bs)A^{\imath s}. \end{equation} We are going to consider the algebra spanned by the elements $\mathcal{E}_{j}^{(\imath s)}$, $\mathcal{F}_{j}^{(\imath t)}$, $K_{j}^{\imath p}$, $1 \le j \le r$. \section{Generalized Kac's identity} Below we give a generalization of Kac's identity \cite{Lu 1}, eq.~(4.1a), to the case of complex powers of the generators.
Then the following generalized Kac's identity holds: \bigskip \begin{equation}\begin{split} \mathcal{E}_{j}^{(\imath s)}\mathcal{F}_{j}^{(\imath t)} = \int\limits_{\mathcal{C}} d\tau e^{\pi bQ\tau}\mathcal{F}_{j}^{(\imath t+\imath\tau)}K_{j}^{-\imath \tau} \frac{G_{b}(\imath b\tau)G_{b}(-bH_{j} + \imath b(s+t+\tau))}{G_{b}(-bH_{j}+\imath b(s+t+2\tau))}\mathcal{E}_{j}^{(\imath s + \imath \tau)}, \end{split}\end{equation} \bigskip where the contour $\mathcal{C}$ goes along the real axis above the sequences of poles going down: \\* $\tau = -s-\imath n_{1}-\imath n_{2}b^{-2}$, $\tau = -t-\imath n_{1}-\imath n_{2}b^{-2}$, $\tau = -\frac{\imath b^{-1}Q}{2}-\frac{\imath H_{j}}{2} -\frac{s}{2}-\frac{t}{2}-\frac{\imath n_{1}}{2}-\frac{\imath b^{-2}n_{2}}{2}$, \\* and below the sequences of poles going up:\\* $\tau = \imath n_{1} +\imath n_{2}b^{-2}$, $\tau = -\imath H_{j}-s-t+\imath n_{1}+\imath n_{2}b^{-2}$,\\* where $n_{1}$, $n_{2}$ are non-negative integers. \end{te} $\noindent {\it Proof}. $ The proof is based on Drinfeld's double construction \cite{D} and will be published elsewhere. $\Box$ Let $N$, $M$ be positive integers. Define the integer divided powers of the generators $E_{j}$, $F_{j}$, $1\le j \le r$ of the quantum group $U_{q}(\mathfrak{g})$: \begin{equation} E_{j}^{(N)} = \frac{\prod\limits_{k=1}^{N}(q-q^{-1})}{\prod\limits_{k=1}^{N}(q^{k}-q^{-k})}E_{j}^{N}, \end{equation} \begin{equation} F_{j}^{(M)} = \frac{\prod\limits_{k=1}^{M}(q-q^{-1})}{\prod\limits_{k=1}^{M}(q^{k}-q^{-k})}F_{j}^{M}. \end{equation} \begin{cor} Let $\imath s = N$, $\imath t = M$, where $N$, $M$ are non-negative integers. Then the generalized Kac's identity reduces to the standard Kac's identity \cite{Lu 1}, eq.(4.1a): \begin{equation} E_{j}^{(N)}F_{j}^{(M)} = \sum\limits_{n=0}^{min(N,M)} F_{j}^{(M-n)} \frac{\prod\limits_{k=1}^{n}(q^{-N-M+n+k}K_{j}-q^{N+M-n-k}K_{j}^{-1})}{\prod\limits_{k=1}^{n}(q^{k}-q^{-k})} E_{j}^{(N-n)}, \end{equation} where $1\le j\le r$. Substituting $N = M = 1$ into this formula, one recovers the relation \begin{equation} [E_{j},F_{j}] = \frac{K_{j}-K_{j}^{-1}}{q-q^{-1}}, \end{equation} for $1\le j\le r$. \end{cor} $\noindent {\it Proof}. $ Rewrite the generalized Kac's identity in the following form: $$ \mathcal{E}_{j}^{\imath s}\mathcal{F}_{j}^{\imath t} = \int d\tau e^{\pi bQ\tau} \frac{G_{b}(\imath b\tau)G_{b}(-\imath bt-\imath b\tau)}{G_{b}(-\imath bt)} \frac{G_{b}(-\imath bs-\imath b\tau)}{G_{b}(-\imath bs)}\mathcal{F}_{j}^{\imath t+\imath\tau} K_{j}^{-\imath\tau} \frac{G_{b}(-bH_{j}+\imath b(s+t+\tau))}{G_{b}(-bH_{j}+\imath b(s+t+2\tau))}\mathcal{E}_{j}^{\imath s+\imath\tau}. $$ Here the divided powers have been unpacked by means of (\ref{complex devided power}) and both sides have been divided by $G_{b}(-\imath bs)G_{b}(-\imath bt)$. Set $\imath t = M$ to be some nonnegative integer.
Then, using the delta-distribution formula \cite{Ip} \begin{equation}\begin{split} \frac{G_{b}(x)G_{b}(-N_{1}b-N_{2}b^{-1}-x)}{G_{b}(-N_{1}b-N_{2}b^{-1})} = \sum\limits_{n_{1}=0}^{N_{1}}\sum\limits_{n_{2}=0}^{N_{2}}\frac{\prod\limits_{k_{1}=1}^{N_{1}}(1-q^{-2k_{1}})}{\prod\limits_{k_{1}=1}^{n_{1}}(1-q^{-2k_{1}})\prod\limits_{k_{1}=1}^{N_{1}-n_{1}}(1-q^{-2k_{1}})} \\ \times \frac{\prod\limits_{k_{2}=1}^{N_{2}}(1-\tilde{q}^{-2k_{2}})}{\prod\limits_{k_{2}=1}^{n_{2}}(1-\tilde{q}^{-2k_{2}})\prod\limits_{k_{2}=1}^{N_{2}-n_{2}}(1-\tilde{q}^{-2k_{2}})} \delta(x+n_{1}b+n_{2}b^{-1}), \end{split}\end{equation} we obtain $$ \mathcal{E}_{j}^{\imath s}\mathcal{F}_{j}^{M} = \sum\limits_{n=0}^{M} \frac{\prod\limits_{k=1}^{M}(1-q^{-2k})}{\prod\limits_{k=1}^{n}(1-q^{-2k})\prod\limits_{k=1}^{M-n}(1-q^{-2k})}\times $$ $$ \int d\tau e^{\pi bQ\tau}\delta(\imath b\tau + nb) \frac{G_{b}(-\imath bs-\imath b\tau)}{G_{b}(-\imath bs)}\mathcal{F}_{j}^{M+\imath\tau} K_{j}^{-\imath\tau} \frac{G_{b}(-bH_{j}+\imath bs+Mb+\imath b\tau)}{G_{b}(-bH_{j}+\imath bs+Mb+2\imath b\tau)}\mathcal{E}_{j}^{\imath s+\imath\tau} = $$ $$ \sum\limits_{n=0}^{M} \frac{\prod\limits_{k=1}^{M}(1-q^{-2k})}{\prod\limits_{k=1}^{n}(1-q^{-2k})\prod\limits_{k=1}^{M-n}(1-q^{-2k})} (-1)^{n}q^{n}\times $$ $$ \frac{G_{b}(-\imath bs+nb)}{G_{b}(-\imath bs)}\mathcal{F}_{j}^{M-n}K_{j}^{n} \frac{G_{b}(-bH_{j}+\imath bs+Mb-2nb+nb)}{G_{b}(-bH_{j}+\imath bs+Mb-2nb)}\mathcal{E}_{j}^{\imath s-n} = $$ $$ \sum\limits_{n=0}^{M} \frac{\prod\limits_{k=1}^{M}(1-q^{-2k})}{\prod\limits_{k=1}^{n}(1-q^{-2k})\prod\limits_{k=1}^{M-n}(1-q^{-2k})} (-1)^{n}q^{n} \prod\limits_{k=0}^{n-1}(1-q^{2k}e^{2\pi b^{2}s})\mathcal{F}_{j}^{M-n}K_{j}^{n}\times $$ $$ \times\prod\limits_{k=0}^{n-1}(1-q^{2k}e^{2\pi\imath b(-bH_{j}+\imath bs+Mb-2nb)})\mathcal{E}_{j}^{\imath s-n}. $$ To write the last line we have used the functional equation for the quantum dilogarithm: \begin{equation} \frac{G_{b}(x+n_{1}b+n_{2}b^{-1})}{G_{b}(x)} = \prod\limits_{k_{1} =0}^{n_{1}-1}(1-q^{2k_{1}}e^{2\pi\imath bx})\prod\limits_{k_{2} =0}^{n_{2}-1}(1-\tilde{q}^{2k_{2}}e^{2\pi\imath b^{-1}x}). \end{equation} We have $$ \mathcal{E}_{j}^{\imath s}\mathcal{F}_{j}^{M} = \sum\limits_{n=0}^{M} \frac{\prod\limits_{k=1}^{M}(q^{k}-q^{-k})}{\prod\limits_{k=1}^{n}(q^{k}-q^{-k})\prod\limits_{k=1}^{M-n}(q^{k}-q^{-k})}(-1)^{n} \prod\limits_{k=0}^{n-1}(q^{-k}e^{-\pi b^{2}s} - q^{k}e^{\pi b^{2}s})\times $$ $$ \mathcal{F}_{j}^{M-n} \prod\limits_{k=0}^{n-1}(e^{\pi b^{2}s-\pi\imath b^{2}M+2\pi\imath b^{2}n-\pi\imath b^{2}k}K_{j} -e^{-\pi b^{2}s+\pi\imath b^{2}M-2\pi\imath b^{2}n+\pi\imath b^{2}k}K_{j}^{-1})\mathcal{E}_{j}^{\imath s-n}. $$ If $M=1$, then we get the formula from \cite{BT}, eq.(3.25), \begin{equation} [\mathcal{E}_{j}^{\imath s},\mathcal{F}_{j}] = -(q^{\imath s} - q^{-\imath s})(q^{H_{j}-\imath s+1} - q^{-H_{j}+\imath s-1})\mathcal{E}_{j}^{\imath s-1}, \end{equation} which was derived in a representation in \cite{BT}. Now let $M$ again be an arbitrary positive integer, set $\imath s = N$, and substitute $$ \mathcal{E}_{j}^{N} = (-\imath)^{N}\prod\limits_{k=1}^{N}(q^{k}-q^{-k})E_{j}^{(N)}, $$ $$ \mathcal{F}_{j}^{M} = (-\imath)^{M}\prod\limits_{k=1}^{M}(q^{k}-q^{-k})F_{j}^{(M)}, $$ to obtain the Kac formula: $$ E_{j}^{(N)}F_{j}^{(M)} = \sum\limits_{n=0}^{min(N,M)} F_{j}^{(M-n)} \frac{\prod\limits_{k=1}^{n}(q^{-N-M+n+k}K_{j}-q^{N+M-n-k}K_{j}^{-1})}{\prod\limits_{k=1}^{n}(q^{k}-q^{-k})} E_{j}^{(N-n)}. $$ Now set $\imath s = 1$ in the generalized Kac's identity and let $\imath t$ be arbitrary.
Then, by evaluating the integral as above, one can prove the formula \cite{BT}, eq.(3.25): \begin{equation} [\mathcal{E}_{j},\mathcal{F}_{j}^{\imath t}] = -(q^{\imath t}-q^{-\imath t})\mathcal{F}_{j}^{\imath t-1}(q^{H_{j}-\imath t+1} - q^{-H_{j}+\imath t-1}). \end{equation} $\Box$ Let $\tilde{q} = e^{\pi\imath b^{-2}}$. Define the dual generators $\tilde{K}_{j}$, $\tilde{E}_{j}$, $\tilde{F}_{j}$, $1\le j\le r$ by \begin{equation} \tilde{K}_{j} = K_{j}^{b^{-2}}, \end{equation} \begin{equation} \tilde{E}_{j} = \frac{\imath}{\tilde{q}-\tilde{q}^{-1}}\mathcal{E}_{j}^{b^{-2}}, \end{equation} \begin{equation} \tilde{F}_{j} = \frac{\imath}{\tilde{q}-\tilde{q}^{-1}}\mathcal{F}_{j}^{b^{-2}}, \end{equation} and the divided powers of $\tilde{E}_{j}$ and $\tilde{F}_{j}$, $1\le j\le r$: \begin{equation} \tilde{E}_{j}^{(N)} = \frac{\prod\limits_{k=1}^{N}(\tilde{q}-\tilde{q}^{-1})}{\prod\limits_{k=1}^{N}(\tilde{q}^{k}-\tilde{q}^{-k})}\tilde{E}_{j}^{N}, \end{equation} \begin{equation} \tilde{F}_{j}^{(M)} = \frac{\prod\limits_{k=1}^{M}(\tilde{q}-\tilde{q}^{-1})}{\prod\limits_{k=1}^{M}(\tilde{q}^{k}-\tilde{q}^{-k})}\tilde{F}_{j}^{M}. \end{equation} \begin{cor} By setting $\imath s = Nb^{-2}$, $\imath t = Mb^{-2}$ in the generalized Kac's identity, where $N$, $M$ are non-negative integers, one obtains the standard Kac's identity for the dual quantum group: \begin{equation} \tilde{E}_{j}^{(N)}\tilde{F}_{j}^{(M)} = \sum\limits_{n=0}^{min(N,M)} \tilde{F}_{j}^{(M-n)} \frac{\prod\limits_{k=1}^{n}(\tilde{q}^{-N-M+n+k}\tilde{K}_{j}-\tilde{q}^{N+M-n-k}\tilde{K}_{j}^{-1})}{\prod\limits_{k=1}^{n}(\tilde{q}^{k}-\tilde{q}^{-k})} \tilde{E}_{j}^{(N-n)}, \end{equation} where $1\le j\le r$. Taking $N = M = 1$, we obtain one of the defining relations of the dual quantum group \begin{equation} [\tilde{E}_{j},\tilde{F}_{j}] = \frac{\tilde{K}_{j}-\tilde{K}_{j}^{-1}}{\tilde{q}-\tilde{q}^{-1}}, \end{equation} $1\le j\le r$. \end{cor} $\noindent {\it Proof}. $ Following the same steps as in the proof of the previous corollary, with the only difference that now $\imath t = Mb^{-2}$ and $\imath s = Nb^{-2}$, and substituting, for $1\le j\le r$, $$ \mathcal{E}_{j}^{Nb^{-2}} = (-\imath)^{N}\prod\limits_{k=1}^{N}(\tilde{q}^{k}-\tilde{q}^{-k})\tilde{E}_{j}^{(N)}, $$ $$ \mathcal{F}_{j}^{Mb^{-2}} = (-\imath)^{M}\prod\limits_{k=1}^{M}(\tilde{q}^{k}-\tilde{q}^{-k})\tilde{F}_{j}^{(M)}, $$ we obtain Kac's identity for the modular dual group with the parameter $\tilde{q} = e^{\pi\imath b^{-2}}$: $$ \tilde{E}_{j}^{(N)}\tilde{F}_{j}^{(M)} = \sum\limits_{n=0}^{min(N,M)} \tilde{F}_{j}^{(M-n)} \frac{\prod\limits_{k=1}^{n}(\tilde{q}^{-N-M+n+k}\tilde{K}_{j}-\tilde{q}^{N+M-n-k}\tilde{K}_{j}^{-1})}{\prod\limits_{k=1}^{n}(\tilde{q}^{k}-\tilde{q}^{-k})} \tilde{E}_{j}^{(N-n)}. $$ $\Box$ \begin{cor} The following relations hold: \begin{equation} [\mathcal{E}_{j}^{\imath s},\mathcal{F}_{j}] = -(q^{\imath s} - q^{-\imath s})(q^{H_{j}-\imath s+1} - q^{-H_{j}+\imath s-1})\mathcal{E}_{j}^{\imath s-1}, \end{equation} \begin{equation} [\mathcal{E}_{j},\mathcal{F}_{j}^{\imath t}] = -(q^{\imath t}-q^{-\imath t})\mathcal{F}_{j}^{\imath t-1}(q^{H_{j}-\imath t+1} - q^{-H_{j}+\imath t-1}), \end{equation} where $1\le j\le r$. Let $\tilde{\mathcal{E}}_{j} = \mathcal{E}_{j}^{b^{-2}}$, $\tilde{\mathcal{F}}_{j} = \mathcal{F}_{j}^{b^{-2}}$, $1\le j\le r$. Then setting $\imath s = b^{-2}$ and $\imath t = b^{-2}$ one obtains \begin{equation} [\tilde{\mathcal{E}}_{j},\mathcal{F}_{j}] = [\mathcal{E}_{j},\tilde{\mathcal{F}}_{j}] = 0. \end{equation} \end{cor} $\noindent {\it Proof}. $ See the proof of Corollary 3.1.
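To make the last relations explicit (we spell out this elementary check), set $\imath s = b^{-2}$ in the first formula of the corollary. Since $q = e^{\pi\imath b^{2}}$, the scalar prefactor vanishes:
\begin{equation}
q^{\imath s} - q^{-\imath s}\Big|_{\imath s = b^{-2}} = e^{\pi\imath b^{2}b^{-2}} - e^{-\pi\imath b^{2}b^{-2}} = e^{\pi\imath} - e^{-\pi\imath} = 0,
\end{equation}
so that $[\tilde{\mathcal{E}}_{j},\mathcal{F}_{j}] = [\mathcal{E}_{j}^{b^{-2}},\mathcal{F}_{j}] = 0$; the relation $[\mathcal{E}_{j},\tilde{\mathcal{F}}_{j}] = 0$ follows in the same way from the second formula with $\imath t = b^{-2}$.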
$\Box$ \section{Hopf algebra of arbitrary complex divided powers} Let $U_{q}(\mathfrak{g})$ be the quantum group corresponding to a semisimple simply-laced Lie algebra $\mathfrak{g}$ with Cartan matrix $(a_{ij})$, $1\le i,j\le r$. Recall that we have introduced the rescaled generators $\mathcal{E}_{j}$, $\mathcal{F}_{j}$ of $U_{q}(\mathfrak{g})$ by the formulas \begin{equation} \mathcal{E}_{j} = -\imath (q-q^{-1})E_{j}, \end{equation} \begin{equation} \mathcal{F}_{j} = -\imath (q-q^{-1})F_{j}. \end{equation} We have also defined arbitrary divided powers \begin{equation} A^{(\imath s)} = G_{b}(-\imath bs)A^{\imath s}. \end{equation} For $a_{ij} = -1$ define the non-simple root generators by \begin{equation} \mathcal{E}_{ij} = \frac{q^{\frac{1}{2}}\mathcal{E}_{j}\mathcal{E}_{i}-q^{-\frac{1}{2}}\mathcal{E}_{i}\mathcal{E}_{j}}{q-q^{-1}}, \end{equation} \begin{equation} \mathcal{F}_{ij} = \frac{q^{\frac{1}{2}}\mathcal{F}_{j}\mathcal{F}_{i}-q^{-\frac{1}{2}}\mathcal{F}_{i}\mathcal{F}_{j}}{q-q^{-1}}. \end{equation} The next theorem generalizes the subalgebra of divided powers $\frac{X^{n}}{[n]_{q}!}$, $X\in U_{q}(\mathfrak{g})$, and the relations \cite{Lu 1}, eq.(4.1a)-(4.1j), to the entire Hopf algebra of complex divided powers of the generators. \begin{te} The elements $K_{j}^{\imath p}$, $\mathcal{E}_{j}^{(\imath s)}$, $\mathcal{F}_{j}^{(\imath t)}$, $1\le j \le r$ generate an associative and coassociative Hopf algebra ${A}(\mathfrak{g})$ with the following commutation relations and coproduct: \begin{equation} K_{i}^{\imath p_{1}}K_{j}^{\imath p_{2}} = K_{j}^{\imath p_{2}}K_{i}^{\imath p_{1}}, \end{equation} \begin{equation} K_{j}^{\imath p_{1}}K_{j}^{\imath p_{2}} = K_{j}^{\imath p_{1}+\imath p_{2}}, \end{equation} \begin{equation} \mathcal{E}_{j}^{(\imath s_{1})}\mathcal{E}_{j}^{(\imath s_{2})} = \frac{G_{b}(-\imath bs_{1})G_{b}(-\imath bs_{2})}{G_{b}(-\imath bs_{1}-\imath bs_{2})}\mathcal{E}_{j}^{(\imath s_{1}+\imath s_{2})} = \mathcal{E}_{j}^{(\imath s_{2})}\mathcal{E}_{j}^{(\imath s_{1})}, \end{equation} \begin{equation} \mathcal{F}_{j}^{(\imath t_{1})}\mathcal{F}_{j}^{(\imath t_{2})} = \frac{G_{b}(-\imath bt_{1})G_{b}(-\imath bt_{2})}{G_{b}(-\imath bt_{1}-\imath bt_{2})}\mathcal{F}_{j}^{(\imath t_{1}+\imath t_{2})} = \mathcal{F}_{j}^{(\imath t_{2})}\mathcal{F}_{j}^{(\imath t_{1})}, \end{equation} \begin{equation} K_{i}^{\imath p}\mathcal{E}_{j}^{(\imath s)} = e^{-\pi\imath b^{2}a_{ij}ps}\mathcal{E}_{j}^{(\imath s)}K_{i}^{\imath p}, \end{equation} \begin{equation} K_{i}^{\imath p}\mathcal{F}_{j}^{(\imath t)} = e^{\pi\imath b^{2}a_{ij}pt}\mathcal{F}_{j}^{(\imath t)}K_{i}^{\imath p}. \end{equation} If $a_{ij} = 0$, then \begin{equation} \mathcal{E}_{i}^{(\imath s)}\mathcal{E}_{j}^{(\imath t)} = \mathcal{E}_{j}^{(\imath t)}\mathcal{E}_{i}^{(\imath s)}, \end{equation} \begin{equation} \mathcal{F}_{i}^{(\imath s)}\mathcal{F}_{j}^{(\imath t)} = \mathcal{F}_{j}^{(\imath t)}\mathcal{F}_{i}^{(\imath s)}.
\end{equation} For $a_{ij} = -1$ we have \begin{equation} \label{generalized Serre E} \mathcal{E}_{i}^{(\imath s)}\mathcal{E}_{j}^{(\imath t)} = e^{-\pi\imath b^{2}st}\int\limits_{\Gamma_{1}} d\tau e^{\frac{\pi\imath b^{2}\tau^{2}}{2}-\pi bQ\tau} \mathcal{E}_{j}^{(\imath t-\imath\tau)}\mathcal{E}_{ij}^{(\imath\tau)}\mathcal{E}_{i}^{(\imath s-\imath\tau)}, \end{equation} \begin{equation}\label{generalized Serre F} \mathcal{F}_{i}^{(\imath s)}\mathcal{F}_{j}^{(\imath t)} = e^{-\pi\imath b^{2}st}\int\limits_{\Gamma_{1}} d\tau e^{\frac{\pi\imath b^{2}\tau^{2}}{2}-\pi bQ\tau} \mathcal{F}_{j}^{(\imath t-\imath\tau)}\mathcal{F}_{ij}^{(\imath\tau)}\mathcal{F}_{i}^{(\imath s-\imath\tau)}, \end{equation} where the contour $\Gamma_{1}$ goes above the pole at $\tau = 0$ and below the poles at $\tau = s$, $\tau = t$.\\* For $i\ne j$ we have \begin{equation} \mathcal{E}_{i}^{(\imath s)}\mathcal{F}_{j}^{(\imath t)} = \mathcal{F}_{j}^{(\imath t)}\mathcal{E}_{i}^{(\imath s)}. \end{equation} For $i = j$ the generalized Kac's identity holds: \begin{equation}\label{generalized Kac's identity} \mathcal{E}_{j}^{(\imath s)}\mathcal{F}_{j}^{(\imath t)} = \int\limits_{\mathcal{C}} d\tau e^{\pi bQ\tau}\mathcal{F}_{j}^{(\imath t+\imath\tau)}K_{j}^{-\imath \tau} \frac{G_{b}(\imath b\tau)G_{b}(-bH_{j} + \imath b(s+t+\tau))}{G_{b}(-bH_{j}+\imath b(s+t+2\tau))}\mathcal{E}_{j}^{(\imath s + \imath \tau)}, \end{equation} where the contour $\mathcal{C}$ is defined in Theorem 3.1.\\* The coproduct consistent with the commutation relations has the form \begin{equation} \Delta K_{j}^{\imath p} = K_{j}^{\imath p}\otimes K_{j}^{\imath p}, \end{equation} \begin{equation} \Delta\mathcal{E}_{j}^{(\imath s)} = \int\limits_{\Gamma_{2}} d\tau \mathcal{E}_{j}^{(\imath s -\imath\tau)}K_{j}^{-\imath\tau}\otimes \mathcal{E}_{j}^{(\imath\tau)}, \end{equation} where $\Gamma_{2}$ goes above the pole at $\tau = 0$ and below the pole at $\tau = s$. \begin{equation} \Delta\mathcal{F}_{j}^{(\imath t)} = \int\limits_{\Gamma_{3}} d\tau \mathcal{F}_{j}^{(\imath\tau)}\otimes \mathcal{F}_{j}^{(\imath t-\imath\tau)}K_{j}^{\imath\tau}. \end{equation} Here $\Gamma_{3}$ goes above the pole at $\tau = 0$ and below the pole at $\tau = t$. \end{te} \begin{rem} The relations (\ref{generalized Serre E}), (\ref{generalized Serre F}) appeared in \cite{Ip}, eq.(6.16). The formula (\ref{generalized Kac's identity}) is new. \end{rem} \begin{cor} Let $\tilde{K}_{i} = K_{i}^{b^{-2}}$, $1\le i\le r$. Then all the elements $K_{i}$, $1\le i\le r$, and $\tilde{K}_{j}$, $1\le j\le r$, commute with each other. \end{cor} $\noindent {\it Proof}. $ The statement of the corollary follows straightforwardly from formulas (4.6), (4.7) if we take $\imath p_{1} = 1$, $\imath p_{2} = b^{-2}$. $\Box$ \begin{cor} Let $\tilde{\mathcal{E}}_{j} = \mathcal{E}_{j}^{b^{-2}}$ and $\tilde{\mathcal{F}}_{j} = \mathcal{F}_{j}^{b^{-2}}$, $1\le j\le r$. Then \begin{equation} \tilde{\mathcal{E}}_{j}\mathcal{E}_{j} = \mathcal{E}_{j}\tilde{\mathcal{E}}_{j}, \end{equation} \begin{equation} \tilde{\mathcal{F}}_{j}\mathcal{F}_{j} = \mathcal{F}_{j}\tilde{\mathcal{F}}_{j}, \end{equation} where $1\le j\le r$. \end{cor} $\noindent {\it Proof}. $ The statement of the corollary follows immediately from formulas (4.8), (4.9) in the case of $\imath s_{1} = 1$, $\imath s_{2} = b^{-2}$ and $\imath t_{1} = 1$, $\imath t_{2} = b^{-2}$.
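Let us note in passing that the last corollary is also an immediate consequence of the functional calculus for a single positive self-adjoint operator: by the group property of complex powers listed in Section 2 one has
\begin{equation}
\tilde{\mathcal{E}}_{j}\mathcal{E}_{j} = \mathcal{E}_{j}^{b^{-2}}\mathcal{E}_{j} = \mathcal{E}_{j}^{1+b^{-2}} = \mathcal{E}_{j}\mathcal{E}_{j}^{b^{-2}} = \mathcal{E}_{j}\tilde{\mathcal{E}}_{j},
\end{equation}
and similarly for $\mathcal{F}_{j}$.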
$\Box$ \begin{cor} The following relations hold \begin{equation} K_{i}\mathcal{E}_{j} = q^{a_{ij}}\mathcal{E}_{j}K_{i}, \end{equation} \begin{equation} K_{i}\mathcal{F}_{j} = q^{-a_{ij}}\mathcal{F}_{j}K_{i}, \end{equation} \begin{equation} \tilde{K}_{i}\tilde{\mathcal{E}}_{j} = \tilde{q}^{a_{ij}}\tilde{\mathcal{E}}_{j}\tilde{K}_{i}, \end{equation} \begin{equation} \tilde{K}_{i}\tilde{\mathcal{F}}_{j} = \tilde{q}^{-a_{ij}}\tilde{\mathcal{F}}_{j}\tilde{K}_{i}, \end{equation} \begin{equation} K_{i}\tilde{\mathcal{E}}_{j} = (-1)^{a_{ij}}\tilde{\mathcal{E}}_{j}K_{i}, \end{equation} \begin{equation} K_{i}\tilde{\mathcal{F}}_{j} = (-1)^{a_{ij}}\tilde{\mathcal{F}}_{j}K_{i}, \end{equation} \begin{equation} \tilde{K}_{i}\mathcal{E}_{j} = (-1)^{a_{ij}}\mathcal{E}_{j}\tilde{K}_{i}, \end{equation} \begin{equation} \tilde{K}_{i}\mathcal{F}_{j} = (-1)^{a_{ij}}\mathcal{F}_{j}\tilde{K}_{i}. \end{equation} \end{cor} $\noindent {\it Proof}. $ All these relations follow from (4.10), (4.11) if we take $\imath p$, $\imath s$, $\imath t$ to be equal to $1$ and $b^{-2}$ in different combinations. $\Box$ \begin{cor} Let $a_{ij} = 0$, $1\le i,j\le r$. Then \begin{equation} [\mathcal{E}_{i},\mathcal{E}_{j}] = [\tilde{\mathcal{E}}_{i},\tilde{\mathcal{E}}_{j}] = [\tilde{\mathcal{E}}_{i},\mathcal{E}_{j}] = [\mathcal{E}_{i},\tilde{\mathcal{E}}_{j}]= 0, \end{equation} \begin{equation} [\mathcal{F}_{i},\mathcal{F}_{j}] = [\tilde{\mathcal{F}}_{i},\tilde{\mathcal{F}}_{j}] = [\tilde{\mathcal{F}}_{i},\mathcal{F}_{j}] = [\mathcal{F}_{i},\tilde{\mathcal{F}}_{j}]= 0. \end{equation} \end{cor} $\noindent {\it Proof}. $ These relations are consequences of (4.12), (4.13) if we take $\imath s$, $\imath t$ to be equal to $1$ and $b^{-2}$ in different combinations. $\Box$ \begin{cor} Let $i\ne j$, $1\le i,j\le r$. Then \begin{equation} [\mathcal{E}_{i},\mathcal{F}_{j}] = [\tilde{\mathcal{E}}_{i},\tilde{\mathcal{F}}_{j}] = [\tilde{\mathcal{E}}_{i},\mathcal{F}_{j}] = [\mathcal{E}_{i},\tilde{\mathcal{F}}_{j}] = 0. \end{equation} \end{cor} $\noindent {\it Proof}. $ These relations follow from (4.16) if we take $\imath s$, $\imath t$ to be equal to $1$ and $b^{-2}$ in different combinations. $\Box$ \begin{cor} Let $a_{ij} = -1$ and let $N$, $M$ be non-negative integers. Then the following identities hold \begin{equation} \mathcal{E}_{i}^{N}\mathcal{E}_{j}^{M} = q^{NM}\sum\limits_{n=0}^{min(N,M)}(-1)^{n}q^{-\frac{n^{2}}{2}+n} \frac{\prod\limits_{k=1}^{N}(1-q^{-2k})\prod\limits_{k=1}^{M}(1-q^{-2k})} {\prod\limits_{k=1}^{N-n}(1-q^{-2k})\prod\limits_{k=1}^{M-n}(1-q^{-2k})\prod\limits_{k=1}^{n}(1-q^{-2k})} \mathcal{E}_{j}^{M-n}\mathcal{E}_{ij}^{n}\mathcal{E}_{i}^{N-n}, \end{equation} \begin{equation} \mathcal{F}_{i}^{N}\mathcal{F}_{j}^{M} = q^{NM}\sum\limits_{n=0}^{min(N,M)}(-1)^{n}q^{-\frac{n^{2}}{2}+n} \frac{\prod\limits_{k=1}^{N}(1-q^{-2k})\prod\limits_{k=1}^{M}(1-q^{-2k})} {\prod\limits_{k=1}^{N-n}(1-q^{-2k})\prod\limits_{k=1}^{M-n}(1-q^{-2k})\prod\limits_{k=1}^{n}(1-q^{-2k})} \mathcal{F}_{j}^{M-n}\mathcal{F}_{ij}^{n}\mathcal{F}_{i}^{N-n}. \end{equation} Substituting $N = M = 1$, one recovers the definition of non-simple roots \cite{Ip}, eq.(6.18) \begin{equation} \mathcal{E}_{ij} = \frac{q^{\frac{1}{2}}\mathcal{E}_{j}\mathcal{E}_{i}-q^{-\frac{1}{2}}\mathcal{E}_{i}\mathcal{E}_{j}}{q-q^{-1}}, \end{equation} \begin{equation} \mathcal{F}_{ij} = \frac{q^{\frac{1}{2}}\mathcal{F}_{j}\mathcal{F}_{i}-q^{-\frac{1}{2}}\mathcal{F}_{i}\mathcal{F}_{j}}{q-q^{-1}}. 
\end{equation} Substituting $N = 2$, $M = 1$ together with the definition of the non-simple roots, one recovers the $q$-deformed Serre relations \begin{equation} \mathcal{E}_{i}^{2}\mathcal{E}_{j} - (q+q^{-1})\mathcal{E}_{i}\mathcal{E}_{j}\mathcal{E}_{i} + \mathcal{E}_{j}\mathcal{E}_{i}^{2} = 0, \end{equation} for $a_{ij} = -1$, $1\le i,j\le r$. An analogous relation is true for $\mathcal{F}_{i}$, $\mathcal{F}_{j}$. \end{cor} $\noindent {\it Proof}. $ We rewrite the relation (\ref{generalized Serre E}) in terms of ordinary complex powers using (\ref{complex devided power}): $$ \mathcal{E}_{i}^{\imath s}\mathcal{E}_{j}^{\imath t} = e^{-\pi\imath b^{2}st}\int dz e^{\frac{\pi\imath b^{2}z^{2}}{2}-\pi bQz} \frac{G_{b}(-\imath bs+\imath bz)G_{b}(-\imath bz)}{G_{b}(-\imath bs)}\frac{G_{b}(-\imath bt+\imath bz)}{G_{b}(-\imath bt)} \mathcal{E}_{j}^{\imath t-\imath z}\mathcal{E}_{ij}^{\imath z}\mathcal{E}_{i}^{\imath s-\imath z}. $$ Let $\imath s = N$. Using the delta-distribution formula $$ \frac{G_{b}(x)G_{b}(-Nb-x)}{G_{b}(-Nb)} = \sum\limits_{n=0}^{N}\frac{\prod\limits_{k=1}^{N}(1-q^{-2k})}{\prod\limits_{k=1}^{n}(1-q^{-2k})\prod\limits_{k=1}^{N-n}(1-q^{-2k})} \delta(x+nb), $$ we evaluate the integral $$ \mathcal{E}_{i}^{N}\mathcal{E}_{j}^{\imath t} = e^{-\pi b^{2}Nt}\int dz e^{\frac{\pi\imath b^{2}z^{2}}{2}-\pi bQz} \frac{G_{b}(-Nb+\imath bz)G_{b}(-\imath bz)}{G_{b}(-Nb)}\frac{G_{b}(-\imath bt+\imath bz)}{G_{b}(-\imath bt)} \mathcal{E}_{j}^{\imath t-\imath z}\mathcal{E}_{ij}^{\imath z}\mathcal{E}_{i}^{N-\imath z} = $$ $$ e^{-\pi b^{2}Nt}\sum\limits_{n=0}^{N}\frac{\prod\limits_{k=1}^{N}(1-q^{-2k})}{\prod\limits_{k=1}^{n}(1-q^{-2k})\prod\limits_{k=1}^{N-n}(1-q^{-2k})} \int dz e^{\frac{\pi\imath b^{2}z^{2}}{2}-\pi bQz}\delta(-\imath bz+nb)\frac{G_{b}(-\imath bt+\imath bz)}{G_{b}(-\imath bt)} \mathcal{E}_{j}^{\imath t-\imath z}\mathcal{E}_{ij}^{\imath z}\mathcal{E}_{i}^{N-\imath z} = $$ $$ e^{-\pi b^{2}Nt}\sum\limits_{n=0}^{N}(-1)^{n}q^{-\frac{n^{2}}{2}+n} \frac{\prod\limits_{k=1}^{N}(1-q^{-2k})}{\prod\limits_{k=1}^{n}(1-q^{-2k})\prod\limits_{k=1}^{N-n}(1-q^{-2k})} \frac{G_{b}(-\imath bt+nb)}{G_{b}(-\imath bt)}\mathcal{E}_{j}^{\imath t-n}\mathcal{E}_{ij}^{n}\mathcal{E}_{i}^{N-n}. $$ Using the functional equation for $G_{b}(z)$ \begin{equation} \frac{G_{b}(x+nb)}{G_{b}(x)} = \prod\limits_{k =0}^{n-1}(1-q^{2k}e^{2\pi\imath bx}), \end{equation} we obtain $$ \mathcal{E}_{i}^{N}\mathcal{E}_{j}^{\imath t} = e^{-\pi b^{2}Nt}\sum\limits_{n=0}^{N}(-1)^{n}q^{-\frac{n^{2}}{2}+n} \frac{\prod\limits_{k=1}^{N}(1-q^{-2k})\prod\limits_{k=0}^{n-1}(1-q^{2k}e^{2\pi b^{2}t})}{\prod\limits_{k=1}^{n}(1-q^{-2k})\prod\limits_{k=1}^{N-n}(1-q^{-2k})} \mathcal{E}_{j}^{\imath t-n}\mathcal{E}_{ij}^{n}\mathcal{E}_{i}^{N-n}. $$ By taking $\imath t = M$ we prove the corollary: $$ \mathcal{E}_{i}^{N}\mathcal{E}_{j}^{M} = q^{NM} \sum\limits_{n=0}^{N}(-1)^{n}q^{-\frac{n^{2}}{2}+n} \frac{\prod\limits_{k=1}^{N}(1-q^{-2k})\prod\limits_{k=0}^{n-1}(1-q^{-2(M-k)})} {\prod\limits_{k=1}^{n}(1-q^{-2k})\prod\limits_{k=1}^{N-n}(1-q^{-2k})} \mathcal{E}_{j}^{M-n}\mathcal{E}_{ij}^{n}\mathcal{E}_{i}^{N-n} = $$ $$ q^{NM}\sum\limits_{n=0}^{min(N,M)}(-1)^{n}q^{-\frac{n^{2}}{2}+n} \frac{\prod\limits_{k=1}^{N}(1-q^{-2k})\prod\limits_{k=1}^{M}(1-q^{-2k})} {\prod\limits_{k=1}^{N-n}(1-q^{-2k})\prod\limits_{k=1}^{M-n}(1-q^{-2k})\prod\limits_{k=1}^{n}(1-q^{-2k})} \mathcal{E}_{j}^{M-n}\mathcal{E}_{ij}^{n}\mathcal{E}_{i}^{N-n}.
$$ Now, let $\imath t = b^{-2}$, $N = 1$ to prove the second corollary $$ \mathcal{E}_{i}\mathcal{E}_{j}^{b^{-2}} = -(\mathcal{E}_{j}^{b^{-2}}\mathcal{E}_{i} - q^{-\frac{1}{2}+1}(1-e^{-2\pi\imath})\mathcal{E}_{j}^{b^{-2}-1}\mathcal{E}_{ij}) = -\mathcal{E}_{j}^{b^{-2}}\mathcal{E}_{i}. $$ Analogously one can prove the same relations for $\mathcal{F}_{j}$, $1\le j\le r$. $\Box$ In the proof of the previous corollary we have also obtained the following \begin{cor} Let $a_{ij} = -1$ and let $\tilde{\mathcal{E}}_{j} = \mathcal{E}_{j}^{b^{-2}}$, $\tilde{\mathcal{F}}_{j} = \mathcal{F}_{j}^{b^{-2}}$, $1\le j\le r$. Then the following identities hold \begin{equation} \mathcal{E}_{i}\tilde{\mathcal{E}}_{j} = -\tilde{\mathcal{E}}_{j}\mathcal{E}_{i}, \end{equation} \begin{equation} \mathcal{F}_{i}\tilde{\mathcal{F}}_{j} = -\tilde{\mathcal{F}}_{j}\mathcal{F}_{i}. \end{equation} \end{cor} \begin{cor} Let $a_{ij} = -1$ and let $\tilde{\mathcal{E}}_{j} = \mathcal{E}_{j}^{b^{-2}}$, $\tilde{\mathcal{F}}_{j} = \mathcal{F}_{j}^{b^{-2}}$, $\tilde{\mathcal{E}}_{ij} = \mathcal{E}_{ij}^{b^{-2}}$, $\tilde{\mathcal{F}}_{ij} = \mathcal{F}_{ij}^{b^{-2}}$, $1\le i,j\le r$. Then the following identities hold \begin{equation} \tilde{\mathcal{E}}_{i}^{N}\tilde{\mathcal{E}}_{j}^{M} = \tilde{q}^{NM}\sum\limits_{n=0}^{min(N,M)}(-1)^{n}\tilde{q}^{-\frac{n^{2}}{2}+n} \frac{\prod\limits_{k=1}^{N}(1-\tilde{q}^{-2k})\prod\limits_{k=1}^{M}(1-\tilde{q}^{-2k})} {\prod\limits_{k=1}^{N-n}(1-\tilde{q}^{-2k})\prod\limits_{k=1}^{M-n}(1-\tilde{q}^{-2k})\prod\limits_{k=1}^{n}(1-\tilde{q}^{-2k})} \tilde{\mathcal{E}}_{j}^{M-n}\tilde{\mathcal{E}}_{ij}^{n}\tilde{\mathcal{E}}_{i}^{N-n}, \end{equation} \begin{equation} \tilde{\mathcal{F}}_{i}^{N}\tilde{\mathcal{F}}_{j}^{M} = \tilde{q}^{NM}\sum\limits_{n=0}^{min(N,M)}(-1)^{n}\tilde{q}^{-\frac{n^{2}}{2}+n} \frac{\prod\limits_{k=1}^{N}(1-\tilde{q}^{-2k})\prod\limits_{k=1}^{M}(1-\tilde{q}^{-2k})} {\prod\limits_{k=1}^{N-n}(1-\tilde{q}^{-2k})\prod\limits_{k=1}^{M-n}(1-\tilde{q}^{-2k})\prod\limits_{k=1}^{n}(1-\tilde{q}^{-2k})} \tilde{\mathcal{F}}_{j}^{M-n}\tilde{\mathcal{F}}_{ij}^{n}\tilde{\mathcal{F}}_{i}^{N-n}. \end{equation} Taking in these formulas $N = M = 1$ one obtains the following expressions for the dual non-simple roots \begin{equation} \tilde{\mathcal{E}}_{ij} = \frac{\tilde{q}^{\frac{1}{2}}\tilde{\mathcal{E}}_{j}\tilde{\mathcal{E}}_{i}-\tilde{q}^{-\frac{1}{2}}\tilde{\mathcal{E}}_{i}\tilde{\mathcal{E}}_{j}}{\tilde{q}-\tilde{q}^{-1}}, \end{equation} \begin{equation} \tilde{\mathcal{F}}_{ij} = \frac{\tilde{q}^{\frac{1}{2}}\tilde{\mathcal{F}}_{j}\tilde{\mathcal{F}}_{i}-\tilde{q}^{-\frac{1}{2}}\tilde{\mathcal{F}}_{i}\tilde{\mathcal{F}}_{j}}{\tilde{q}-\tilde{q}^{-1}}. \end{equation} Taking $N = 2$, $M = 1$ and using these expressions one obtains the dual $q$-deformed Serre relations \begin{equation} \tilde{\mathcal{E}}_{i}^{2}\tilde{\mathcal{E}}_{j} - (\tilde{q}+\tilde{q}^{-1})\tilde{\mathcal{E}}_{i}\tilde{\mathcal{E}}_{j}\tilde{\mathcal{E}}_{i} + \tilde{\mathcal{E}}_{j}\tilde{\mathcal{E}}_{i}^{2} = 0, \end{equation} for $a_{ij} = -1$, $1\le i,j\le r$. Similar relation is valid for $\tilde{\mathcal{F}}_{i}$, $\tilde{\mathcal{F}}_{j}$. \end{cor} $\noindent {\it Proof}. $ The proof is similar to the proof of analogous relation for untilded generators. 
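More precisely (we only indicate the necessary modification), one sets $\imath s = Nb^{-2}$, $\imath t = Mb^{-2}$ in (\ref{generalized Serre E}), (\ref{generalized Serre F}), so that the arguments of the quantum dilogarithms are shifted by integer multiples of $b^{-1}$ instead of $b$, and applies the delta-distribution formula (\ref{delta}) with $N_{1} = 0$; as a result all factors involving $q$ in the computation above are replaced by the corresponding factors involving $\tilde{q}$.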
$\Box$ \begin{cor} The following formulas for the coproduct hold: \begin{equation} \Delta K_{j} = K_{j}\otimes K_{j}, \end{equation} \begin{equation} \Delta \mathcal{E}_{j} = \mathcal{E}_{j}\otimes 1 + K_{j}^{-1}\otimes \mathcal{E}_{j}, \end{equation} \begin{equation} \Delta \mathcal{F}_{j} = 1\otimes \mathcal{F}_{j} + \mathcal{F}_{j}\otimes K_{j}, \end{equation} \begin{equation} \Delta \tilde{K}_{j} = \tilde{K}_{j}\otimes \tilde{K}_{j}, \end{equation} \begin{equation} \Delta \tilde{\mathcal{E}}_{j} = \tilde{\mathcal{E}}_{j}\otimes 1 + \tilde{K}_{j}^{-1}\otimes \tilde{\mathcal{E}}_{j}, \end{equation} \begin{equation} \Delta \tilde{\mathcal{F}}_{j} = 1\otimes \tilde{\mathcal{F}}_{j} + \tilde{\mathcal{F}}_{j}\otimes \tilde{K}_{j}. \end{equation} \end{cor} $\noindent {\it Proof}. $ The coproduct formulas for the generators $K_{j}$, $1\le j\le r$, are trivial consequences of formula (4.18) for $\imath p$ equal to $1$ and $b^{-2}$. The remaining expressions follow from (4.19), (4.20) for $\imath s$, $\imath t$ equal to $1$ and $b^{-2}$ by evaluating the integral using the delta-distribution formula (\ref{delta}). $\Box$ \section{Modular double $M_{q\tilde{q}}(\mathfrak{g})$} Let $q = e^{\pi\imath b^{2}}$, $\tilde{q} = e^{\pi\imath b^{-2}}$, $b^{2}\in \mathbb{R}\setminus\mathbb{Q}$ and let $K_{j}$, $E_{j}$, $F_{j}$, $1\le j\le r$ be the generators of the quantum group $U_{q}(\mathfrak{g})$. The rescaled generators are defined by \begin{equation} \mathcal{E}_{j} = -\imath (q-q^{-1})E_{j}, \end{equation} \begin{equation} \mathcal{F}_{j} = -\imath (q-q^{-1})F_{j}. \end{equation} The second set of generators $\tilde{K}_{j}$, $\tilde{E}_{j}$, $\tilde{F}_{j}$, $1\le j\le r$ is defined as follows: \begin{equation}\label{first transcendental relation} \tilde{K}_{j} = K_{j}^{b^{-2}}, \end{equation} \begin{equation} \tilde{E}_{j} = \frac{\imath}{\tilde{q}-\tilde{q}^{-1}}\mathcal{E}_{j}^{b^{-2}}, \end{equation} \begin{equation}\label{last transcendental relation} \tilde{F}_{j} = \frac{\imath}{\tilde{q}-\tilde{q}^{-1}}\mathcal{F}_{j}^{b^{-2}}. \end{equation} \begin{te} The elements $K_{j}$, $E_{j}$, $F_{j}$ and $\tilde{K}_{j}$, $\tilde{E}_{j}$, $\tilde{F}_{j}$, $1\le j\le r$, generate a Hopf subalgebra of $A(\mathfrak{g})$, with the first set of generators satisfying the relations of $U_{q}(\mathfrak{g})$, the second set of generators satisfying the relations of $U_{\tilde{q}}(\mathfrak{g})$, and both sets of generators satisfying the cross-relations: \begin{equation}\label{first cross-relation} K_{i}\tilde{K}_{j} = \tilde{K}_{j}K_{i}, \end{equation} \begin{equation} K_{i}\tilde{E}_{j} = (-1)^{a_{ij}}\tilde{E}_{j}K_{i}, \end{equation} \begin{equation} \tilde{K}_{i}E_{j} = (-1)^{a_{ij}}E_{j}\tilde{K}_{i}, \end{equation} \begin{equation} K_{i}\tilde{F}_{j} = (-1)^{a_{ij}}\tilde{F}_{j}K_{i}, \end{equation} \begin{equation} \tilde{K}_{i}F_{j} = (-1)^{a_{ij}}F_{j}\tilde{K}_{i}, \end{equation} \begin{equation} E_{i}\tilde{E}_{j} = (-1)^{a_{ij}}\tilde{E}_{j}E_{i}, \end{equation} \begin{equation} F_{i}\tilde{F}_{j} = (-1)^{a_{ij}}\tilde{F}_{j}F_{i}, \end{equation} \begin{equation} E_{i}\tilde{F}_{j} = \tilde{F}_{j}E_{i}, \end{equation} \begin{equation}\label{last cross-relation} \tilde{E}_{i}F_{j} = F_{j}\tilde{E}_{i}, \end{equation} where $1\le i,j\le r$. \end{te} $\noindent {\it Proof}. $ The results of Corollaries 3.1-3.3 and Corollaries 4.1-4.9 give the statement of the theorem.
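More precisely (for the convenience of the reader we indicate the correspondence; recall that $E_{j}$, $F_{j}$, $\tilde{E}_{j}$, $\tilde{F}_{j}$ differ from $\mathcal{E}_{j}$, $\mathcal{F}_{j}$, $\tilde{\mathcal{E}}_{j}$, $\tilde{\mathcal{F}}_{j}$ only by nonzero scalar factors, so all commutation and sign relations carry over): the commutativity of the elements $K_{i}$, $\tilde{K}_{j}$ is Corollary 4.1; the relations between the $K$'s, $\tilde{K}$'s and the generators of the other copy are contained in Corollary 4.3; the cross-relations between $E_{i}$ and $\tilde{E}_{j}$ (and between $F_{i}$ and $\tilde{F}_{j}$) follow from Corollary 4.2 for $i = j$, from Corollary 4.4 for $a_{ij} = 0$ and from Corollary 4.7 for $a_{ij} = -1$; the relations $E_{i}\tilde{F}_{j} = \tilde{F}_{j}E_{i}$ and $\tilde{E}_{i}F_{j} = F_{j}\tilde{E}_{i}$ follow from Corollary 3.3 for $i = j$ and from Corollary 4.5 for $i\ne j$; the relations of $U_{\tilde{q}}(\mathfrak{g})$ are collected in Corollaries 3.2, 4.3, 4.4 and 4.8, and the coproduct is given by Corollary 4.9.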
$\Box$ The defining relations of the modular double have been observed in representations in \cite{F2} for the case of $U_{q}(\mathfrak{sl}(2))$, in \cite{GKL2} for the case of $U_{q}(\mathfrak{gl}(N))$ and in \cite{Ip2} for the case of other Lie algebras $\mathfrak{g}$. \begin{de} The modular double $M_{q\tilde{q}}(\mathfrak{g})$ is the Hopf algebra generated by the generators $K_{j}$, $E_{j}$, $F_{j}$, $1\le j\le r$ satisfying the relations of $U_{q}(\mathfrak{g})$ and the generators $\tilde{K}_{j}$, $\tilde{E}_{j}$, $\tilde{F}_{j}$, $1\le j\le r$ satisfying the relations of $U_{\tilde{q}}(\mathfrak{g})$, with the cross-relations (\ref{first cross-relation})-(\ref{last cross-relation}). The transcendental relations (\ref{first transcendental relation})-(\ref{last transcendental relation}) are not imposed. \end{de} Note that in this algebraic definition of the modular double we do not require the transcendental relations between the two sets of generators. The modular double defined in such a way has two types of representations. Below we give an example of a representation of the modular double $M_{q\tilde{q}}(\mathfrak{gl}(N))$ for which there are no transcendental relations. \\* Let us introduce a more general parametrization $q = e^{\frac{\pi\imath\omega_{1}}{\omega_{2}}}$, $\tilde{q} = e^{\frac{\pi\imath\omega_{2}}{\omega_{1}}}$. This notation reduces to the one used in this paper if we set $\omega_{1} = b$, $\omega_{2} = b^{-1}$. $U_{q}(\mathfrak{gl}(N))$ is generated by the elements $K_{n}$, $n = 1,...,N$ and $E_{n,n+1}$, $E_{n+1,n}$, $n = 1,...,N-1$ subjected to the following set of relations \begin{equation} E_{n,n+1}E_{m+1,m} -E_{m+1,m}E_{n,n+1} = \delta_{nm}\frac{K_{n}K_{n+1}^{-1}-K_{n}^{-1}K_{n+1}}{q-q^{-1}}, \end{equation} \begin{equation} K_{n}E_{m,m+1} = q^{\delta_{nm}-\delta_{n,m+1}}E_{m,m+1}K_{n}, \end{equation} \begin{equation} K_{n}E_{m+1,m} = q^{\delta_{n,m+1}-\delta_{nm}}E_{m+1,m}K_{n}, \end{equation} together with the quantum analogues of the Serre relations \begin{equation} E_{n,n+1}E_{m,m+1} - E_{m,m+1}E_{n,n+1} =0, \quad m\ne n\pm 1, \end{equation} \begin{equation} E_{n,n+1}^{2}E_{n+1,n+2} - (q+q^{-1})E_{n,n+1}E_{n+1,n+2}E_{n,n+1}+E_{n+1,n+2}E_{n,n+1}^{2} =0, \end{equation} \begin{equation} E_{n+1,n+2}^{2}E_{n,n+1} - (q+q^{-1})E_{n+1,n+2}E_{n,n+1}E_{n+1,n+2}+E_{n,n+1}E_{n+1,n+2}^{2} =0, \end{equation} \begin{equation} E_{n+1,n}E_{m+1,m} - E_{m+1,m}E_{n+1,n} =0, \quad m\ne n\pm 1, \end{equation} \begin{equation} E_{n+1,n}^{2}E_{n+2,n+1} - (q+q^{-1})E_{n+1,n}E_{n+2,n+1}E_{n+1,n}+E_{n+2,n+1}E_{n+1,n}^{2} =0, \end{equation} \begin{equation} E_{n+2,n+1}^{2}E_{n+1,n} - (q+q^{-1})E_{n+2,n+1}E_{n+1,n}E_{n+2,n+1}+E_{n+1,n}E_{n+2,n+1}^{2} =0.
\end{equation} Analogously, the dual quantum group $U_{\tilde{q}}(\mathfrak{gl}(N))$ is generated by the elements $\tilde{K}_{n}$, $n = 1,...,N$ and $\tilde{E}_{n,n+1}$, $\tilde{E}_{n+1,n}$, $n = 1,...,N-1$ subjected to the relations \begin{equation} \tilde{E}_{n,n+1}\tilde{E}_{m+1,m} -\tilde{E}_{m+1,m}\tilde{E}_{n,n+1} = \delta_{nm}\frac{\tilde{K}_{n}\tilde{K}_{n+1}^{-1}-\tilde{K}_{n}^{-1}\tilde{K}_{n+1}}{\tilde{q}-\tilde{q}^{-1}}, \end{equation} \begin{equation} \tilde{K}_{n}\tilde{E}_{m,m+1} = \tilde{q}^{\delta_{nm}-\delta_{n,m+1}}\tilde{E}_{m,m+1}\tilde{K}_{n}, \end{equation} \begin{equation} \tilde{K}_{n}\tilde{E}_{m+1,m} = \tilde{q}^{\delta_{n,m+1}-\delta_{nm}}\tilde{E}_{m+1,m}\tilde{K}_{n}, \end{equation} together with quantum analogues of Serre relations \begin{equation} \tilde{E}_{n,n+1}\tilde{E}_{m,m+1} - \tilde{E}_{m,m+1}\tilde{E}_{n,n+1} =0, m\ne n\pm 1, \end{equation} \begin{equation} \tilde{E}_{n,n+1}^{2}\tilde{E}_{n+1,n+2} - (\tilde{q}+\tilde{q}^{-1})\tilde{E}_{n,n+1}\tilde{E}_{n+1,n+2}\tilde{E}_{n,n+1}+\tilde{E}_{n+1,n+2}\tilde{E}_{n,n+1}^{2} =0, \end{equation} \begin{equation} \tilde{E}_{n+1,n+2}^{2}\tilde{E}_{n,n+1} - (\tilde{q}+\tilde{q}^{-1})\tilde{E}_{n+1,n+2}\tilde{E}_{n,n+1}\tilde{E}_{n+1,n+2}+\tilde{E}_{n,n+1}\tilde{E}_{n+1,n+2}^{2} =0, \end{equation} \begin{equation} \tilde{E}_{n+1,n}\tilde{E}_{m+1,m} - \tilde{E}_{m+1,m}\tilde{E}_{n+1,n} =0, m\ne n\pm 1, \end{equation} \begin{equation} \tilde{E}_{n+1,n}^{2}\tilde{E}_{n+2,n+1} - (\tilde{q}+\tilde{q}^{-1})\tilde{E}_{n+1,n}\tilde{E}_{n+2,n+1}\tilde{E}_{n+1,n}+\tilde{E}_{n+2,n+1}\tilde{E}_{n+1,n}^{2} =0, \end{equation} \begin{equation} \tilde{E}_{n+2,n+1}^{2}\tilde{E}_{n+1,n} - (\tilde{q}+\tilde{q}^{-1})\tilde{E}_{n+2,n+1}\tilde{E}_{n+1,n}\tilde{E}_{n+2,n+1}+\tilde{E}_{n+1,n}\tilde{E}_{n+2,n+1}^{2} =0. \end{equation} The modular double $M_{q\tilde{q}}(\mathfrak{gl}(N))$ is generated by the generators of $U_{q}(\mathfrak{gl}(N))$ and generators of $U_{\tilde{q}}(\mathfrak{gl}(N))$ with the following cross-relations: \begin{equation} E_{n,n+1}\tilde{K}_{m} = (-1)^{\delta_{nm}+\delta_{n,m-1}}\tilde{K}_{m}E_{n,n+1}, \end{equation} \begin{equation} E_{n+1,n}\tilde{K}_{m} = (-1)^{\delta_{nm}+\delta_{n,m-1}}\tilde{K}_{m}E_{n+1,n}, \end{equation} \begin{equation} E_{n,n+1}\tilde{E}_{m,m+1} = (-1)^{\delta_{n,m+1}+\delta_{n+1,m}}\tilde{E}_{m,m+1}E_{n,n+1}, \end{equation} \begin{equation} E_{n,n+1}\tilde{E}_{m+1,m} = \tilde{E}_{m+1,m}E_{n,n+1}, \end{equation} \begin{equation} E_{n+1,n}\tilde{E}_{m+1,m} = (-1)^{\delta_{n,m-1}+\delta_{n-1,m}}\tilde{E}_{m+1,m}E_{n+1,n}, \end{equation} \begin{equation} \tilde{E}_{n,n+1}K_{m} = (-1)^{\delta_{nm}+\delta_{n,m-1}}K_{m}\tilde{E}_{n,n+1}, \end{equation} \begin{equation} \tilde{E}_{n+1,n}K_{m} = (-1)^{\delta_{nm}+\delta_{n,m-1}}K_{m}\tilde{E}_{n+1,n}, \end{equation} \begin{equation} \tilde{E}_{n,n+1}E_{m+1,m} = E_{m+1,m}\tilde{E}_{n,n+1}. \end{equation} \begin{te}\cite{GKL2} Let $q = e^{\frac{\pi\imath\omega_{1}}{\omega_{2}}}$ and $\tilde{q} = e^{\frac{\pi\imath\omega_{2}}{\omega_{1}}}$. 
The following operators define a representation of modular double $M_{q\tilde{q}}(\mathfrak{gl}(N))$ \begin{equation} E_{n,n+1} = \frac{2e^{\frac{\pi\imath(\omega_{1}+\omega_{2})(n-1)}{2\omega_{2}}}}{q-q^{-1}} \sum\limits_{j = 1}^{n} \frac{\prod\limits_{r = 1}^{n+1}\sinh\frac{\pi}{\omega_{2}}(\gamma_{nj}-\gamma_{n+1,r}-\frac{\imath}{2}(\omega_{1}+\omega_{2}))} {\prod\limits_{s \ne j}^{n}\sinh\frac{\pi}{\omega_{2}}(\gamma_{nj}-\gamma_{ns})} e^{-\imath\omega_{1}\partial_{\gamma_{nj}}}, \end{equation} \begin{equation} E_{n+1,n} = \frac{2e^{-\frac{\pi\imath(\omega_{1}+\omega_{2})(n-1)}{2\omega_{2}}}}{q-q^{-1}} \sum\limits_{j=1}^{n} \frac{\prod\limits_{r=1}^{n-1} \sinh\frac{\pi}{\omega_{2}}(\gamma_{nj}-\gamma_{n-1,r}+\frac{\imath}{2}(\omega_{1}+\omega_{2}))} {\prod\limits_{s\ne j}^{n}\sinh\frac{\pi}{\omega_{2}}(\gamma_{nj}-\gamma_{ns})} e^{\imath\omega_{1}\partial_{\gamma_{nj}}}, \end{equation} \begin{equation} K_{n} = e^{\frac{\pi}{\omega_{2}}(\sum\limits_{j=1}^{n}\gamma_{nj}-\sum\limits_{j=1}^{n-1}\gamma_{n-1,j})}, \end{equation} \begin{equation} \tilde{E}_{n,n+1} = \frac{2e^{\frac{\pi\imath(\omega_{1}+\omega_{2})(n-1)}{2\omega_{1}}}}{\tilde{q}-\tilde{q}^{-1}} \sum\limits_{j = 1}^{n} \frac{\prod\limits_{r = 1}^{n+1}\sinh\frac{\pi}{\omega_{1}}(\gamma_{nj}-\gamma_{n+1,r}-\frac{\imath}{2}(\omega_{1}+\omega_{2}))} {\prod\limits_{s \ne j}^{n}\sinh\frac{\pi}{\omega_{1}}(\gamma_{nj}-\gamma_{ns})} e^{-\imath\omega_{2}\partial_{\gamma_{nj}}}, \end{equation} \begin{equation} \tilde{E}_{n+1,n} = \frac{2e^{-\frac{\pi\imath(\omega_{1}+\omega_{2})(n-1)}{2\omega_{1}}}}{\tilde{q}-\tilde{q}^{-1}} \sum\limits_{j=1}^{n} \frac{\prod\limits_{r=1}^{n-1} \sinh\frac{\pi}{\omega_{1}}(\gamma_{nj}-\gamma_{n-1,r}+\frac{\imath}{2}(\omega_{1}+\omega_{2}))} {\prod\limits_{s\ne j}^{n}\sinh\frac{\pi}{\omega_{1}}(\gamma_{nj}-\gamma_{ns})} e^{\imath\omega_{2}\partial_{\gamma_{nj}}}, \end{equation} \begin{equation} \tilde{K}_{n} = e^{\frac{\pi}{\omega_{1}}(\sum\limits_{j=1}^{n}\gamma_{nj}-\sum\limits_{j=1}^{n-1}\gamma_{n-1,j})}. \end{equation} \end{te} $\noindent {\it Proof}. $ This is the same representation as was introduced in \cite{GKL2}, (sections 3.2-3.3) if one replaces $2\pi$ by $\pi$. After this replacement it is simple to check that the cross-relations instead of commutativity relations appear. $\Box$ There is another type of realization of the principal series representations \cite{FrIp}, \cite{Ip2} which are $q$-analogues of principal series representations of universal enveloping algebra of semisimple Lie algebra $\mathfrak{g}$ in Lusztig's parametrization. The classical limit of these representations was introduced earlier in \cite{GLO} in sections 2.4.1-2.4.4 for classical series of Lie algebras. \section{Appendix} \subsection{Quantum dilogarithm and its properties} The basic properties of non-compact quantum dilogarithm/double sine listed below are extracted mainly from \cite{KLSTS}, \cite{BT}, \cite{V}. Introduce the following notation $q = e^{\pi\imath b^{2}}$, $\tilde{q} = e^{\pi\imath b^{-2}}$, $Q = b+b^{-1}$, $\zeta_{b} = e^{\frac{\pi\imath}{4} + \frac{\pi\imath(b^{2}+b^{-2})}{12}}$.\\* \\* \textbf{The integral representation of $G_{b}(z)$:} \begin{equation} \log G_{b}(z) = \log\bar{\zeta}_{b} - \int\limits_{\mathbb{R}+\imath 0} \frac{dt}{t}\frac{e^{zt}}{(1-e^{bt})(1-e^{b^{-1}t})}. \end{equation} \textbf{Noncompact analog of $q$-exponential $g_{b}(z)$:} \begin{equation} g_{b}(z) = \frac{\bar{\zeta}_{b}}{G_{b}(\frac{Q}{2}+\frac{1}{2\pi\imath b}\log z)}. 
\end{equation} \textbf{Product representation:} \begin{equation} G_{b}(x) = \bar{\zeta}_{b}\frac{\prod\limits_{n=1}^{\infty}(1-e^{2\pi\imath b^{-1}(x-nb^{-1})})}{\prod\limits_{n=0}^{\infty}(1-e^{2\pi\imath b(x+nb)})}, \end{equation} \begin{equation} g_{b}(x) = \frac{\prod\limits_{n=0}^{\infty}(1+xq^{2n+1})}{\prod\limits_{n=0}^{\infty}(1+x^{b^{-2}}\tilde{q}^{-2n-1})}. \end{equation} \textbf{Functional equations:} \begin{equation} G_{b}(x +b^{\pm 1}) = (1-e^{2\pi\imath b^{\pm 1}x})G_{b}(x), \end{equation} or more generally \begin{equation} \label{func eq} \frac{G_{b}(x+n_{1}b+n_{2}b^{-1})}{G_{b}(x)} = \prod\limits_{k_{1} =0}^{n_{1}-1}(1-q^{2k_{1}}e^{2\pi\imath bx})\prod\limits_{k_{2} =0}^{n_{2}-1}(1-\tilde{q}^{2k_{2}}e^{2\pi\imath b^{-1}x}), \end{equation} \begin{equation} g_{b}(q^{-1}x) = (1+x)g_{b}(qx). \end{equation} \textbf{Reflection formula:} \begin{equation}\label{reflection} G_{b}(x)G_{b}(Q-x) = e^{\pi\imath x(x-Q)}. \end{equation} \textbf{Poles and zeros:} \begin{equation}\begin{split}\label{Poles} \lim_{x\rightarrow 0}xG_{b}(x-n_{1}b-n_{2}b^{-1}) = \frac{1}{2\pi}\prod\limits_{k_{1}=1}^{n_{1}}(1-q^{-2k_{1}})^{-1}\prod\limits_{k_{2}=1}^{n_{2}}(1-\tilde{q}^{-2k_{2}})^{-1}, \\ \lim_{x\rightarrow 0}xG_{b}^{-1}(x+Q+n_{1}b+n_{2}b^{-1}) = \\ \frac{1}{2\pi}(-1)^{n_{1}+n_{2}+1}q^{-n_{1}(n_{1}+1)}\tilde{q}^{-n_{2}(n_{2}+1)}\prod\limits_{k_{1}=1}^{n_{1}}(1-q^{-2k_{1}})^{-1}\prod\limits_{k_{2}=1}^{n_{2}}(1-\tilde{q}^{-2k_{2}})^{-1}. \end{split}\end{equation} \textbf{Tau-binomial integral} \cite{FKV},\cite{Ka},\cite{PT}: \begin{equation}\label{tau-integral} \int\limits_{\mathcal{C}} d\tau e^{-2\pi b\beta\tau}\frac{G_{b}(\alpha+\imath b\tau)}{G_{b}(Q+\imath b\tau)} = \frac{G_{b}(\alpha)G_{b}(\beta)}{G_{b}(\alpha+\beta)}, \end{equation} where the contour $\mathcal{C}$ goes along the real axis above the sequences of poles going down and below sequences of poles going up.\\* \\* \textbf{Delta distributions \cite{Ip}:} \begin{equation}\begin{split} \label{delta} \frac{G_{b}(x)G_{b}(-N_{1}b-N_{2}b^{-1}-x)}{G_{b}(-N_{1}b-N_{2}b^{-1})} = \sum\limits_{n_{1}=0}^{N_{1}}\sum\limits_{n_{2}=0}^{N_{2}}\frac{\prod\limits_{k_{1}=1}^{N_{1}}(1-q^{-2k_{1}})}{\prod\limits_{k_{1}=1}^{n_{1}}(1-q^{-2k_{1}})\prod\limits_{k_{1}=1}^{N_{1}-n_{1}}(1-q^{-2k_{1}})} \\ \times \frac{\prod\limits_{k_{2}=1}^{N_{2}}(1-\tilde{q}^{-2k_{2}})}{\prod\limits_{k_{2}=1}^{n_{2}}(1-\tilde{q}^{-2k_{2}})\prod\limits_{k_{2}=1}^{N_{2}-n_{2}}(1-\tilde{q}^{-2k_{2}})} \delta(x+n_{1}b+n_{2}b^{-1}). 
\end{split}\end{equation} \\* \textbf{ 4-5 relation \cite{V}:} \begin{equation}\label{4-5} \int\limits_{\mathcal{C}} d\tau e^{2\pi\imath(\imath\alpha+\tau)(\imath\beta+\tau)}G_{b}(\alpha-\imath\tau)G_{b}(\beta-\imath\tau)G_{b}(\gamma+\imath\tau)G_{b}(\imath\tau) = \frac{G_{b}(\alpha)G_{b}(\beta)G_{b}(\alpha+\gamma)G_{b}(\beta+\gamma)}{G_{b}(\alpha+\beta+\gamma)}, \end{equation} where the contour $\mathcal{C}$ goes along the real axis above the sequences of poles going down and below sequences of poles going up.\\* \\* \textbf{6-9 identity \cite{V}:} \begin{equation}\begin{split} \label{6-9} \frac{G_{b}(A)G_{b}(B)G_{b}(C)G_{b}(A+D)G_{b}(B+D)G_{b}(C+D)} {G_{b}(A+B+D)G_{b}(A+C+D)G_{b}(B+C+D)} = \\ \int\limits_{\mathcal{C}} d\tau e^{2\pi\imath\tau^{2}-2\pi D\tau} \frac{G_{b}(A+\imath\tau)G_{b}(B+\imath\tau)G_{b}(C+\imath\tau)G_{b}(D-\imath\tau)G_{b}(-\imath\tau)}{G_{b}(A+B+C+D+\imath\tau)}, \end{split}\end{equation} where the contour $\mathcal{C}$ goes along the real axis above the sequences of poles going down and below sequences of poles going up.\\* \\* \textbf{$q$-binomial theorem \cite{BT}:}\\* Let $u$, $v$ be positive self-adjoint operators subject to the relations $uv = q^{2}vu$. Then: \begin{equation}\label{q-binomial} (u+v)^{\imath s} = \int\limits_{\mathcal{C}} d\tau \frac{G_{b}(-\imath b\tau)G_{b}(-\imath bs+\imath b\tau)}{G_{b}(-\imath bs)}u^{\imath s-\imath\tau}v^{\imath\tau}, \end{equation} where the contour $\mathcal{C}$ goes along the real axis above the sequences of poles going down and below sequences of poles going up. \newpage
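\\*
\\*
\textbf{Integer powers (a consistency check):}\\*
For $\imath s = N$ a non-negative integer the $q$-binomial theorem (\ref{q-binomial}) collapses, by means of the delta-distribution formula (\ref{delta}) with $N_{2} = 0$ (in the same way as in the proof of Corollary 3.1), to the finite sum
\begin{equation}
(u+v)^{N} = \sum\limits_{n=0}^{N} \frac{\prod\limits_{k=1}^{N}(1-q^{-2k})}{\prod\limits_{k=1}^{n}(1-q^{-2k})\prod\limits_{k=1}^{N-n}(1-q^{-2k})}\, u^{N-n}v^{n}.
\end{equation}
For example, for $N = 2$ this reads $(u+v)^{2} = u^{2} + (1+q^{-2})uv + v^{2}$, in agreement with the relation $uv = q^{2}vu$.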
{"url":"https:\/\/physics.stackexchange.com\/questions\/280099\/why-is-the-total-heat-capacity-an-intensive-path-function","text":"# Why is the \u201ctotal heat capacity\u201d an intensive path function?\n\nMy textbook states the following:\n\n\"The total heat capacity, $C$ (Heat required to raise the temperature of the system by 1\u00b0C) is an intensive path function. On the other hand, $C_V$(Molar heat capacity at constant volume) and $C_P$ (Molar heat capacity at constant pressure) are intensive but state functions.\"\n\nFirstly, I don't understand why $C$ is an intensive property, especially because it does depend on the mass of the system. It does, however, make sense to say that $C_V$ and $C_P$ are intensive properties, since the heat considered in the calculations involving these two, is the heat per mole of the substance and this quantity will remain constant for any amount of the same substance.\n\nSecondly, I am very confused as to why $C$ is a path function where as $C_V$ and $C_P$ are not. I am not able to understand whether they ought to be path\/state functions, because on the one hand, temperature is a state function, where as heat is only defined for a process.\n\nIn freshman physics, we learned that, when heat is added to a constant volume system, we can write Q = mC\u0394T, where C is called the specific heat capacity. However, when we got more deeply into the basics and learned thermodynamics, we found that this elementary approach is no longer adequate (or precise). We found that Q depends on process path and that, if work W is occurring, this changes things. However, we still wanted C to continue to represent a physical property of the material being processed, and not to depend on process path or whether work is occurring. This is dealt with in thermodynamics by changing the definition of C a little. Rather than associating C with the path dependent heat Q, in thermodynamics, we associate C with parameters relating to the state of the material being processed, in particular specific internal energy U and specific enthalpy H. We define the specific heat capacity at constant volume $C_v$ in terms of the derivative of the specific internal energy U with respect to temperature at constant volume: $$C_v=\\left(\\frac{\\partial U}{\\partial T}\\right)_v\\tag{1}$$ We also found that we could define a specific heat capacity at constant pressure $C_p$ as the derivative of the specific enthalpy H with respect to temperature at constant pressure:$$C_p=\\left(\\frac{\\partial H}{\\partial T}\\right)_p\\tag{2}$$ The question is, \"do either of these definitions reduce to the more elementary version from freshman physics under any circumstances.\" The answer is \"yes.\" From the first law of thermodynamics, we find that, for a closed system of constant volume (no work being done), $Q=m\\Delta U=mC_v\\Delta T$, and, for a closed system experiencing a constant pressure change (with $W=p\\Delta v$), $Q=m\\Delta H=mC_p\\Delta T$. Of course, Eqns. 1 and 2 are much more generally applicable than this.\n\nFor a given process, the heat added divided by the temperature change of the system (I am assuming they are calling this C) varies with the amount of work that is done. Like you, I can't see why they would possibly call this an intensive property, although it is certainly a path function. 
Maybe intensive is a typo, and they meant extensive.\n\nThe specific heat capacities Cv and Cp are intensive state functions, because they are defined as the partial derivatives of the specific internal energy and the specific enthalpy, respectively, with respect to temperature (the former at constant volume and the latter at constant pressure), and the specific internal energy and specific enthalpy are state functions.\n\n\u2022 Thanks Chester, that extensive\/ intensive thing ( that might be a typo), was giving me a headache, my source is p163 of Schroeder thermal physics. \u2013\u00a0user108787 Sep 14 '16 at 16:11\n\u2022 Oh, but $C_V$ and $C_P$ haven't been defined like that in my textbook. They are defined simply as \"heat capacity divided by number of moles at unit volume\/pressure\", which is why I wasn't getting a clear picture as to why they are state functions, where as $C$ is not. I'll look this up some more. Thanks anyway :-) \u2013\u00a0user106570 Sep 14 '16 at 23:37\n\nIf you divide one extensive property by another extensive property, you arrive at an intensive property.\n\nIf you multiply one extensive property by an intensive property, you arrive at an extensive property. So volume by density is mass.\n\nFrom State Functions, which gives a good introduction.\n\nIf certain property is a state function , keep this rule in mind: is this property or value affected by the path or way taken to establish it? If the answer is no, then it is a state function, if is yes, then it is not a\u00a0state function.\n\nIn thermodynamics, a quantity that is well defined so as to describe the path of a process through the\u00a0equilibrium state\u00a0space of a\u00a0thermodynamic system\u00a0is termed a\u00a0process function,\u00a0or, alternatively, a\u00a0process quantity, or a\u00a0path function. As an example,\u00a0mechanical work\u00a0and\u00a0heat\u00a0are process functions because they describe quantitatively the transition between equilibrium states of a thermodynamic system.\n\nPath functions depend on the path taken to reach one state from another. Different routes give different quantities. Examples of path functions include work,\u00a0heat\u00a0and\u00a0arc length. In contrast to path functions,\u00a0state functions\u00a0are independent of the path taken. Thermodynamic\u00a0state variables\u00a0are point functions, differing from path functions. For a given state, considered as a point, there is a definite value for each state variable and state function.\n\n\u2022 No problem, there are so many thermodynamic equations and rules, I keep a separate notebook for them. 
\u2013\u00a0user108787 Sep 14 '16 at 23:50","date":"2021-02-28 19:07:07","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 1, \"mathjax_display_tex\": 1, \"mathjax_asciimath\": 0, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.8694758415222168, \"perplexity\": 255.57073710133577}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2021-10\/segments\/1614178361723.15\/warc\/CC-MAIN-20210228175250-20210228205250-00298.warc.gz\"}"}
null
null
Letter: Morgan Hill not as safe as it used to be By: submitted What happened to the small farm-town that we all knew and grew to love, the town where there was only one high school and everyone knew each other? The small town of Morgan Hill is now experiencing shootings (Police: Man killed in Gilroy shootout led officers to location, Sept. 10) and robberies, as its neighboring city Gilroy is. In my 21 years here, the first shooting I had heard about in our small town was Michael Duran, an 18-year-old who was killed in a hit and run. This greatly impacted our small town. After this occasion it has become more common to hear of shootings around us. Just as occurred on Sept. 8 in Gilroy, with David Lopez and a police officer. Lopez was a suspect in a prior shooting that occurred on Aug. 31, where there was one victim who was left in critical condition. The shooting that occurred on Sept. 8 left the suspect, David Lopez, dead. We need to regulate more gun control in our city/surrounding cities as well as more police officers patrolling the roads we roam every day. There should be no reason an 18-year-old adolescent has access to a gun where they can seriously harm themselves or others. Morgan Hill is a small town with three exits off the freeway. You can easily go from one end of Morgan Hill to the other in no more than 10 minutes. If police officers patrolled more, possibly one time every hour, people would not only feel safer but crimes may go down knowing that there is an officer around. We should not live in fear, we should feel safe driving around town and letting adolescents go out with friends and not having to fear if they will come back home alive. As a lifelong resident of Morgan Hill, we deserve to feel safe in our small town. We need change. Daisy Fernandez This author byline indicates that the post was contributed by a member of the community.
{ "redpajama_set_name": "RedPajamaCommonCrawl" }
5,173
I'm leading with my strengths. I love high contrast black and white and I feel this is where my best work is done. These are all covers for self published games or short stories. Crazy ideas from distant lands that flow through my head. 3d sculpts were done by Justin Stahler based on designs by me. © 2017 by 23rdArchive. All Rights Reserved.
{ "redpajama_set_name": "RedPajamaC4" }
7,249
Q: What did I do wrong while debugging a python script mod for The Sims 4? I was trying to debug a modified python script for The Sims 4, running the mod's code in the game context and trying to access the variables using the pydevd-pycharm package and a python debug server, both provided by PyCharm Pro. Although I followed the necessary instructions and settings (described below), I was still unable to successfully debug. The method I used to perform the debug attempt is as follows: * *Inside the PyCharm Pro installation files (version 2022.3.2), I took a copy of the file "pydevd-pycharm.egg" and changed the extension of that file to ".zip" so that I could edit it; *Inside the python installation files (version 3.7.0), I made a copy of the "ctypes" directory and inserted it inside the "pydevd-pycharm.zip" file created in the previous step; this ".zip" file was inserted into the Mods directory, like any other mod; *I configured a python debug server in Pycharm Pro and created a command for The Sims 4 that contained the code to connect to the debug server. The command was as follows: import sims4.commands @sims4.commands.Command('start.debug', command_type=sims4.commands.CommandType.Live) def startdebugging(_conection=None): import pydevd_pycharm pydevd_pycharm.settrace('localhost', port=5678, stdoutToServer=True, stderrToServer=True) *I also inserted the command as any mod in the Mods directory and started the python debug server (which was waiting for a connection); I minimized PyCharm Pro and started the game; already inside the game, I started the command start.debug; if everything had gone well, I would be debugging the mod script now. (If you want to know more about the method I used, the tutorial link is: https://youtu.be/RBnS8m0174U) In conclusion, i want to know why the method I ran ended up not working and, if possible, suggestions on how I can modify the debug method so that it finally works.
{ "redpajama_set_name": "RedPajamaStackExchange" }
1,180
export default class <%= upCaseName %>Controller { constructor() { this.name = '<%= name %>'; } }
{ "redpajama_set_name": "RedPajamaGithub" }
4,767
Robert Lascar, né le à Redon (Ille-et-Vilaine), est un homme d'affaires français, président du groupe de distribution textile Omnium (enseignes Devred et Bouchara). Biographie Robert Lascar est né dans une famille de commerçants. Son grand-père, Léon Lascar, a créé en 1921 un magasin à Brest : maison Léon, renommé en Léon soldeur, une des premières solderies française. Ses 3 fils, Richard, Robert (père) et Edmond, continuent les activités commerciales de leur père. Robert (père) a 4 enfants : Nicole, Robert, Marie-Christine et Philippe. Robert effectue sa scolarité au collège St Sauveur de Redon, et arrête ses études en seconde pour commencer à travailler avec son père. Il acquiert la bosse du commerce en faisant les marchés, puis en travaillant au magasin du quartier Siam à Brest, ensuite à celui de Rennes. En , au décès accidentel de son père, propriétaire d'une dizaine de magasins dans l'Ouest, il lui succède pour gérer l'affaire, en y associant la famille. En 1980, il fonde Eurodif, à l'origine du groupe Omnium. Il décide ensuite de développer le groupe par acquisitions. En 1991, il achète la marque Burton of London, puis en 1992, la marque Bouchara. En 1996, il acquiert l'enseigne Devred, réseau de 100 magasins de prêt-à-porter masculin, puis en 1998 les 118 magasins Maxi-Livres (revendu en 206 pour recentrer les activités). En 2020, le groupe Omnium dirigé par Robert Lascar compte 545 magasins et près de . Autres activités Robert Lascar est membre du Club des Trente, club de réflexion et d'action au service de la Bretagne qui regroupe une soixantaine de grands patrons bretons. Robert Lascar a co-fondé l'association Mécénat Bretagne, association de particuliers et d'entreprises bretonnes ayant pour objet de préserver et enrichir le patrimoine breton. Depuis la montée en Ligue 1 du club de football Stade brestois 29, Robert Lascar est partenaire du club. Robert Lascar a racheté le Comœdia, ancien cinéma de Brest, pour en faire une galerie d'art. Références Liens externes Site du Groupe Omnium Naissance à Redon Naissance en novembre 1944 Homme d'affaires français
{ "redpajama_set_name": "RedPajamaWikipedia" }
1,111
@interface THOverlayView () @end @implementation THOverlayView - (void)awakeFromNib { [super awakeFromNib]; self.backgroundColor = [UIColor clearColor]; self.statusView.backgroundColor = [UIColor colorWithWhite:0.0f alpha:0.5f]; } @end
{ "redpajama_set_name": "RedPajamaGithub" }
2,458
{"url":"https:\/\/www.transtutors.com\/questions\/approximated-net-realizable-value-method-avignon-parfum-compagnie-makes-three-produ-563421.htm","text":"# (Approximated net realizable value method) Avignon Parfum Compagnie makes three products that can... 1 answer below \u00bb\n\n(Approximated net realizable value method) Avignon Parfum Compagnie makes three products that can either be sold, or processed further and then sold. The cost associated with the Avignon joint process is $120,000. Sales Separate Final Units of Prices at Costs after Sales Product Output Split-Off Split-Off Price Product 1 7,500$3.00 $1.00$4.25 Product 2 10,000 2.00 0.50 3.00 Product 3 12,500 2.00 0.75 3.00\n\nPer unit, Product 1 weighs 3 ounces, Product 2 weighs 2 ounces, and Product 3 weighs 3 ounces. Assume that all additional processing is undertaken.\n\na. Allocate the joint cost based on the units of output, weight, and approximated net realizable values at split-off.\n\nb. Assume all products are additionally processed and completed. At the end of the period, the inventories are as follows: Product 1, 500 units; Product 2, 1,000 units; Product 3, 1,500 units. Determine the values of the inventories based on answers obtained in part (a).\n\nRamesh\njoint cost ANSWER (b) METHOD 1 UNITS OF OUTPUT Joint cost per unit Further cost per unit total cost per unit STOCK VALUATIONS Product 1 7,500 30000 4 1 5 2500 Product 2 10,000 40000 4 0.5 4.5 4500 Product 3 12,500 50000 4 0.75 4.75 7125 30,000 joint cost METHOD 2 weight in ounces Joint cost per unit Further cost per unit total cost per unit STOCK VALUATIONS Product 1 22,500 33750 4.5 1 5.5 2750 Product 2 20,000 30000 3 0.5 3.5 3500 Product 3 37,500 56250 4.5 0.75...\n\n## Plagiarism Checker\n\nSubmit your documents and get free Plagiarism report\n\nFree Plagiarism Checker\n\n## Recent Questions in Cost Management\n\nLooking for Something Else? Ask a Similar Question","date":"2020-11-26 09:46:34","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 1, \"mathjax_display_tex\": 0, \"mathjax_asciimath\": 0, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.29715144634246826, \"perplexity\": 5426.29479820769}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 20, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2020-50\/segments\/1606141187753.32\/warc\/CC-MAIN-20201126084625-20201126114625-00358.warc.gz\"}"}
null
null
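Since the quoted answer only tabulates the units-of-output and weight bases (and is cut off mid-table), here is a small Python sketch that recomputes those two allocations and the resulting closing-inventory values from the figures given in the problem. It is an illustrative check added here, not part of the original posting, and all names in it are my own.

    # Illustrative check of the joint-cost allocations in the Avignon Parfum problem.
    JOINT_COST = 120_000  # cost of the joint process, from the problem statement

    # name: (units produced, further cost per unit, ounces per unit, units in closing stock)
    products = {
        "Product 1": (7_500, 1.00, 3, 500),
        "Product 2": (10_000, 0.50, 2, 1_000),
        "Product 3": (12_500, 0.75, 3, 1_500),
    }

    def allocate(base):
        """Split JOINT_COST in proportion to the per-product values in `base`."""
        total = sum(base.values())
        return {name: JOINT_COST * value / total for name, value in base.items()}

    # Method 1: allocation base = units of output.
    by_units = allocate({name: units for name, (units, _, _, _) in products.items()})
    # Method 2: allocation base = total weight (units produced * ounces per unit).
    by_weight = allocate({name: units * oz for name, (units, _, oz, _) in products.items()})

    for label, allocation in (("units of output", by_units), ("weight", by_weight)):
        print(f"--- allocation by {label} ---")
        for name, (units, further, _, stock) in products.items():
            cost_per_unit = allocation[name] / units + further  # joint share + further processing
            print(f"{name}: joint share {allocation[name]:>9,.0f}  "
                  f"cost/unit {cost_per_unit:5.2f}  "
                  f"closing stock value {stock * cost_per_unit:>8,.0f}")

On the units basis this reproduces the 2,500 / 4,500 / 7,125 stock valuations quoted above; on the weight basis it reproduces the 2,750 and 3,500 figures and also yields the Product 3 value that is cut off in the source.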
Produced by Larry Harrison, Cindy Beyer, Ross Cooling and the online Distributed Proofreaders Canada team at http://www.pgdpcanada.net with images provided by CANADIANA BY THE SAME AUTHOR. THE RAID FROM BEAUSEJOUR, and How the Carter Boys Lifted the Mortgage. Two Stories of Acadie. Illustrated $1.00 REUBE DARE'S SHAD BOAT A Tale of the Tide Country BY CHARLES G. D. ROBERTS [Illustration] NEW YORK: HUNT & EATON CINCINNATI: CRANSTON & CURTS 1895 Copyright by HUNT & EATON, 1895. Composition, electrotyping, printing, and binding by HUNT & EATON, 150 Fifth Ave., New York. CONTENTS. CHAPTER I. The _Dido_ Goes Adrift CHAPTER II. The Red Bull CHAPTER III. The Chase of the _Dido_ CHAPTER IV. The Cave by the Tide CHAPTER V. A Prison House CHAPTER VI. The Blue Jar CHAPTER VII. Mart Gandy Hacks the Shad Net CHAPTER VIII. A Midnight Visitor CHAPTER IX. The _Dido's_ First Fishing Trip CHAPTER X. Besieged on the Sand Spit CHAPTER XI. Foiling the Sharks CHAPTER XII. The Shot from the Rocks CHAPTER XIII. Gandy is Rescued from the Honey Pots ILLUSTRATIONS. "She's adrift!" he shouted. "Come on! Come on!" The bull swerved slightly and shot past Will marched ahead, carrying the torch It was coin—all coin! Then came the shining, silvery sides of a dozen shad "I think we'll make it," he said to himself Will and Reube bent their bodies to the pull REUBE DARE'S SHAD BOAT. A Tale of the Tide Country. CHAPTER I. The "Dido" Goes Adrift. THE road from Frosty Hollow to Westcock, after climbing the hill by the red creek and passing Mrs. Carter's yellow cottage, ran through a piece of dark and ancient fir woods. With the sighing of the firs there mixed a deeper sound, the voice of the wild tides of the changing Tantramar, unseen and far below. Turning sharply to the right, the road presently emerged from the woods and came upon a very different picture from that which it had left behind. It traversed the face of a long, wide, steep slope of upland, set here and there with a gray or white cottage, here and there a little grove. From the upland foot a mile-wide belt of marsh stretched to the waters of the open bay. The pale-green marsh was divided sharply from the yellow and flashing waves by the long lines of the dike, to which it owed its existence as good dry land. At intervals could be seen small creeks winding through the grassy level. Every creek mouth formed a little haven, clustered about with net reels, and crowded with the boats of the shad fishers. Out from the whispering wood and into the fresh June sunlight of the open came two tallish youths, walking slowly and talking with the joyous zest of old friends who had been long parted. The older-looking of the two was Will Carter, just home from college for the summer vacation. Two years of college life had changed him little. He was the same slim, thoughtful, discreet, yet blithely dauntless lad who had lifted the mortgage from his mother's farm and punished the ruffian Baizley, and softened the hard old heart of Mr. Hand.[A] College study had increased the somewhat scholarly pallor of his face, but college athletics had added poise and grace to the movements of his well-knit muscles. He had hastened home to his mother immediately on the close of the college, leaving his brother Ted to take a month's canoe trip through the inland waters. Will's present companion, Reuben Dare, was a chum only second to Ted in his love. Reube Dare was just eighteen. He was about the same height as Will, but of a much heavier build.
His was also a heavier and slower nature, but one of faithful loyalty and courage combined with strong common sense. His hair was light like Will's, but his face was round and ruddy. At a hasty glance one might fancy that he was good-natured to the verge of being "soft," but there was a steady, controlling gleam in his light gray eyes which made folk very slow to presume on his good nature. In fact, his eyes gave one the peculiar impression of having reached full manhood before the rest of his face. He swung his long arms loosely as he walked, and occasionally he stumbled in the ruts, being too much absorbed in watching his comrade's words to note just where he was stepping. It had long been Reube Dare's keenest ambition to put himself through college, but the poverty of his widowed mother—the population of that land of sailors and fishermen is largely made up of widows—had stood sternly in the way. The success of the Carter boys, however, in reclaiming that rich marsh by the creek had proved a strong stimulus, and given him new hopes, with results which this story will show. All at once Will Carter, who had been talking eagerly for the last half hour, stopped short, wiped his forehead, and perched himself on the rail fence under a shady roadside maple. Reube leaned against the fence, and took off his round straw hat. "Now, Reube," said Will, "it's your turn. I've talked myself dry, and gabbled right along like the 'crick' at low water. Your letters, you old oyster, have told me mighty little. What have you been up to all winter?" "Building my shad boat," answered Reube. "Mother told me something about it. It's great, old man!" said Will. "But you don't mean to say you built her all yourself." "Well, pretty near," replied his friend. "Old Chris Boltenhouse helped me with the frame, and set me right whenever I got in a muddle. It was hard work, but I tell you, Will, it was so interesting I could hardly take time to eat. I've thought of nothing else for months, except when I was worrying over mother's eyes, and now—" "I heard about your mother's trouble with her eyes," interrupted Will, sympathetically. "I do hope it's not going to be serious." "Worries me a lot," said Reube, gloomily. And then, his face brightening again, he went on, "But now I've got her done, and rigged and tarred and afloat at Wood Creek landing." "Reube," interrupted Will again, and this time in a tone of severe surprise, "what a singular way to treat your mother! I cannot imagine that dignified lady in any such absurd situation as you speak of." "Come off!" retorted Reuben, very literally, as he caught at Will's ankle and, with a quick twist, jerked him from his perch. "I'm not talking of mother, but of the _Dido_, and I say there's not a trimmer craft will go shad fishing from Westcock this season. I tell you, Will, I've just put my heart into that boat. If it were not for that grove of Barnes's we could see her now, lying with the others, in the mouth of the creek; and even at this distance you could pick her out from the rest." "Well," said Will, "let's get along and inspect her as soon as possible. I'm as tickled about her as if I'd built her myself; and I'm going to help you with the fishing all I can, as my holiday diversion. Did she cost you much? Is she going to _pay_, like _new marsh_?" "If she has a lucky summer," answered Reube—"and they do say there's going to be a great run of shad this season—I'll have her all paid for and quite a lump of money in the bank this fall." "And then!" 
said Will, in a voice of joyous anticipation. "What then? College with us, for the winter term, anyway! And maybe a scholarship that will still further simplify matters!" "No!" exclaimed Reube, shaking his head gravely. "No college for me till I have had mother away to Boston or New York, to get her eyes properly seen to." Will's face fell a little. "That's so, old man. The eyes must be fixed up first of all, of course. But if the boat's a success, another season will straighten it all out, eh? And when you come to college you'll be a freshman, while I'm a senior! Won't I haze you though?" "Come and practice a bit now!" said Reube, grimly. Will ignored this invitation. "What did you say you called the boat?" he queried. "The _Dido_," answered Reube. "Imagine the stately queen of Carthage going out shad fishing!" chuckled Will. "What struck you to choose that for a name?" "O," said Reube, gravely, "it will serve to keep my aspirations before my mind's eye, even when I am occupied in the prosaic task of splitting shad." At this moment a long, shambling figure was seen climbing a fence some distance down the hill, to the left of our pedestrians. Long, lank black hair fell on his shoulders from beneath a black and greasy slouch hat. Immediately the fellow disappeared in a choke-cherry thicket, after turning a furtive, swarthy face for one moment toward the road. "How's your hereditary enemy behaving himself these days, Reube?" inquired Will. "Well," said Reube, "Mart Gandy's Mart Gandy, same as he always was. But it seems to me that of late he has been troubling his neighbors less and himself more than he used to. They say he's seldom quite sober. He's left us alone pretty much all winter, though he did shoot one of my best sheep in the upper pasture along in the first of the spring." "But didn't you punish him for it?" asked Will, indignantly, glaring back at the cherry trees wherein Gandy had vanished. "I didn't actually catch him, or I would have," said Reube. "And I didn't want to have him taken up, for, bad lot as he is, he does look after his mother and sisters in a kind of a way, and he is all they have to depend on; for his drunken old father has become a regular idiot, doing nothing but sit in the sun, pick at his beard, and whimper for a drink." By this time they had reached the top of a knoll, whence the whole shore line was visible. "There's the _Dido_!" exclaimed Reube, proudly, turning with a sweep of the hand toward the mouth of Wood Creek. But the words ended in a cry of anger and anxiety. "She's adrift!" he shouted. "Come on! Come on! We must catch her before she gets out of the creek. The wind's right down the bay!" As he spoke he vaulted over the fence and started on a run across the fields. Will was at his side in an instant. "How can it have happened?" he asked. "Gandy's work, I'll be bound!" muttered Reube, between his teeth; and his eyes grew pale and bright like steel. ----- [A] Professor Roberts has already told the spirited story of "How the Carter Boys Lifted the Mortgage," in a volume, _The Raid from Beauséjour_, which is published by Hunt & Eaton, New York. [Illustration: "She's adrift!" he shouted. "Come on! Come on!"] CHAPTER II. The Red Bull. THE short cut which Reube was taking across the fields and marshes was calculated to diminish by a good half mile the distance which separated him from his beloved boat. But it was a path beset with obstacles. 
Will Carter saw all these—the long strip of bog and alders at the foot of the upland; then the gluey stretch of "broad-leaf" marsh, passable enough at a later season, but now a mire with the spring rains; and beyond, furrowing the firm levels of young timothy and clover, the windings of a creek which he knew was, in most places, too wide to jump, and too deep to ford. With what breath he could spare—for his excited comrade was setting a terribly stiff pace—he spasmodically exclaimed, "We'd save time, Reube, by keeping to the road. We'll be tangled up and stuck here the first thing we know; and the _Dido_ will be off on her own hook to seek the ruins of Carthage." But Reuben made no answer. He saw no obstacles. All he could see was the far-off red stream, with the _Dido_, only a little way inside the line of the dikes, veering gently and aimlessly from one green bank to the other, but steadily creeping seaward with the current. Well he knew how soon, with the falling tide, this current would quicken its pace. Once let the _Dido_ get outside the creek, and he knew not what might happen to her. She would certainly be off down the bay at a speed which it appalled him to think of. And now, running in grim silence, Reube and Will drew near the foot of the uplands. Heavily, and with no waste of energy, they flung themselves over a peculiarly massive rail fence, and entered a spacious pasture. The field was dotted with mossy hillocks and a few low spruce bushes, between which the grass grew short and thick. Two or three wide-armed maple trees, standing far apart, relieved the vacancy of the sloping expanse, which ended in a broad fringe of alder swamp, spreading its labyrinth of black roots and bog holes a hundred yards out upon the marsh. As they ran, threading their way among the bushes, and springing from hillock to hillock, they heard an ominous grunting bellow on their right, and turning sharply they saw a large dark-red bull stepping out from under the shade of a maple tree. The animal bellowed again, deep in his throat; and running his horns into the nearest mound, tossed into the air a little shower of turf and moss. This was an honest challenge, but our runners were in no mood to accept it. "This seems to be his bullship's private domain!" panted Will. "I wonder if he's really as mad as he looks, or just bluffing?" "No bluffing there!" muttered Reube, in a voice of anxious concern. "It's Barnes's bull, and he means every word of it! We're in a muss, and we've just got to run for all we're worth. I wish we'd stuck to the road!" As he spoke the bull, seeing his challenge unanswered, charged like a great red thunderbolt. The boys rose into a fine burst of speed; but ere they were halfway across the field Reube felt his legs and wind failing. He vowed inwardly that he would not, could not break down, and he wondered in his heart how Will was holding out. Will was a little ahead, being the lighter runner; but his pace was flagging, and the bull was now gaining upon them with dreadful rapidity. Under fair conditions the fierce and active animal could have given his rivals a hard race; but now, fagged from their long run down the hill, they were no match for him. He was not more than fifty feet behind them, when their course took them right under one of those spreading maples. "No use!" gasped Will. "Up with you, Reube!" And springing desperately into the air, he caught a branch and swung himself up into safety. But Reube was not one who could change his purpose thus rapidly. "The _Dido_!" 
he groaned; and, pausing under the tree, he glanced irresolutely from the sea to his pursuer. "Come up, quick!" yelled Will, his voice as sharp and inflexible as an ax blade. Reube saw that there was no help for it. His eyes glared fury at his pursuer, as a tiger glares at the hunters when he reluctantly retires before them, and he started to climb the tree. But his stubbornness was all but fatal. He grasped at a branch, and, missing his hold, fell back. He repeated the attempt, this time more eagerly, but again he would have missed and would have felt the bull's horns pinning him to the tree had it not been for Will's readiness of action. Locking his legs between two branches, Will reached down, grasped his comrade under the shoulders, and with a mighty effort swung him around to the other side of the trunk. The bull swerved slightly and shot past. Half climbing, half dragged up by Will, Reube found himself safe among the branches ere the bull had checked its rush and returned to the attack. "You saved me that time, Will," said Reube, in a somewhat shaky voice, grasping his companion's hand and wringing it hard. "But that was an awful grip of yours. I think every finger took a piece out of me!" Will grinned inscrutably, and it flashed across Reube's mind that the severity of the grip had had some connection with his own obstinate delay in seeking safety. But the next instant all else was forgotten in his anxiety about the _Dido_, which was plainly visible through an opening in his leafy refuge. The boat had grounded for a moment on a grassy point, and now the quickening current wrenched her off again and carried her with slow gyrations beyond the very last of the landing slips. Fifteen minutes more, at this rate, and she would be in the open. "I can't stand this, Will! I must try another dash," he groaned. Immediately beneath was the bull, snorting and bellowing, thrusting with his great forehead against the trunk, and pawing the young turf so energetically that it seems as if he aimed at uprooting the tree. "All right, old man," said Will. "Run right along now, and I'll wait here for you. Or perhaps you will mount the gentle steed beneath us and ride to your destination." To this Reube vouchsafed no answer. He sat silent on his branch, glowering across the marshes, and eating his heart in helpless wrath, while Will, stretched face downward across the limbs, eyed the bull pensively, and cudgeled his brains for a way out of the dilemma. Suddenly he straightened himself with a radiant face, and exclaimed: "I have it, Reube! We'll trick his exasperated bullship and catch the _Dido_ yet!" But while the words were yet on his lips the bull lifted his head high, gazed out across the field for a second or two, and then dashed off at the same terrific gallop which had so nearly proved disastrous to our heroes. He had seen a burly, red-shirted figure traversing the upper corner of his field. It was seldom, indeed, that anyone other than his master, the only man he feared, presumed to enter the precincts of his sway, and here, in one morning, were three trespassers. The bull, blind with rage, charged upon the red-shirted figure, and the red-shirted figure, after facing him for a few seconds, turned and fled for the fence. "It's John Paul! He'll get away safe enough," said Reube. "But what's your plan?" "Got a better one by this time, old man," replied Will, dropping out of the tree—"just to cut while his bullship is otherwise engaged." 
And side by side the two sped on toward the shelter of the alders. Before they got far the bull, having routed red-shirt and snorted at him loudly through the rails, turned, discovered their flight, and came once more thundering at their heels. But this time he had allowed his rivals too much handicap. Before he could get anywhere near them Will and Reube were among the alders. Once there, the big red bull could not match their speed. He floundered, foaming and grunting, through the shallow pools, and the deeper ones he had to skirt. The boys, on the other hand, sprang lightly from root to hillock, from hillock to elastic, reedy tuft, swinging across the pools on the long, bending stems of the alders, and soon leaving their persecutor far behind. They reached the fence, vaulted it, emerged upon the open marsh, and there before them, still half a mile away, was the _Dido_, wheeling gracefully out from the mouth of the creek. [Illustration: The bull swerved slightly and shot past.] CHAPTER III. The Chase of the "Dido." REUBE uttered a cry of something like despair. "Now, old man, what's the matter with you?" queried Will, reprovingly. "Do you suppose the _Dido's_ gone? Why, you old chump, we'll take one of the other boats and go after her. With this wind we'll catch her before she goes half a dozen miles. She won't get past the Joggins, anyway, I'll bet you a red herring!" Reube's face brightened, beamed broadly, and resumed its old boyish frankness. "Why, that's so!" said he. "That's just what we'll do. What a perfect fool I'd be sometimes, Will, if you didn't keep an eye on me!" That half a mile across the marsh proved a long one owing to the many detours which our runners, now trotting slowly and deliberately, were forced to make by the windings of the full creek. At last they reached the landing place where the _Dido_ had been moored. About the rickety old wharf stood four or five high reels, skeletons of light gray wood wound with the dark-stained folds of the shad nets. The fishing season was right at hand, but had not yet begun. Around the boats and the reels were many half-obliterated footprints, left by the feet of those who had been winding the nets and pitching the seams of the boats. Of fresh tracks there was but one set—the tracks of someone with long, narrow feet, who walked without turning out his toes. To these tracks Reube pointed with grim significance of gesture. "Yes," said Will, "I understand. Did you ever see a plainer signature than Mart Gandy makes with his feet?" The smallest of the fishing boats at the wharf was a light "pinkie"—a name given by the Tantramar fishermen to a special kind of craft with the stern pointed like the stem. The pinkie, painted red and white instead of blackened with tar like the other boats, was a good sailer. She belonged to Barnes, the owner of the red bull; and to Reube's judicial mind it seemed appropriate that she should be taken without leave. There was a further inducement in the fact that she could be got afloat more easily than any of the other boats. The tide had fallen so that her keel was high and dry; and the fine mud of Tantramar gripped it with astonishing tenacity. But after a few minutes of such straining as made the veins stand out on Will's forehead, and brought a redness about Reube's steel-gray eyes, she was afloat. Up went her dainty jib; up went her broad white mainsail; and presently the red-and-white pinkie with Reube at the helm was nimbly threading the sharp curves of the creek. 
After a succession of short tacks the channel straightened, and heeling far over with the strong wind on her quarter the pinkie ran into the open with the tawny surf hissing at her gunwale. Reube held his course till they were a couple of hundred yards out, dreading some hungry shoals he knew of. Then he let out the sheet, eased up on the tiller, and put the pinkie's head straight down the bay on the _Dido's_ track. Will loosened out the jib, belayed it, and lay down on the cuddy in its shadow. The _Dido_ was out of sight beyond the rocks and high oak trees of Wood Point. A stern chase, as has been said from of old, is a long chase; and while the red-and-white pinkie was scudding before the wind and shearing the yellow waves with her keen bow, Reube and Will had to curb their impatience. They did not even whistle for more wind, for they had all the wind the pinkie could well endure. When their ears had grown used to the slap and crumbling rush of the foam-wave past their gunwale they spoke of Mart Gandy. Reube Dare's father, whose farm adjoined that of the Gandys, had got himself embroiled with old Gandy over the location of the dividing line. While Reube was yet a very small boy old Gandy had pulled down the dilapidated line fence during one of Captain Dare's absences, and had put up a new one which encroached seriously on the Dares' best field. On Captain Dare's return he expostulated with Gandy; and finding expostulation useless he quietly shifted back the fence. Then his ship sailed on a long voyage to the Guano Islands of the Pacific; and while he was scorching off the rainless coasts of northern Peru, Gandy again took possession of the coveted strip of field. From this voyage Captain Dare came back with broken health. He gave up his ship, settled down on the farm overlooking the marshes, and called in the arm of the law to curb old Gandy's aggression. The fence had by this time been moved backward and forward several times, each time leaving behind a redder and more threatening line of wrath. When the case came into court the outcome was a surprise to both contestants. There were rummaging out of old titles and unearthing of old deeds, till Captain Dare's lawyer made it clear not only that Gandy's claim was unfounded, but also that before the dispute arose Gandy had been occupying some three acres of the old Dare property. The original grant, made a hundred years earlier to Captain Dare's grandfather, required that the line should run down the middle of old Gandy's sheep pasture—a worthless tract, but one which now acquired value in Gandy's eye. Down the pasture forthwith was the new fence run, for Captain Dare, fired to obstinacy by his neighbor's wanton aggression, would take no less than his rights. Then, the victory assured to him, the captain died, leaving to his widow and his boy a feud to trouble their peace. The farm was productive, but for some years old Gandy had vexed them with ceaseless and innumerable small annoyances. When the old man sank into imbecility, then his son Mart, a swarthy and furtive stripling, who betrayed the blood of a far-off Indian ancestor, took up the quarrel with new bitterness. In Mart Gandy's dark and narrow soul, which was redeemed from utter worthlessness by his devotion to his family, hatred of the Dares stood as a sacred duty. It was his firm faith that his father had been tricked by a conspiracy between judge, jury, and lawyers. 
The persistency of his hate and the cunning of his strokes had been a steady check upon the prosperity of Reube and his mother. In answer to a remark of Reube on this subject Will exclaimed, "But you've got him all right this time, old man. There can be no difficulty in identifying those footprints." Reube laughed somewhat sarcastically. "Do you suppose," he inquired, "that the tide is going to leave them as they are while we go after the _Dido_, fetch her back, and then go and get those holes in the mud examined by the authorities?" "Well, perhaps my suggestion was hasty," acknowledged Will. After an hour's run Wood Point was left behind, and there was the _Dido_ not a mile ahead and well inshore. She had been delayed in the eddies of the cove below the Point. Reube gave a shout of joy and twisted his helm to starboard, while Will warned him to look out for the mud flats with which the cove was choked. "O," said Reube, confidently, "I know the place like a book." The red-and-white pinkie was now rapidly overhauling the vagrant craft when a stiff current caught the latter and she began to race along the curve of the farther shore. Reube was anxious to catch her before she should round the next headland, and get back into rough water. The headland was a low, humped promontory of mingled plaster rocks and yellowish sand, without a tree upon its grassy crest. Shifting his course to intercept the _Dido_, Reube steered the pinkie straight for the point. Just then the _Dido_ was seen to give a lurch, stop short, and keel over to the gunwale. "She's run aground!" cried Will. "But we've got her safe and will sail her back on next tide," said Reube, heaving a sigh of relief as he saw that his beloved craft stood still, refusing to be rolled over by the push of the yellow tide upon her ribs. The pinkie was sailing at a great pace. "Better take in the jib, Will," said Reube. Will sprang up to obey. Just as he rose there was a staggering shock. The pinkie buried her nose in a hidden mudbank. The waves piled over her gunwales; the mast bent without breaking, like the brave, tough timber it was; and Will shot overboard headlong into the foam. CHAPTER IV. The Cave by the Tide. ACTING instantly on the impulse of an old sailor, Reube had sprung forward almost with the shock, and started to haul down the mainsail in order to relieve the strain. The next moment, however, while the half-lowered sail was bulging and flapping, he leaped into the bow to help Will. The latter rose with a gasp and stood waist deep, clinging to the bowsprit. His head and arms were bedaubed grotesquely with the mud into which he had plunged with such violence. He gazed sternly at Reube, and exclaimed: "Perhaps you'll claim that you know these mud banks as well as I do! I earnestly hope you may, some day, gain the same intimate knowledge of them!" Then he climbed aboard and finished the furling of the sails, while Reube rolled convulsively in the bottom of the boat, unable to control his laughter. He recovered himself only when Will trod upon him without apology, and threatened to put him overboard. When the sails had been made snug, and the pinkie bailed out, and the mud cleaned with pains from Will's face and hair and garments, there was nothing to do but watch the _Dido_ in the distance and wait for the tide to fall. In another half hour, or a little more, only a waste of red flats and yellow pools separated the two stranded boats. Reube took off his shoes and socks, rolled his trousers up high, and stepped overboard. 
These precautions were for Will superfluous; so he went as he was, and congratulated himself on being able to defy all hidden clam shells. Before he went, however, he took the precaution to put out the pinkie's anchor, for which Reube derided him. "The pinkie's no Western stern-wheeler, to navigate a field of wet grass!" said he. "I fancy she'll wait here till next tide all right!" "Yes—but then?" queried Will, laconically. "Then," replied Reube, "we'll come back for her with the _Dido_." "There's lots one never knows!" said Will, as he looked carefully to the anchor rope. And as things turned out it was well he did so—a fact which Reube had to acknowledge penitently. The distance between the stranded boats was little more than a quarter of a mile, yet it took the boys some time to traverse it. The bottom of the cove was for the most part a deep and clinging ooze, which took them to the knee at every step, and held their feet with the suction of an airpump. Here and there were patches of hard sand to give them a moment's ease; but here and there, too, were the dreaded "honey pots" for which that part of the coast is noted, and to avoid these they had to go most circumspectly. The "honey pot" is a sort of quicksand in which sand is replaced by slime—a bottomless quagmire which does its work with inexorable certainty and deadly speed. Both Reube and Will knew the strange, ominous olive hue staining the red mud over the mouths of these traps, but they knew, also, that all signs sometimes fail, so they took the boathook with them and prodded their path cautiously. At last, after wading a long, shallow lagoon, the bottom of which was thick with shells, and unfriendly to Reube's bare feet, they reached the runaway _Dido_. Breathless with anxiety, Reube climbed over the side, suddenly imagining all sorts of damage and defilement. But his darling was none the worse for her involuntary cruise. She had shipped some muddy water, but that was all that Reube could grumble at. Gandy had been too shrewd to do anything that might look like malice aforethought. In a trice the trim craft was bailed out and sponged dry. Then Will admired her critically from stem to stern, from top to keel, asking a thousand learned questions by the way, and feeling almost persuaded to build a boat himself. But even this interesting procedure came to an end, and at length the comrades threw themselves down on the cuddy roof, and realized that they were hungry. It was long past their dinner time. The tide was not yet at its lowest ebb, and it would be four or five hours ere they could hope to get the boats again afloat. The only thing they had to eat was a pocketful of dried dulse which Reube had brought with him. This they devoured, and it made them very thirsty. They decided to go ashore and look for a spring. Far away, on the crest of the upland, were some houses, at which they gazed hungrily, but the idea of leaving the _Dido_ and the pinkie for any such long jaunt was not to be entertained for a moment. As they again stepped out into the mud Will repeated the precaution which he had taken in regard to the pinkie. He put out the little anchor, and paid no heed to Reube's derision. To be sure, Reube was both owner and captain, but Will stood not on ceremony. Not far from high-water mark our thirsty explorers found a clear, cold spring bubbling out from beneath a white plaster rock. 
The water was very hard, carrying a great deal of lime in solution, and Will lectured learnedly on the bad effect it would have upon their stomachs if they drank much of it. As usually happens, however, this theorizing had small force against the very practical fact of their thirst. So they drank till they were perfectly satisfied, and were afterward none the worse. This, Will insisted, was thanks to the abundance of sorrel which they found amid the grass near by, whose acid was kind enough to neutralize the lime which they had swallowed. "But I say," urged Reube, "there are folks back yonder who drink water like this all their lives. The wells in this plaster belt are all hard like this, and some of the people who drink from them live to over ninety." "That proves nothing," said Will, "except that they are a long-lived stock. If they had sense enough to go somewhere else and drink soft water they might live to over a hundred!" Reube cared little for argument, always finding it hard to know whether Will was in earnest or not. He lazily changed the subject. "By the way," he remarked, "now's just the chance to visit the cave at the end of the Point!" "Cave!" cried Will, jumping up from the grass. "What cave? How can there be a cave round here without me knowing it?" "Why, I only heard of it myself last fall," said Reube. "You see, the mouth of it isn't uncovered till near low water; and nobody comes near this point at any time, there being nothing to come for, and the shoals and eddies so troublesome. I've sailed round here a good deal at high and half tide, but no one comes near it when tide's out. You see all the broken rocks scattered away out across the flats from the Point. And as for the "honey pots" between them—well, old Chris Boltenhouse, who told me all about the place last fall, said they were a terror. You couldn't step without getting into one. Chris also told me that the Acadians, at the time of their expulsion, had used the cave as a hiding place for some of their treasures, and that when he was a boy quite a lot of coin and silver ornaments had been found there." "Queer, too," muttered Will, "how things like that drop out of people's minds, come back, and are forgotten again! Well, let's look into the hole while we've got time;" and the two ran hastily to the narrow end of the turf. Over the slippery rocks below tide mark they had to move more deliberately, but in a short time they reached the foot of the promontory and stood on the verge of the flats not half an hour above low water. Very villainous indeed looked the flats, with the olive-hued menace spread over them on every hand. But there was no sign of a cave. Scanning the rocks minutely, our explorers skirted the whole front of the headland, but in vain. Then they started to retrace their steps, inveighing against the falsity of traditions. But now, their faces being turned, the rocky masses took on for them a new configuration, and they discovered a narrow strait, as it were, behind a jutting bowlder. It was a most unlikely-looking place for a cave entrance, but Will poked his nose into it curiously. The next moment he shouted: "Found!" Reube sprang to his side. There, behind the sentinel rock, was a narrow, triangular opening of about the height of a man. Its base, some four feet wide, was thickly silted with mud, and its sides dripped forbiddingly. Will stepped inside, and then turned. "It's darker than Egypt!" he exclaimed. "How are we going to explore it without a light?" 
"Ah," said Reube in tones of triumph, "I've got ahead this time, Will! I happened to bring a whole bunch of matches from home in my pocket to supply the _Dido's_ cuddy. And I picked up this on the Point when you were running ahead in such a hurry." And he drew a sliver of driftwood pine from under his jacket. "Good for you, old man!" cried Will, joyously. In a second or two the sliver was ablaze, and the explorers plunged into a narrow passage whose floor sloped upward swiftly. [Illustration: Will marched ahead carrying the torch.] CHAPTER V. A Prison House. IN their eagerness they forgot to look around before entering the cave. They forgot to look at the tide, which had already turned and was creeping swiftly over the treacherous levels. They forgot everything except that they were in the cave where once undoubtedly had been Acadian treasures, and where, as each dreamed in his heart and denied on his lips, some remnant of such treasures might yet lie hidden. Will marched ahead carrying the torch and peering with eager enthusiasm into every crevice. The cave was full of crevices, but they were shallow and contained nothing of interest but some fair crystals of selenite, which gleamed like diamonds in the torchlight. A few of these Reube broke off and pocketed as specimens. The cave widened slowly as it ascended, and the <DW72> of its floor kept it well drained in spite of the water ceaselessly dripping from roof and walls. Its shape was roughly triangular, and our explorers sometimes bumped their heads smartly in their haste. Presently they reached a point where a narrow gallery ran off from the main passage. Which to take was the problem. "It seems to me," said Reube, "that if there was any of the old Acadians' stuff here it would be most likely to be hidden in the smaller passage." "Acadians' stuff!" sniffed Will, sarcastically. "A lot of that we'll find!" But, none the less, he acted on Reube's suggestion, and led the way up the side gallery. After running some twenty-five feet the gallery turned a corner and ended in a smooth, sloping face of rock. There was no sign of crevice or hiding place here. Across the sloping face of the rock there ran a ledge about a foot wide some five or six feet above the floor, and the roof of the gallery at this point ascended steeply to a narrow and longish peak. "No risk of bumping our heads here," said Will, as he flung the torchlight along the ledge and showed its emptiness. "Better hurry back and try if we can't finish the main cave before the light goes out," said Reube, pointing to the pine sliver, already more than half consumed. Shielding the flame with his hand to make it burn more slowly, Will led the way with quick steps back to the larger gallery. This now became more interesting. Its walls were strewn with most suggestive-looking pockets, so to speak, full of silt and oozy _debris_, into which Will and Reube plunged their hands hastily, expecting to find a coin or a silver candlestick in every one. So fascinated were they by this task that they paid no heed to the torch till it burned down and scorched Will's fingers. He gave a startled cry, but had presence of mind enough not to drop it. To make it last a little longer he stuck it on the point of his knife and then exclaimed, in a tone of disappointment: "Reube, we must get out of this while the light lasts—and that'll have to be pretty quick!" "Rather!" assented Reube. "Hark!" 
The word was barely out of his mouth before the two lads were running for the cave mouth, their heads bent low, their hearts beating wildly. The sound which they had caught was a hollow wash of waves. In a few seconds the torch went out, but there was a pale, glimmering light before them, enough to guide their feet. This puzzled them by its peculiar tone, but in half a minute more they understood. It came filtering through the tawny tide which they found seething into the cave's mouth and filling it to the very top. Will gave a gasp of horror, and Reube leaned in silent despair against the wall of the passage. "The tide will fill this cave to the very top, I believe," said he. "Yes," answered Will, in a voice of fixed resolve; "there's nothing for it but to try a long dive right out through the mouth and into the rocks. We may get through, and it's our only chance!" "Go on, then, Will. Hurry, before it's too late! And—have an eye to mother, won't you?" Here a sob came into Reube's voice. "You know I'm a poor swimmer and no diver. Good-bye!" and he held out his hand. But Will was coolly putting on his coat again. "I forgot that," said he, simply. "Well, we'll find some other way, dear old man. Bring along your matches;" and he turned back toward the depths of the cave. For answer Reube merely gripped his arm with a strong pressure and stepped ahead with a lighted match. He could not urge Will to carry out the plan just proposed because in his heart, for all his confidence in Will's powers as a swimmer, he could not believe it feasible. He saw, in imagination, his comrade's battered body washing helplessly among the weedy and foaming rocks; while in the cave, for all the horror of it, there would certainly be some hours of respite—and who could say what they might not devise in all that time? He had a marvelous faith in Will's resources. In grim silence, and husbanding every match with jealous care, they explored the main cave to its end. Its end was a horrid, round, wet hole, a few feet deep, and not large enough to admit them side by side. They looked each other fairly in the eyes for the first time since that one glance when they had learned that they were entrapped. Reube's eyes were stern, enduring—the eyes of one who had known life long. The boy had all gone out of them. Will's eyes looked simply quiet and kind, but his mouth was set and his lips were white. "This is just a rat hole, Reube," said he. "We won't stay here anyway. Seems to me it would be better to have room to stand up and meet it like a man." "Yes," replied Reube, his voice choking with a sort of exaltation at his comrade's courage; "we'll go back to the little gallery with the high roof. We'll get up on that ledge and we'll fight it out with the water to the last gasp, eh? It's pretty tough—especially for mother!" "Well," said Will, with a queer, low tone of cheerfulness which seemed to his friend to mean more than cries and tears, "when I think of mother and Ted it sort of comes over me that I'd like to say my prayers—eh?" and for a minute or two, standing shoulder to shoulder, he and Reube leaned their faces silently against the oozy rock in the darkness. Then, lighting another match, they made all haste possible back to the side gallery, ascended it, and climbed upon the ledge. Hardly had they got there when they heard the tide whispering stealthily about the entrance of the passage. They felt that it was marking them down in their new retreat. 
When the next match blazed up—for they could not long stand the darkness with that creeping whisper in their ears—Will gazed steadily at the peak of the roof above his head. The match went out. "Another!" he cried, in a voice that trembled with hope. "What is it?" asked Reube, eagerly. "Roots!" shouted Will, leaping to his feet. "Tree roots coming through the roof up there! We must be near the surface, and there is evidently a fissure in the rock filled up with earth. We'll dig our way out with our knives and our fingers yet!" "But there are no trees on the Point," urged Reube, doubtfully. "Thunder, Reube! but can't there be old roots in the soil?" cried Will, impatiently. "Dig, man, dig!" And he began clawing fiercely at the earth above his head. Reube aided him with fervent energy, and the earth, though hard and clayey, came down about them in a shower. Presently they could reach no farther up. "We must cut footholds in this rock," said Will. The rock was plaster, but hard, and this took time. When it was accomplished they again burrowed rapidly toward the surface and air and light. They were working in the dark now, because with the rise of tide in the cave the air was growing close and suffocating. Three times they had to cut new footholds in the rock. They toiled in silence, hearing only each other's labored breath and the falling of earth into the water beneath them. The tide was now crawling over the ledge where they had first taken refuge. There it stopped; but this they did not heed. The fear of suffocation was now upon them, blotting out the fear of drowning. Their eyes and ears and nostrils were full of earth. They worked with but a blind half-knowledge of what they were doing. All at once there came a gleam of light, and Reube's hand went through the turf. He clawed at the sod desperately, and a mass of it came down about their heads. It troubled them not. There was the clear, blue sky above them. A sweet wind caressed their faces. They dragged themselves forth and lay at full length on the turf with shut eyes and swelling hearts. CHAPTER VI. The Blue Jar. IT was some minutes before either spoke. All they knew was that they were once more in the air and light. Then, with a start, Reube sat up and looked about him. He looked, of course, for the _Dido_. To his inexpressible relief the cherished craft was there in plain sight, riding safely at her anchor, some fifty yards from shore. And there, farther out, rode the pinkie. Reube blessed his comrade's foresight. "Will, where would the boats be now?" said he, "if you hadn't insisted on anchoring them?" Will sat up and surveyed the situation, thoughtfully clearing the mud from his eyes with little bunches of grass. "It was just as well we anchored them," he assented. "And now that I've got my wind, I think I had better swim out to the _Dido_ and bring her in for you. I feel as if I wanted a bath anyway; don't you?" "I'll be with you in half a minute," said Reube. "But first I want to explore the cave a little more. It seems to me we came away in something of a hurry!" He let himself cautiously down in the hole, feet first. Will stopped his undressing and stared at him in amazement. "Are you crazy?" he cried. "Do come out of that beastly hole! The idea of it makes me quite ill!" "O, I'm not going far," said Reube, "and I won't be gone long, either. Don't be alarmed." As his head disappeared Will ran to the hole and looked down, anxiously and curiously. 
He saw Reube groping in a crevice filled with soft earth, about three feet below the surface. "What in the world are you after, Reube?" he inquired. "That!" replied Reube the next instant, holding aloft triumphantly a small blue jar of earthenware. "Take it, and give me a lift out of this!" Will deposited the old jar reverentially on the turf, and turned to help Reube up. He half expected that the jar would vanish while his back was toward it; but no, there it was, plain and palpable enough. It had a cover set into the rim, and sealed around the edges with melted rosin; and it was heavy. Thrilling with suppressed excitement, Reube and Will sat down with the jar between them, and Reube proceeded to chip away the rosin with his knife. Will gazed at the operation intently. "Probably some good old Evangeline's pet jar of apple sauce!" said he. Reube ignored this levity, and chipped away with irritating deliberation. At last off came the cover. As it did so there was a most thrilling jingling within, and the boys leaned forward with such eagerness that their heads bumped violently together. They saw stars, but heeded them not, for in the mouth of the jar they saw the yellow glint of a number of gold coins. "Well, dreams do sometimes come true!" remarked Will. And Reube, spreading out Will's coat, which lay close at hand, emptied upon it the whole contents of the jar. It was coin—all coin! There were a few golden Louis, a number of Spanish pieces, with silver crowns and _livres Tourtnois_, amounting, according to such hasty estimate as the boys could make, to some five or six hundred dollars. [Illustration: It was coin—all coin!] "That'll be three hundred dollars apiece," said Reube, with eyes sparkling; "and I'll be able to take mother to Boston and go to college too!" "Three hundred dollars apiece!" said Will. "Indeed, I don't see what I had to do with it. You found it. You had nerve enough to take notice of it when you were more than three quarters dead. And you went back and got it. I've no earthly claim upon it, old man." Reube set his jaw obstinately. "Will," said he, "we were exploring the cave in partnership. If you had found the stuff, I'd have expected my share. Now, you've got to go shares with me in this, or I give you my word our friendship ends!" "O, don't get on your dignity that way, Reube," said Will. "If I must, why, I suppose I must! And if I can't take a present from you, I don't see whom I could take one from. But I won't take half, because I didn't do half toward getting it, and because you need it enough sight more than I do. A couple of years ago I'd have spoken differently. But I'll divide with you, and as to the proportions, we'll settle that on the way home. Now I'm off for the _Dido_!" And having thrown off his clothes as he talked, he ran down the bank and plunged into the sea. "I'll let you off with one third," shouted Reube after him, as he sat on the bank and watched. "Not one penny less!" "All right," spluttered Will, breasting a white-crested, yellow wave. In a few minutes he was on board the _Dido_. Pulling up the anchor and hoisting the sail, he brought her in beside a jutting plaster rock which formed a natural quay. Then he resumed his clothes, while Reube took his place at the helm. The wind being still down the bay and the tide on the turn, they decided not to attempt the all-night task of beating up against it. It took them, indeed, two tacks to reach the pinkie. Will went aboard the latter craft, leaving Reube in his darling _Dido_. 
The two boats tacked patiently back and forth, in and out of the wide cove, till they gained the shelter of a little creek under the lea of Wood Point. Here they were secured with anxious care. Then Will and Reube started for home by the road, pricked on to haste by the thought of how their mothers would be worrying, by the sharp demands of their empty stomachs, and by the elating clink of the coins that filled their pockets. When they reached Mrs. Dare's cottage Reube rushed in to relieve his mother's fears, for she had indeed begun to be anxious. Will hurried on toward Frosty Hollow, munching a piece of Mrs. Dare's gingerbread by the way. As he trudged forward cheerfully, he was overtaken by an express wagon bound for "the Corners." The driver offered him a "lift," as the phrase goes about Tantramar. It was none other than Jerry Barnes, the master of the red bull, and the owner of the pinkie which Will and Reube had so boldly appropriated. Will told him the whole story, omitting only the discovery of the jar of coin. He and Reube had agreed to keep their counsel on this point, lest some should envy their good luck and others doubt their story. "I hope," said Will, "you are not put out at our taking the pinkie?" "I hope," grinned Barnes, "you're not put out at old Ramses for bein' so oncivil in the pastur'! But as for the pinkie, of course you did quite right. Only I'll want you chaps to get her back to the creek by to-morrow mornin's tide, as I'm goin' to drift for shad to-morrow night!" "Of course," said Will; "we'll go after her the first thing in the morning. That's just what we planned on." "That there's a smart boat Reube Dare's built. And he's a right smart lad, is Reube," remarked Jerry Barnes. "There's where your head's level," agreed Will, warmly. "And do you know when he's goin' to drift?" asked Barnes. "He won't be quite ready for to-morrow night," said Will. "But we count on getting out the night following." "Well, now, a word in your ear!" went on Barnes, leaning over confidentially. "I've no manner of doubt Mart Gandy cut the _Dido_ loose. And now Reube had better keep his eye on his nets after the boats get away to-morrow night. I shouldn't wonder a mite if Gandy'd try slashing 'em, so as to give Reube an unpleasant surprise when he starts out for the _Dido's_ first fishing." "I say," said Will, "I never thought of that! We'll 'lay' for him, so to speak, and give him a lesson if he tries it on." "A nod's as good as a wink," remarked Jerry Barnes, mysteriously, as he set Will down at Mrs. Carter's door. Mrs. Carter had not been at all anxious. Ever since Will's reclamation of the new marsh she had had an implicit faith in his ability and judgment. She had imagined that he was spending the day with Reube. She rather lost her dignified self-control over Will's story of the adventure in the cave, and she was filled with girlish excitement over the finding of the old blue jar. "Of course, dearest boy," said Mrs. Carter, "you did quite right to want Reuben to take all the treasure, since he alone found it. But where would he have been but for you? Reuben is a fine boy, if his grandfather didn't amount to much. He takes after his mother's family the most. I'm glad he made you take a share of these lovely old coins." "We'll be able to have some sort of a jolly lark on the strength of it when Ted comes home," said Will. "We might take a run to Boston!" suggested his mother. "I want you boys to see the city; I want to see it myself. And I might—Mrs. 
Dare, you know, might want a friend near her if the operation proves at all serious, which I hope it won't." "You dear, that's just like your thoughtfulness!" cried Will, jumping up and kissing her. And so it was agreed upon, subject, in a measure, to Ted's assent. CHAPTER VII. Mart Gandy Hacks the Shad Net. DURING the next forenoon the _Dido_ and the pinkie were sailed up to their old berths in the creek. That night all the boats went out except the _Dido_, fading like ghosts into the misty, half-moonlit dusk. Reube was very indignant at the thought that Gandy might attack his shad net, and vowed, if he caught him at it, to clap him in jail. Mrs. Dare had made the boys take a pair of heavy blankets with them, and, stretched on these, they lay along the seat in the _Dido's_ stern, just under the shelter of the gunwale. The reel, with its dark burden of net, rose a few feet away, and stood out black but vague against the paler sky. Close at hand lay the wharf, like a crouching antediluvian monster, with its fore paws plunged into the tide. From where they lay our watchers commanded a view of the surrounding levels by merely lifting their heads. In low but eager tones they discussed the Boston trip planned for the coming autumn, and Reube squeezed his comrade's hand gratefully when he heard what company he and his mother would have. "I can never tell your mother my gratitude," said he. "With her there my anxiety will be more than half gone." "I'm so glad muzz thought of it!" said Will. "I'm sure it would never have entered my heedless head. And yet it is just the thing for us to do." Another subject of their excited colloquy was the disposal of those old coins. If deposited at the Barchester Bank they would certainly arouse comment and set all sorts of romantic stories going. But presently Will thought of his friend Mr. Hand, to whom all things in the way of financial management seemed possible. It was decided that on the very next day Will should take the whole store to him and get him to send it away for conversion into modern currency. "And he'll be able to see that we don't get cheated," added Will. "I fancy some of those coins will be wanted by collectors, and so be worth a lot more than their face value." "I tell you, Will," exclaimed Reube, "I can't even yet quite get over my astonishment at the way you swear by old Hand; or, perhaps I should rather say, at the way the old fellow seems to be developing qualities of which he was never suspected until you begun to thaw him out." "Indeed," said Will, warmly, "Mr. Hand is fine stuff. He was like a piece of gold hidden in a mass of very refractory ore. But Toddles melted him down all right." In a short time conversation flagged, and then, listening to the lip-lip-lipping of the softly falling tide and the mellow far-off roar of the waters pouring through an _aboideau_, both the watchers grew drowsy. At last Will was asleep. Even Reube's brain was getting entangled with confused and fleeting visions when he was brought sharply to himself by the queer sucking sound of footsteps in the mud. He raised his head and peered over the gunwale. There was Mart Gandy within ten paces of the net reel. He had come by way of the dike. In his hand gleamed the polished curve of the sickle with which he was accustomed to reap his buckwheat, and Reube's blood boiled at the thought of that long, keen blade working havoc in the meshes of his cherished nets. 
Gandy marched straight up to the reel, raised the sickle, and slashed viciously at the mass of woven twine. Ere he could repeat the stroke a yell of wrath rang in his ear and Reube was upon him, hurling him to the ground. His deadly weapon flew from his grasp, and he was too startled to make much resistance. The weight of Reube's knee on his chest, the clutch of Reube's strong fingers at his throat, took all the fight out of him. He looked up with angry and frightened eyes and saw Will standing by, a meaning smile on his lips and a heavy tarred rope's end in his hand. Reube rubbed the culprit's head rudely in the mud, and then relaxed the grip upon his gasping throat. "I cannot pound the scoundrel now that I've got him down," said he, turning his face toward Will. "What shall we do with him? You can't lather a chap that doesn't resist and that has his head down in the mud. It's brutal!" "We'll tie his hands to the reel and give him a taste of this rope's end," suggested Will, judiciously. "I don't exactly like that either," said Reube, rubbing his captive's head again in the slime. "It's too much like playing hangman. He deserves the cat-o'-nine-tails if ever a scoundrel did, but I don't like the dirty work of applying it. We'd better just take him to jail. Then he'll get a term in the penitentiary, and be out of the way for a few years. Fetch me that cod line out of the cuddy, will you?" By this time Mart Gandy had found his voice. That word "penitentiary" had reduced him to an abject state of terror, and he began to plead piteously for mercy. "Lick me! Lick me all you like!" he cried, in his queer, high voice. "I kin take a hidin'; but don't send me to the penitentiary! What'd the old man do, as hain't got his right senses no more? An' the old woman'd jest plumb starve, for the gals they ain't a mite o' good to work. Le' me off this time, Reube Dare, 'n' I declare I won't never do it ag'in!" Mart's imploring voice more than his words made Reube weaken in his purpose. As for Mart's promise, he put no faith in that, and marked on Will's face an unrelenting grin. Nevertheless he said: "There's something in what the rascal says, Will. If Mart goes to the penitentiary his family's going to suffer more than he. I've a mind to let him off this time, after all." "Well," grunted Will, "just as you say. But it would be nothing short of iniquitous to let him off altogether. You'd better give him a good ducking, to let him know you're in earnest, anyway." Reube pondered this a moment. "Mart Gandy," he said, sternly, "I'm going to let you off this time with nothing more than a ducking, to fix the circumstance in your mind. But remember, if I find you again at any of your old pranks I'll have a warrant out against you that very day! And I've got all the evidence needed to convict you. Now get up!" And he jerked the lanky and bedraggled form to its feet. Mart, with the fear of prison walls no longer chilling his heart, had recovered himself during this harangue, and his eyes gleamed with a furtive, half-wild hate. Still he made no resistance. The sickle lay far beyond his reach, and he knew he was physically no match for either Reube or Will. He was led to the very edge of the steep, slippery incline of the channel, wherein the tide had dropped about fifteen feet. Will snatched a coil of rope out of the boat. "Can you swim?" he asked, curtly. "No," said the fellow, eyeing him sidewise. "He is lying," remarked Reube, in a businesslike voice. 
"Well," said Will, "if he isn't lying we'll fish him out again, that's all." Just as he was speaking, and while Gandy's eyes were fixed upon his face with an evil light in them, Reube stepped forward and executed a certain dexterous trip of which he was master. Gandy's heels flew out over the brink, his head went back, and, feet foremost, he shot like lightning down the <DW72> and into the stream. In a moment he came to the surface and began floundering and struggling like a drowning man. "He's putting that all on," said Reube. "Maybe not," exclaimed Will. "Better throw him the end of the rope now." Reube smiled, gravely, but obeyed and a coil fell almost in Gandy's arms. The struggling man seemed too bewildered to catch it. He grasped at it wildly, sank, rose, sank, and rose again. Will prepared to jump in and rescue him. But Reube interposed. "No, you don't," said he, coolly; "not without one end of this rope round your waist and me hanging onto the other end!" "Make haste, then," cried Will, in some anxiety. In a few seconds the rope was knotted firmly about Will's waist, and he sprang into the water. Even as he did so the apparently drowning man disappeared. He came up again many feet away, and, swimming with wonderful speed, gained the opposite bank. He clambered nimbly up the <DW72> and started at a run across the marsh. Reube, with derisive compliments, helped the dripping and disgusted Will to shore again. "I saw his game," said he, while Will wrung out his clothes. "He's just like a fish in the water, and he thought he'd make believe he was drowning, and so manage to drag you down without getting blamed for it. But he knew the game was up when he heard what I said and saw you had the rope tied to you." "Right you are this time, old man," said Will. The sky had cleared perfectly, and in the radiant moonlight Reube's skillful fingers quickly mended the net. The cut was not a deep one, as the blade had been stopped by two of the large wooden floats with which the net was beaded. The mending done and the net made ready for the next night's fishing, the boys turned their faces toward the uplands to seek a few hours' sleep at Mrs. Dare's. Meanwhile Mart Gandy had never ceased running till he got behind an old barn which hid him from the scene of his punishment. Then he turned and shook his long, dark finger in silent fury toward the spot where his antagonists were working. When he reached home he crept to a loft in the shed and drew out a long, heavy musket, once a flintlock, which he had altered to a percussion lock, so that it made an effective weapon for duck shooting. This gun he loaded with a heavy charge of powder and a liberal proportion of buckshot. He muttered over his task till it was done to his satisfaction, and then stole off to sleep in the barn. CHAPTER VIII. A Midnight Visitor. REUBE and Will did not go shad fishing the next night, after all. A fierce sou'wester blew up toward evening, and drifting for shad was out of the question. Every boat was made secure with extra care, and all night the fury of an unusually high tide put the Tantramar and Westcock dikes to the test. They stood the trial nobly, for well had their builders done their work. The Dares' wide-winged cottage, set in a hollow of the hill, was little jarred by the gusts that volleyed down upon it. Having seen the _Dido_ well secured behind the little wharf, Reube felt altogether at ease. "Are you quite sure," asked Mrs. Dare that evening, "that Gandy won't make another attack on the shad boat or the net?" 
"O yes, mother," answered Reube; "I'm no longer anxious on that score. Mart feels madder than ever, I've no doubt, and I think he'd have tried to drown Will last night if I had left him half a chance. But he is just mortally afraid of the penitentiary, and, now he knows we can prove a case against him, I imagine he'll bottle his wrath for a while." "Well, dear, I hope you are right," said his mother. "But I must say I think Mart Gandy is more dangerous than you give him credit for being. I want you to be very careful how you go about alone at night. I know that blood, and how it craves for vengeance. Be watchful, Reube, and don't make the mistake of undervaluing your enemy." "No, mother, I won't," answered Reube. "I know that wise head of yours is generally in the right. If you think I ought to keep my weather eye open, why, open I will keep it, I promise you. And now it's my turn! What were you doing out so late alone, when it was almost dark, with those poor eyes that can't see much even in broad daylight?" "I know it was imprudent, Reube, and I did have some trouble getting home," confessed Mrs. Dare. "But, dear, I couldn't help it. I heard quite late in the afternoon that Jim Paul was on a spree again, after keeping steady for a whole year. He has been drinking hard for a week—drunk all the time—and his wife sick in bed, and nothing to eat in the house. I went right down with a basket, and I was glad I went. The children were crying with hunger. And such a house! And Mrs. Paul lying on the floor, white as a ghost, where she had just fallen! She had got out of bed and tried to make some porridge for the children—there was nothing in the house but a little corn meal. Her husband was out, and she was trembling with fear lest he should return in a drunken frenzy and beat them all. Poor woman! And Jim Paul is a good husband and father when he is sober. You see, Reube, it took me a long while, blind as I'm getting, to find the children and straighten things up." "Well, mother, this autumn, if all goes well," said Reube, cheerfully, "we'll get the poor eyes fixed as good as new. And then you may stay out late sometimes without me scolding you." That night, when Reube and his mother were sleeping soundly, they were roused by a crash which the roaring of the wind could not drown. It seemed to shake the whole house. Reube sprang out of bed. As he dragged on his trousers his mother came to the door with a lamp in her hand. "What is it, mother?" he asked, rubbing his eyes. "Some one has broken in the outer door," replied Mrs. Dare, calmly. "He is in the back kitchen now, but the inner door is bolted." Reube took the lamp from her hand and started down stairs. "O, my boy, what are you doing? You have no weapon. O, if only we had—" But Reube interrupted these words, which now had an all-unwonted tremor in them. "Nothing else to be done, mother," he said, quietly. "Don't be scared! He won't bother me, whoever he is!" And as his mother looked at him she felt strangely reassured. Or, perhaps it was something in his voice which satisfied her. She snatched up her big Paisley shawl, flung it over her nightgown, and followed Reube at a discreet distance. Reube opened a door leading from the hall to the inner kitchen. At the same moment the door between the two kitchens was battered in with a loud crash, and there entered a terrifying apparition. It was Jim Paul, drunk, and with a wild glitter in his bloodshot eyes. 
His face and huge, burly form were stained with the blood of various fights, and he carried in his hand the ax with which he had broken down the doors. Jim Paul's appearance was well calculated to daunt an older heart than Reube's, but Reube's heart was of a dauntless fiber. A cold, steady light seemed to shine from his pale eyes as they met the fierce and feverish gaze of the intruder, who promptly stopped and glanced aside uneasily. Reube's mouth and broad brow, usually so boyish, looked as grim as iron as he stepped up coolly to the drunken giant and asked him what he meant by breaking into the house. Paul hesitated, beginning to quail before the stronger will that confronted him. "Give me that ax!" said Reube, quietly. Paul handed over the weapon with most prompt and deferential obedience, and began to stammer an inarticulate apology. Reube kept eyeing him without another word, and Paul grew anxious and worried under the gaze. At last he plunged his great hand deep down into his trousers pocket and drew forth a lot of silver and copper coins. These he pressed Reube to accept, presently breaking into maudlin protestations of esteem. Reube turned away abruptly, having made up his mind what to do with his troublesome guest. He set the lamp on a shelf, and then took the money which Paul still held out. "I'll take care of it till you're sober enough to put it to its proper use," said he. The big fellow was by this time on the verge of tears, and ejaculating a host of promises. He wouldn't touch another drop, and he'd mend both the doors so they'd be just as good as new; and he'd never forget Reube's goodness in not having him taken up for a burglar, and he'd go right home to his poor family. "No you don't, Jim!" interrupted Reube at this point. "You'll stay right here where I put you for the rest of this night. And you'll go home to your family in the morning if you're sober enough, but not otherwise." At this Paul began to protest. But paying no more heed to his words than if he had been a naughty child, Reube led him to a small room opening off the kitchen. The window of this room was a tiny affair through which a man of Paul's bulk could not manage to squeeze. Reube got a couple of heavy buffalo robes, spread them on the floor, and told Paul to lie down on them. Then, bidding him sleep soundly and feel better in the morning, Reube locked him in and went to bed. But he took the precaution to carry the ax up stairs with him. His mother said simply: "You managed the poor fellow beautifully, my dear boy. I was glad you were not forced to be rough with him." Reube smiled inwardly at his mother's magnificent faith in his powers, but all he said was: "Good night, mother dear. He's all right where he is now, and I'll have a talk with him in the morning." In the morning Paul had fairly sobered up. He was genuinely ashamed of himself. After making him eat some breakfast Reube gave him back his money and sent him home. As he was leaving the house he turned to say something, but seeing Mrs. Dare within earshot he hesitated. Reube followed him to the gate. There he stopped and said: "I know I was just crazy drunk las' night, but I kinder reck'lect what happened. When we wuz all drinkin' down to Simes's, an' I'd licked three or four of the fellers, Mart Gandy says, says he, 'There's a lad hereabouts as yer cain't lick, Jim Paul, an' him only a kid, too!' In course I fires up, and says I, 'Show him to me, an' I'll show yous all!' 
Some more words passed, till I was that riled I was blind, an' then Mart Gandy says, says he, 'Yer cain't lick Reube Dare!' Off I started to once't, an' you know's well's I do that I'd never 'a' lifted a finger agin this house ef I hadn't bin jest blind crazy! But I'll remember what I might 'a' done ef you hadn't jest bin able to make me mind; an' 'fore God, I'll try to keep straight. But you mark my words. Look out fer that ther Gandy! He's up ter mischief, an' he ain't the one to stick at anything." "Thank you, Jim," answered Reube, holding out his hand. "We'll say no more about last night, but I'll remember your warning, and I want you to remember the promise you've just made me!" CHAPTER IX. The Dido's First Fishing Trip. JIM PAUL'S warning made an impression on Reube's mind. When Will Carter heard of it he exclaimed: "That fits in with my own ideas exactly, Reube! There's some alien streak in that Gandy's blood that makes him more likely to knife you in the back than fight you to your face; and that being a kind of enemy you don't understand, you've got to be all the more careful, old man." "Well," said Reube, thoughtfully, "what is one to do about it anyway?" "Why, look sharp for a chance to get the scoundrel locked up, even if his family does need him," answered Will. "And, meanwhile, keep your eyes open after dark, and take no chances. Carry a good heavy stick, too." "All right!" laughed Reube. "But I think these hands of mine are good enough for Mart, any day." That night proving fine with a fair, light wind down the bay, Reube and Will took the _Dido_ out for her first drift. In the cuddy were stowed some extra clothes in case of a cold bay fog rolling up, and several thick blankets, and enough bread and meat and cold tea for a couple of days in case the trip should be unexpectedly prolonged. Will insisted also on a generous sheet of Mrs. Dare's gingerbread and a brown stone jug of lime-juice ready mixed. He had a care for material comforts. But as for Reube, he was in such a state of exalted excitement that he could think of nothing but shad and the _Dido_. Will was an excellent shot—famous, indeed, all about that region for his habit of going partridge shooting with a little rifle instead of the orthodox shotgun. He now took his beloved little rifle with him in the hope of bagging some rare specimen of gull or hawk. He little dreamed that he might turn out to be hunted instead of hunter on that trip. By the time all preparations were complete, and the brown nets, beaded with wooden floats and leaden sinkers, unwound from the reel and neatly coiled in the _Dido's_ stern, and the great half hogshead amidships filled with water to serve as ballast, the rest of the shad fleet were dropping one by one out of the creek. Like great pale moths their sails floated over the marsh, following the windings of the creek, and vanishing into the silvery night. The _Dido_ followed with Reube at the helm. She sailed swiftly and soon overtook her slower rivals. Only the little red-and-white pinkie preserved her distance, and Reube had to acknowledge, reluctantly, that she was as speedy as the _Dido_. When the fleet reached the open every boat headed down the bay, at the same time diverging from its neighbor. The object of this latter movement was to get the utmost possible room for the nets; of the former to get as far down the bay as possible before turning with the tide to drift back. The fishing was all done on this backward drift. 
The _Dido_ gradually lost sight of all her rivals but the pinkie, which hovered, a faint white speck, far to starboard. The five hours' sail brought our young shad fishers past Cape Chignecto, and into wider waters. It was rough off the cape after the turn of tide, and the _Dido_ pitched heavily in the steep yellow waves. Neither Reube nor Will had ever before been so far down the bay, and in their curiosity over a certain strange formation of the cliffs they sailed somewhat close to the shore. Will, from his place on the cuddy, was expatiating learnedly on the distorted strata before them, when suddenly he broke off in the midst of a word, and yelled: "A reef right ahead! Bring her about, quick!" But Reube had seen the danger at the same instant. With one hand he jammed the helm hard down, and with the other loosed the main sheet, at the same time shouting to Will: "Let go the jib!" Will sprang to obey. But the stiff new rope, pulled taut during the long run and shrunken hard by the spray, would not yield at once even to his strong fingers. It had got jammed fast in some way. Meanwhile the _Dido_, broadside on and beaten mightily by the waves, was heeling as if she would turn over in the trough. The jib pulled terrifically, and the water hissed above the cleaving gunwale. "Quick! Quick!" yelled Reube; and Will, snatching his knife from his belt, severed the rope at a slash and released the sail. Gracefully the _Dido_ swung up, righted herself, and bowed on an even keel. "That was something of a close shave," remarked Reube. "It was," said Will, studying with angry eyes the rope which had baffled him. After this they took a long tack which brought them once more into smoother waters above the cape. As the sun got higher the wind fell lighter, and at length Reube announced that it was time to get out the net. The mainsail was hauled down, and under a close-reefed jib the _Dido_ lay to while the net was slowly and carefully paid out over the stern. The helm was so delicately manipulated that the floating net was not allowed to bunch, but formed its line of blocks into a wide, shallow crescent with the _Dido_ at one horn. This accomplished, the remaining bit of canvas was furled and the long, slow process of "drifting" was fairly begun. The tide ran fast, and the shores a half mile distant slipped smoothly by. The rudder swung loose while Will and Reube ate their breakfast, and congratulated themselves on the sailing qualities of the _Dido_. After breakfast they basked in the sweet June sun, told stories, wondered idly if the net was capturing anything, grew sleepy, and at last began to get impatient. A great gray gull flew over, and Will raised his rifle. But he lowered it instantly. "I was on the point of dropping that poor old grayback," said he, penitently, "just for lack of something better to do." "I wondered why you were going to shoot it," said Reube, "when I knew it was no good as a specimen." "I say," exclaimed Will, a few minutes later, yawning, "this sun's getting mighty hot! How long have we been drifting?" "A little over two hours," replied Reube. "How long is one expected to drift?" asked Will. "O, say four, or maybe five," was the reply. "Well, as this is just a sort of trial trip and picnic," suggested Will, "I move we haul in the net and count our fish. Then we can sail round yonder point to a big creek I know of with a fine, shelving sand spit at its mouth. 
The sand is covered at high water; but about the time we get there it will be just right for you to go in swimming from. A swim will go fine this hot day, eh?" "All right!" assented Reube. He was himself consumed with impatience to see what was in the net. As the first two oars' lengths came over the side there was nothing, and the fishermen's faces fell. Then came the shining, silvery sides of a dozen shad, and they grew exultant. Then a small salmon, and they chuckled. Then two or three large jellyfish slipped through the meshes in fragments. And then the shad really began. It was a noble haul, and excitement ran high in the _Dido_. The huge tub amidships was nearly half full of the gleaming spoils by the time the last fathom of net came over the side; and there was also another and larger salmon to show. The water in the tub was thrown overboard, as the shad made sufficient ballast. "If the _Dido_ keeps it up like this she'll be as good as your diked marsh," cried Reube, gloating over his prizes. "Right you are!" said Will, heartily, washing his hands with vigor over the side. "And now for that swim. We've earned it, and we need it." Forthwith the sails were got up, and the _Dido_ made all haste for the swimming place which Will had indicated. She rounded the point, skirted the shore for nearly a mile, ran into the creek's mouth, and dropped anchor beside the tempting yellow sand spit. [Illustration: Then came the shining, silvery sides of a dozen shad.] CHAPTER X. Besieged on the Sand Spit. WILL lost no time in getting off his clothes. He felt hot and fishy, and the cool, tawny ripples allured him. Reube tested the anchor to see that the _Dido_ held fast, and then began more slowly to undress. The anchor had been dropped not more than thirty or forty feet from the sand spit, but the boat had swung off before the light breeze till the distance was increased to a score of yards. "That's quite a swim for me, Will," said Reube, doubtfully, eyeing the tide. "Nonsense! You can swim twice as far as that if you only think so," asserted Will with confidence. "By the way, I wonder what makes you such a duffer in the water. That's your weak point. I must take you in hand and make a water dog of you." "I just wish you would," said Reube. "I don't seem to really get hold of myself in the water. I have to work frightfully hard to keep up at all, and then I'm all out of breath in less than no time. Why is it, I wonder?" "Well," answered Will, "we'll see right now. You swim over to the bar yonder, and I'll stand here and watch your action. I fancy you don't use your legs just right." "It's too far. Pull her in a little way," urged Reube. Laughingly Will complied. He pulled on the rope till the _Dido_ was almost straight above the anchor. Then Reube slipped overboard with an awkward splash and struck out for the sand spit. His progress was slow and labored. His strokes made a great turmoil, but produced little solid result. Will's face wore a look of amused comprehension, but he refrained from criticism till the swimmer had reached his goal and drawn himself out panting on the sand. "How's that?" asked Reube. "O, it's all wrong! If it was anyone less obstinate than you he wouldn't keep afloat half a minute struggling that way," answered Will. "But wait a moment and I'll show you what I mean." With a graceful curve Will plunged into the water as smoothly as if he had been oiled. A few long, powerful strokes brought him to the spot where his comrade was standing. 
"Now," said he, "get in there in front of me when the water comes up to the lower part of your chest. You use your legs wrong, and your arms too. Your arms don't make a quarter the stroke they ought to, and your fingers are wide open, and your hands press out instead of down on the water too much. Keep your fingers together, and turn your palms so that they tend to lift you, instead of just pushing the water away on each side. And, moreover, finish your stroke!" "And what about my legs?" asked Reube, humbly. "Never mind them till we get the hands right," insisted Will. "Now lean forward slowly, with your back hollowed well and chin up, your arms out straight ahead, and straighten your legs. Right! Now round with your arms in a big, fine sweep, drawing up your legs at the same time. That's more like it. But your legs—you draw them up right under you with the knees close together. That's all wrong. Didn't you ever watch a frog, old man? As you draw up your legs spread your knees wide apart like one of those tin monkeys shinning up a stick. Try again. M-m-m! Yes, that's something like what I want. You see, with the knees doubled up wide apart they have their separate motions as you kick them out again. The legs press the water down, and so do some lifting. The feet push you ahead, and at the same time you thrust a wedge of water backward from between your legs as they come strongly together." "That's reasonable," assented Reube, practicing diligently. In a few minutes he had made a marvelous advance in his method. Will sometimes swam beside him, sometimes stood on the bar and criticised. All at once, in the midst of an encouraging speech he clapped his hands to his heart with a cry of pain, sank upon the sand, and called out sharply: "Come here quick, quick, Reube!" Reube remembered his lessons even in his anxiety, and with long, powerful strokes made his way swiftly to Will's side. As he landed Will straightened himself up with a grave smile, and held one his hand to draw Reube back from the water's edge. "I'm all right now," said he. "But what was the matter?" queried Reube, in impatient astonishment. "Why, just that," replied Will, suddenly pointing to the water. Reube turned and glanced behind him. "_Sharks!_" he almost shouted. And there, sure enough, were two black triangular fins cleaving the water where he had just been swimming. After staring for a moment or two in silence he turned again and met the inscrutable smile on his companion's face. He held out his hand. "I understand," said he. "If I'd got flurried in the water I would have forgotten the lessons you have just given me, and couldn't have got to shore fast enough." And in the love and admiration which glowed in his eyes Will read sufficient thanks. "Now the question is," mused the latter, "how we're going to get to the boat." "Seems to me we'd better stay right here for the present," said Reube, drily. "Yes," suggested Will; "and when the tide gets a little higher what then?" "Um!" said Reube, "I was forgetting this is not an honest island. This does certainly look awkward. But what do you suppose those chaps are doing, cruising to and fro right there? Are they just catching herring? Or are they after us?" "You would know what they were after if you had seen the way they streaked in here when they got a glimpse of you," responded Will. "I don't see what we're going to do about it," said Reube presently, after they had gazed at their dreadful besiegers in gloomy silence. 
"But there's something in the way of a weapon which we might as well secure anyway." And running to the other side of the sand spit he snatched up a broken picket which had been left there by the previous ebb. "It's better than nothing," he insisted. "Reube," said Will, "if we stay here it's all up with us pretty soon. We'll just make a dinner for those chaps. It seems to me I'd better take that stick you've got there and make a dash for the _Dido_. You know I swim wonderfully fast, and dive like a fish; and I can perhaps manage to jab the sharks with that picket, or scare them off by making a great splash in the water. If I succeed in getting to the _Dido_ I'll bring her over for you, and we'll fix the enemy with a couple of bullets." "No," said Reube, doggedly, grasping the other firmly by the shoulder. "You just wait here. We'll fight this thing out side by side, as we have fought things out before. Remember the cave, Will! And we won't fight till we have to. We're safe for a half hour yet anyway." "And then the distance between us and the boat will be all the greater," urged Will. "No, the wind's falling and it may turn and blow the _Dido_ over this way," insisted Reube. "See, the fitful little gusts now. Or one of the other boats may come in sight near enough for us to hail her. You never can tell what may happen, you know." Indeed, as a matter of fact, Reube was right. He could not tell what would happen. What actually did happen was neither of the things which he had suggested, and yet it was the most natural thing in the world. CHAPTER XI. Foiling the Sharks. SLOWLY the tide crept in upon the spit, and the strip of sand grew narrower. Those grimly patrolling black fins drew nearer and nearer as the bar became smaller. The gusts of wind grew more and more capricious, sometimes seeming as if they would actually swing the _Dido_ over to the rescue of the despairing prisoners; but this they refrained from doing. "She'll swing over to us yet," asserted Reube, confidently. "She isn't going to desert us in such a horrible scrape as this!" But Will made no reply. He was studying his tactics for the struggle which he felt was now close at hand. "You'd better give that stake, or picket, or whatever it is, to me, Reube," he suggested. "You'll have enough to do just swimming. I, being perfectly at home in the water, will be able to make the best use of it, don't you think? If I can manage to give each of those brutes a solid jab in the belly, maybe they'll get sick of their undertaking and depart." "All right," agreed Reube, though with some reluctance. And he handed over the sharp stick. "You'll have to fight for yourself and me too, that's all," he continued. "I'll make a fight anyway," said Will. "And I dare say I can drive them both off. In these well-stocked waters they can't be very hungry or very fierce." At last the strip of sand was not more than three or four feet wide and six inches above water. But though so narrow it was more than a hundred yards in length, extending like a sort of backbone up the entrance to the creek. About the middle it looked a foot or two broader than where the captives were standing. "Come up there where it is wider," said Reube. As they went those black fins kept scrupulously abreast of them, and they shuddered at the sight. At this point the opposite shore of the creek jutted out somewhat sharply toward the sand spit. Will cast his eye across the narrow channel. "What fools we are all this time!" he cried. 
"Why, we can easily swim across to land on this side before the sharks can get all the way around the shoal." "Can we?" inquired Reube, doubtfully. "Yes," said Will, "and the sooner the better. But now look, Reube; keep cool. Don't try to hurry too much. Take the long, slow strokes. And remember, I'll keep behind, and, if the brutes do get around too quick I'll keep them busy a minute or two, never fear. Then you can come to my rescue with one of those fence stakes yonder. Come on, now!" And side by side they slipped swiftly into the water. With long, powerful strokes they sped across the narrow channel that divided them from safety. Will, swimming at much less than his full speed, dropped almost a yard behind as soon as they were fairly started, and swam on his side so as to command a view of the water behind. The narrow ridge of yet uncovered sand, however, prevented him from seeing what took place when he and Reube slipped noiselessly, as they thought, into the water. Those black fins had turned on the instant, and were darting with terrific speed for the lower end of the sand spit. [Illustration: "I think we'll make it," he said to himself.] By the time our swimmers were fairly half way across, or perhaps a shade better, Will saw the fins come round the foot of the sand spit. "I think we'll make it," he said to himself, measuring the distance with cool eye. But he refrained from telling Reube what he saw. A moment later, however, as he marked the terrible speed of the approaching peril, he could not help saying, in a voice which he kept quite steady and casual: "You're doing finely, Reube. Don't hurry your stroke, but put a little more power in it for a spurt and we're safe." Reube wasted no breath for a reply. He knew this adjuration of Will's meant that the danger was drawing very near; but his companion's anxiety as to his nerves was quite unneeded. He struck out as steadily as ever, but with all the force which his muscle and his will power together could create, and went ahead so fast that Will had to really swim to keep up with him. In half a minute more—to them it seemed a long time—Reube struck bottom in shallow water and dragged himself to land. The sharks were now so near that for an instant Will hesitated. Would he have time to get out, or must he turn and defend his legs? But his decision was instantaneous. With a mighty thrust of his legs and one free arm he flung himself forward, felt the mud beneath his hands, jerked his feet under him, and stood up just in time to turn and deal the nearest shark a desperate blow with the pointed stake as it half turned over to seize him. Astonished and daunted, the great fish recoiled, and before its fellow could join in the attack Will had sprung out of reach. "It's a blessed thing," said Will, "to get ashore with a whole leg, isn't it?" His light manner was but the froth on the surface of his deeper emotions. He was trembling from the long strain and stern self-repression. Reube drew a deep, slow breath. "Verily," said he, with a grave face, "that was pretty nearly as bad as the cave while it lasted!" "O, surely not," objected Will. "We had the free air and sun, and a chance to fight for our lives. But it makes me mad to think what fools we were in the first place." "How so?" asked Reube. "Why," answered Will, "if we'd come, this way on the first arrival of those beastly leviathans we would not have had half so far to swim, and our pursuers would have had nearly twice as far to go. 
It would have all been as simple and easy as falling off a log, and our hearts wouldn't be going like trip hammers now, the way they are." "That's so," agreed Reube, in a tone of disgust. "But now I'm wondering what other scrapes we can manage to get into between here and home. I never realized till now the truth of the proverb—generally I despise proverbs—which says 'It never rains but it pours!' It seems to me I have been at steady high pressure the last few days, and lived more and felt more than in all the rest of my life put together." "My idea is that fate'll let us alone for a while now," remarked Will, with the air of a philosopher. "The law of probabilities is all against any further excitement on this trip." "So be it!" said Reube. "But let's get to the _Dido_—and our clothes!" Trotting up the lonely shore of the creek for half a mile, they came to an _aboideau_, and crossed to the other shore of the stream. Following down the bank, they soon came opposite the _Dido_. The sharks were nowhere to be seen, and the _Dido_ presently swung so near that a short plunge put them safely on board. Dressing hastily, they got up the anchor and sailed out of the creek with their bowsprit pointing homeward. As they did so the sharks appeared again, pursuing them. Will tied a piece of pork to a dry block, tossed it overboard, and snatched up his rifle. The bait floated a moment unmolested, then the nearest shark, darting upon it, turned over and engulfed it in his murderous mouth. At the same moment Will fired. The ball, with deadly precision, entered the brute's mouth and pierced its brain. With a convulsive flurry it rolled over stone dead. CHAPTER XII. The Shot from the Rocks. THE other shark, taking alarm, darted away at once. "That's a trophy we must secure!" exclaimed Reube. "You don't have a chance to shoot a shark every day." Will was already noosing a couple of ropes. The _Dido_ was brought alongside the rolling carcass, and after a great deal of difficulty the nooses were made fast to its head and tail. In the effort to hoist the heavy mass aboard the boat was nearly swamped; and at one time Will offered to give up the job. But Reube generously insisted on continuing. At last, by waiting till a wave rolled boat and carcass, together in just the most propitious way possible, the thing was accomplished with a sudden hoist. Along with the great fish a barrel or two of water came aboard; and while Reube steered, Will was kept busy for a half hour bailing the boat out. This accomplished, Will discovered that the hot sun, the excitement, or possibly the motion of the boat, had given him a violent headache. "O, it's all very well, but you know you're seasick," gibed Reube, as he sat at the helm. "Maybe so," assented Will, undisturbed at the imputation. "Anyway, I'm going to lie down here under the shade of the mainsail to sleep it off. Even if I snore don't wake me, as you value your life!" With the aid of a blanket he made himself comfortable, and in a few minutes was sound asleep. Steering the _Dido_ and watching the shores slip by, and building plans for the coming year, Reube was well content. The wind, after having almost died away, had shifted a few points and was blowing gently but steadily. With this wind on her beam the _Dido_ sailed fast, heeling smoothly, and sending the waves past her gunwale with a pleasant murmur. Reube took little account of time just now. Life seemed a very attractive dream, and he was unwilling even to stir. 
But his hand on the tiller was firm, and there was no smallest danger of him dropping to sleep. This lotus-eating mood, with a few intervals, must have lasted four or five hours. The tide had turned and been a good three hours on the ebb. At last he observed vaguely that he was just off the promontory where he and Will had been caught in the cave. Thinking of the dangers of the locality, he steered a point or two further out to give the sunken reefs a wide berth. As he did so he noticed that the tide was out as far as the foot of the bluff, and that the cove flats were all uncovered. He was fairly past the point when out of the tail of his eye he caught a movement among the rocks just where the cave mouth lay. Turning his head quickly, he saw Mart Gandy step forward and raise his great duck gun to his shoulder. The distance was scarcely fifty yards, and Gandy was a first-rate shot. There was no time to think. Like a flash Reube dropped forward upon the bottom of the boat, letting the tiller swing free. At the same instant there was a loud, roaring report from the big duck gun, and the heavy charge of buckshot, passing just over the gunwale, tore a black hole in the sail. Reube had fallen just in time. He picked himself up again at once, recaptured the tiller, and tried to put the _Dido_ before the wind in the hope of getting out of range ere Gandy could load up for another shot. But the boat was pointing straight for the shore, and came round very slowly. Ere Reube could get her on a new course Will appeared from behind the sail, astonished at the noise and the confusion. He took in the situation at once. Gandy, who was reloading in fierce haste, stopped for a moment with paling face at Will's unexpected appearance. He had evidently been under the impression that Reube was alone, or doubtless he would not have committed himself by such an attack. Then he made up his mind that he would see the thing through. Flinging down his powder horn, he rammed home the wadding fiercely, and reached for the heavy shot pouch at his side. "To shore, Reube! Straight ashore with her!" said Will, in a low, intense voice. Reube obeyed instantly, seeing that his former intention had been a mistake. Mart Gandy wadded home the buckshot in his great gun barrel. The charge was a terrific one. Will stooped, like a wild-cat crouching for a spring. The _Dido_ rushed straight on, and both Reube and Will declared afterward that they knew just what it was like to charge a battery. As Will's keen eye saw Gandy's finger feel for the trigger, he yelled, "Down! Reube!" and dropped beneath the gunwale. On the instant Reube fell flat in the stern. The great roar of the duck gun shook the air at the same moment. But the charge flew wild and high, and a black hole appeared in the upper part of the sail. The report was followed by a yell of pain, and the big gun clattered on the rocks. Gandy staggered back. The breech of the gun had blown out, and a fragment of it had shattered his arm. In a moment, however, he recovered himself and rushed desperately at the face of the bluff. The boys saw at once what had happened. "We've got him now," said Reube, sternly. His sense of justice quenched all sense of pity. "Yes," remarked Will, "he can't climb the rocks with that arm; and now that he can't fire that clumsy weapon of his, he's no longer dangerous. We'll just take him prisoner!" Meanwhile the _Dido_ was dashing straight on to the Point, trusting to Providence that she would strike a soft spot. 
But with Gandy disabled there was no need of this desperate haste, so Reube steered for a place where he knew there was neither reef nor honey pot, but a slope of firm sand. He was too much occupied in the delicate task of making a safe landing for the _Dido_ to observe what Gandy was doing. But Will watched the actions of the latter, with a cold smile on his finely cut mouth. "He is a coward, every time, when it comes to the pinch!" was his remark. "See him now, too scared to meet us like a man, and struggling like a whipped cur to climb those rocks and get away! He can't do it, though!" Indeed, Mart Gandy at this moment realized the fact which gave Will such satisfaction. With his right arm broken, he could not make his way to the top of the bluff. Like a hunted animal, he turned and glared with eyes of hate and fear upon his adversaries. Again he looked at the rocks, turning his head quickly from side to side. And then, with a shrill, fierce cry, he darted out straight across the flats toward the head of the cove. "He'll get away after all," remarked Reube. "Get away, indeed!" muttered Will. "It's in the very thick of the honey pots he'll be in less than half a minute, or I'm much mistaken. There!" As he spoke, Gandy was seen to throw himself violently backward. It was just in time. As he tore himself by a mighty wrench from the engulfing slime he struggled to his feet, swerved to one side, and ran on. Reube drew a long breath of relief; and Will said, dispassionately: "That was well done. It was sharp." Just then the _Dido_ ran up on the sand, and stopped with a shock that would have pitched Will overboard if he had not grasped the mast. "Now we've done it, Reube!" he exclaimed. "We're aground hard and fast, just when there's no longer any need of being here. I fancy we won't undertake to follow Mr. Gandy through these honey pots." Reube made no direct answer. He was on his feet watching the fugitive, anxiously. "Ah-h-h!" he cried, "he's got it. He'll never get through that patch of death traps along there." The words were scarcely out of his mouth when Gandy seemed to wallow forward as if the ground had given way beneath him. With a mighty heave of his body he tried to throw himself backward as he had done before. But this time he was too late. The hungry, greenish-red ooze but lipped and clung to him more greedily. He flung himself flat, rolled on his side, and strove to drag one leg free. With the effort his other leg sank up to the thigh. Then he lifted his face and uttered a shriek of heart-shaking horror. Reube and Will sprang out upon the sand, Will grabbing up the boat hook as he did so. Reube snatched it from his hand. "Go back," he cried, "and get a rope, and follow me carefully right in my tracks. I know this cove and you don't." The next moment he was speeding like the wind to the spot where Gandy lay writhing in that inexorable grasp. CHAPTER XIII. Gandy is Rescued from the Honey Pots. WILL was but a few seconds in getting the necessary rope out of the cuddy. Then, taking an oar with him, he followed Reube as fast as he could run, casting wary eyes at the oily patches which were dotted around his path. The wretch in the honey pots had evidently no thought that his enemies would attempt his rescue. When he saw them approaching he thought they came to mock him or to gloat over his last agony, and he nerved himself to control the terror which had unmanned him. Then he saw the boat hook, the oar, the rope, and he knew that these meant help if help were possible. 
A wild hope, mixed with wonder, lit up his deep-set eyes. Could it be that Reube Dare would try to save him after all that he had done? To let him perish would be just, and so easy and so safe. To help him would be perilous indeed, for no one could go among the honey pots without taking his life in his hands; and yet here was Reube, here was that interfering Carter chap, running toward him as if there were no such things as honey pots. He could not understand it. The deadly mud was sucking, sucking, sucking at his feet, his knees, his thighs. It was like dumb, insatiable tongues of strange monsters curling about him. Nevertheless, he half forgot the horror in a new feeling which broke upon his spirit, and this emotion spoke in his eyes as Reube arrived at the edge of the honey pot. Reube saw it, and it insensibly softened his voice as he said: "Keep up your nerve now, and we'll get you out all right." At the same time he stretched out the boat hook, which Mart grasped with desperate strength, pressing it to his breast with his one sound arm. Flinging all his weight into the pull, Reube surged mightily on the boat hook. But his utmost force produced no effect. The pull of the twisting mud was mightier. Instead of extricating Gandy, even by an inch, he found himself sinking. He was on treacherous ground. With a quick wrench he freed the leg that was caught by dragging it from its boot. Then, leaving the boot where it was, he ran around to the other side of the honey pot and felt for firm standing ground. As he did so, Will came up breathing quickly. "Be keerful on your right!" cried Gandy, sharply, and Will sprang aside, just avoiding a bad spot. "Thanks, Gandy," he remarked, in a casual way, as if Gandy had picked up his hat for him or handed him a match. Then he flung a coil of rope, saying: "Fix the end of that under your arms; fix it firm, so that it won't slip." Then he went round the honey pot to where Reube was standing, with pale brow knitted closely. "What are we going to do?" asked Reube. "I can't budge him." Gandy, in spite of shattered arm, had succeeded in fastening the rope about his waist, and now, placing the long, light shaft of the boat hook in front of him, was bearing down upon it as hard as he could. "That's a good idea," cried Will. "But here, Mart, the oar will be better because it's bigger round and flat in the blade. Fling us the boat hook and take the oar!" These efforts, though they had not at all availed to extricate the victim, had kept him from being dragged further down. With the oar he was able to exert his strength to more advantage. Will now made a loop in the rope and passed the handle of the boat hook through it. Then, one on each side of the rope, and each with the shaft across his breast, so that the whole formed a sort of rude harness, Will and Reube bent their bodies to the pull like oxen in a yoke. At the same time Gandy, using his unwounded arm, lifted with all the force that despair could give him. For two or three seconds there was no result. Was it all to be in vain? Then from Gandy's white lips came a gasping cry of "She gives!" and slowly, slowly at first, then with a sudden yielding which nearly threw the rescuers to the ground, that terrible hold gave way, and Gandy, was jerked forward upon solid ground. White and panting from the strain, they turned to free him from the rope. He had fainted and lay as if dead. The anguish of his wound and of his terror and the gigantic effort which he had just put forth had overcome him. 
[Illustration: Will and Reube bent their bodies to the pull.] "Let's get the poor wretch down to the water," proposed Will. "We'll take him right aboard the _Dido_, where we can see to his arm and fix him a place in the cuddy," said Reube. "The _Dido's_ hard and fast now for another six hours, so we can take our time. But I wish we could get the chap to a doctor sooner than that." So saying, he picked up Gandy's long form and walked with it easily down to the boat. The wounded man was still unconscious. A bed of quilts was fixed for him, and Reube was just about to cut the sleeve from his shirt to examine the arm and bathe it when Will cried: "Hold on a minute, Reube. The way the boat lies now I think we can pry her off with the oar. See how the sands dip away on the outside." He was right. Using the big oar as a lever, they got the _Dido_ afloat in a very few moments. Then Reube said: "You sail the boat, Will, and I'll see to the patient." "You had better let me attend to him while you steer," suggested Will. "No," said Reube; "he's my own private enemy, and I must look after him myself. You see to the boat." And Will obeyed without more ado. Had they been watching Gandy's face they would have seen the eyes open and instantly close again. But Reube was delicately cutting the sleeve away and Will was watching the process, the sail, and the _Dido's_ course all at the same time. Gandy was conscious, but in a faint way he was wondering over the situation in which he found himself. Presently he heard Will speak again: "Well, now you've got him, and the poor rascal is a good deal worse for wear. I can't for the life of me see what you're going to do with him." Will's voice was kind, in a bantering way. He found it hard to maintain a proper degree of righteous indignation against a man whose life he had just saved. And that helpless arm he could not but contemplate with pity. "I'm going to get him home and into the doctor's hands," said Reube. "It seems to me he's punished enough this time, and maybe he'll realize it. Anyway, I'm not going to take action against him after all the trouble we've had to save him. We'll just say nothing about that shot from the rocks till we see how he turns out when he gets well. If there's any good in him, this experience ought to bring it out. And there must be some good streak in a fellow that's faithful to his family the way Mart is." By this time the arm was bare, and Reube was bathing it tenderly. Then, covering the wound with a wet compress, he bandaged it loosely and rose to fix a shelter over the patient's face. To his amazement the tears were rolling down Gandy's sallow cheeks. "What's the matter, Mart? Feeling worse?" he inquired, anxiously. But Gandy made no reply. He covered his face with his one available arm, and Reube could perceive his thin lips working strangely. Having seen that he was as comfortable as he knew how to make him, Reube seated himself by Will in the stern. Save for a few chance and commonplace remarks, there was silence between the two comrades for an hour, while the _Dido_ sped merrily homeward. They had enough to occupy their thoughts in that day's adventures, but they did not wish to talk of what their captive could hardly like to hear about. At last Will remarked: "It's warm, Reube, and your patient must be thirsty." "That's so," said Reube, springing up. With a tin of fresh water he stepped over to Gandy's side, slipped an arm under his head to raise it, and said: "Here, Mart, take a sup to cool your lips. They look parched." 
Instead of complying, Gandy grasped and clung to the hand that held the cup. "Forgive me," he begged. "Reube Dare, forgive me. I never knowed what I was doin'. To think of all I've done to you, an' then you to treat me like this!" And he covered his face again. "Mart," said Reube, more moved than he was willing to let appear, "never mind about that now. We'll let bygones be bygones. Here's my hand on it." And he grasped the hand that hid Mart's eyes. In his weakness Gandy was so overcome that he tried to laugh just while he was struggling not to cry, and he made a poor mixture of the attempt. But, raising himself for a second on his elbow, he managed to murmur unsteadily: "I can't talk, but, 'fore God, I'll show you both what I think of yous." And Mart Gandy kept his word through after years of loyal devotion to these two young men who on this day had taught him a new knowledge of the human heart. An ambition to seem worthy in their eyes led him to mend his life, and the Gandy name soon grew in favor throughout the Tantramar countryside. As for the _Dido_, fate looked kindly on her trips all that season and for several seasons thereafter. That autumn Reube took his mother to Boston. Mrs. Carter, with Will and Ted, went at the same time; and after a simple operation, much less painful than had been expected, Mrs. Dare regained the perfect use of her eyes. On their return to the Tantramar Will and Ted set out again for college, and this time Reube went with them. His _Dido_ had proved herself a fair match for the new marsh in the matter of giving her master an education. During successive summer holidays she carried Reube and Will and Ted on many a profitable and merry trip, but never again did she experience one so eventful as that with which she began her career as a Tantramar shad boat. THE END.
Welcome to SMILE-WIDE
======================

SMILE-WIDE is a <a href="http://en.wikipedia.org/wiki/Bayes_network">Bayesian network</a> library. Initially, SMILE-WIDE is a version of the well known <a href="http://genie.sis.pitt.edu">SMILE library</a>, augmented <b>W</b>ith <b>I</b>ntegrated <b>D</b>istributed <b>E</b>xecution. This allows execution on very large datasets. As SMILE-WIDE is developed, BigData-specific capabilities will surpass the standard Bayesian network interfaces.

Programmer-facing, SMILE-WIDE is a .jar library which you can include in your software. User-facing, it is also integrated into Hive as a UDF to provide posterior probabilities of missing values, given the observed values for each instance.

SMILE-WIDE uses Hadoop for inference on large data. It is written in Java using the underlying SMILE library. Version 1.0 of SMILE-WIDE uses the SMILE library written in C++. We are working on designing and implementing SMILE in Java to improve the performance of SMILE-WIDE job execution on the Hadoop platform.

Contents
--------
<ul>
<li><a href="#how-to-build-the-software">How to build the software</a></li>
<li><a href="#how-to-run-an-example-smile-wide-hadoop-job">How to run an example SMILE-WIDE Hadoop job</a></li>
<li><a href="#how-to-test-hive-integration">How to test Hive integration</a></li>
<li><a href="#problems-and-solutions">Problems and solutions</a></li>
<li><a href="#how-to-generate-javadoc-api-documentation">How to generate Javadoc API documentation</a></li>
</ul>

Please contact the authors with any questions or problems:
<!-- thanks to http://www.google.com/recaptcha/mailhide/ -->

* <a href="mailto:haiqin.wang@boeing.com">Haiqin Wang</a>
* <a href="mailto:robert.e.cranfill@boeing.com">Rob Cranfill</a>
* <a href="mailto:shooltz@gmail.com">Tomasz Sowinski</a>
* <a href="mailto:m.a.dejongh@gmail.com">Martijn de Jongh</a>
* <a href="mailto:marek@sis.pitt.edu">Marek Druzdzel</a>

How to build the software
-------------------------

SMILE-WIDE is an Eclipse project configured to use Maven. All external dependencies are pulled from the appropriate Maven repositories. The code can be built from the IDE or directly from the command line.

The basic build can be started with the following command:

```
mvn clean package
```

This creates two jars in the target directory and copies the appropriate native library to the <code>target/lib</code> directory. The binary files are:

<ul>
<li><code>smile-wide-0.0.1-SNAPSHOT.jar</code>
  <ul>
  <li>Contains the SMILE-WIDE code</li>
  </ul>
</li>
<li><code>smile-wide-0.0.1-SNAPSHOT-job.jar</code>
  <ul>
  <li>Contains the SMILE-WIDE code and the core SMILE jar in its lib subdirectory. This makes running SMILE-WIDE-based Hadoop jobs easier, because Hadoop will automatically add the SMILE jar to the classpath on the machines running in the cluster.</li>
  </ul>
</li>
<li><code>libjsmile.so</code>, <code>libjsmile.jnilib</code> or <code>jsmile.dll</code>
  <ul>
  <li>JNI library containing the C++ SMILE code</li>
  </ul>
</li>
</ul>

It's possible to build for a platform different from the one running Maven by overriding the <code>smile.native.platform</code> variable.
For example, when building for Hadoop on a 64-bit Linux cluster with Maven or Eclipse running on OSX, the command should be extended to:

```
mvn clean package -Dsmile.native.platform=linux64 -Dmaven.test.skip=true
```

How to run an example SMILE-WIDE Hadoop job
-------------------------------------------

The example below executes a Hadoop job loop which learns the parameters of the probability distributions for the <code>kiva.xdsl</code> network. Note that the jar file contains the SMILE jar in its lib directory. However, the native library must be explicitly added to the job's distributed cache with Hadoop's <code>-files</code> option. Additionally, since the specifics of EM require access to SMILE functionality locally, the <code>.so</code> file should be copied to the <code>$HADOOP_BIN/native</code> directory.

```
hadoop jar smile-wide-0.0.1-SNAPSHOT-job.jar smile.wide.algorithms.em.RunDistributedEM \
  -files em-tmp.xdsl,libjsmile.so \
  -D mapred.max.split.size=250000 -D mapred.reduce.tasks=12 \
  em.initial.netfile=kiva.xdsl em.work.netfile=em-tmp.xdsl \
  em.data.file=pitt/kiva500k.txt em.stat.file=pitt/em-out \
  em.separator=9 em.local.stat.file=em-local.txt
```

The file <code>kiva.xdsl</code> is located in the project's <code>input</code> directory; <code>pitt/kiva500k.txt</code> is in the compute cluster's HDFS. The output of the job is the local file named <code>em-tmp.xdsl</code>, containing the modified <code>kiva.xdsl</code> network with the learned parameters.

How to test Hive integration
----------------------------

To test the Hive UDFs, execute the normal Maven package build followed by <code>runscripts/hivePosteriors.sh</code>. This creates the <code>target/hive-test</code> directory, containing all the files required for the UDF test. The command to run the test is:

```
hive -f hivePosteriors.q
```

Hive will import a small data file and perform four queries, each calling into the SMILE-WIDE UDFs.

Problems and Solutions
----------------------

<h3>java.lang.UnsatisfiedLinkError: no jsmile in java.library.path</h3>

This exception is caused by a missing native library. The platform-specific library is placed in <code>target/lib</code> during the Maven build, but Hadoop and Hive must be made aware of its existence. This is done with Hadoop's <code>-files</code> option or Hive's 'ADD FILE'. Some SMILE-WIDE algorithms contain a nontrivial local component running within Hadoop's client JVM. In such cases the shared library should also be added to the <code>$HADOOP_BIN/native</code> directory.

How to generate Javadoc API documentation
-----------------------------------------

The SMILE-WIDE API Javadoc documentation can be generated from the command line. With 'javadoc' on the path, issue the following command:

```
javadoc @options.javadoc.text
```

This will generate HTML documentation in the 'javadocs' directory.
{ "redpajama_set_name": "RedPajamaGithub" }
3,902
package org.onetwo.common.spring.context;

import java.lang.annotation.Annotation;
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.Stream;

import org.onetwo.common.log.JFishLoggerFactory;
import org.onetwo.common.spring.SpringUtils;
import org.onetwo.common.utils.LangUtils;
import org.slf4j.Logger;
import org.springframework.beans.factory.BeanClassLoaderAware;
import org.springframework.beans.factory.annotation.AnnotatedBeanDefinition;
import org.springframework.beans.factory.config.BeanDefinition;
import org.springframework.beans.factory.config.BeanDefinitionHolder;
import org.springframework.beans.factory.support.AbstractBeanDefinition;
import org.springframework.beans.factory.support.BeanDefinitionBuilder;
import org.springframework.beans.factory.support.BeanDefinitionReaderUtils;
import org.springframework.beans.factory.support.BeanDefinitionRegistry;
import org.springframework.context.EnvironmentAware;
import org.springframework.context.ResourceLoaderAware;
import org.springframework.context.annotation.ImportBeanDefinitionRegistrar;
import org.springframework.core.annotation.AnnotationAttributes;
import org.springframework.core.env.Environment;
import org.springframework.core.io.ResourceLoader;
import org.springframework.core.type.AnnotationMetadata;
import org.springframework.util.StringUtils;

/**
 * @author wayshall
 * <br/>
 */
abstract public class AbstractImportRegistrar implements ImportBeanDefinitionRegistrar, BeanClassLoaderAware, ResourceLoaderAware, EnvironmentAware {

    public static final String ATTRS_NAME = "name";

    protected Logger logger = JFishLoggerFactory.getLogger(this.getClass());

    protected ResourceLoader resourceLoader;
    protected ClassLoader classLoader;
    protected Environment environment;
    protected AnnotationMetadataHelper annotationMetadataHelper;

    private Class<? extends Annotation> importingAnnotationClass;
    private Class<? extends Annotation> componentAnnotationClass;

//    @SuppressWarnings("unchecked")
//    protected AbstractImportRegistrar() {
//        super();
//        Class<? extends Annotation>[] paramClasses = (Class<? extends Annotation>[]) TypeResolver.resolveRawArguments(AbstractImportRegistrar.class, getClass());
//        this.importingAnnotationClass = paramClasses[0];
//        this.componentAnnotationClass = paramClasses[1];
//    }

    protected AbstractImportRegistrar() {
    }

    protected AbstractImportRegistrar(Class<? extends Annotation> importingAnnotationClass, Class<? extends Annotation> componentAnnotationClass) {
        super();
        this.importingAnnotationClass = importingAnnotationClass;
        this.componentAnnotationClass = componentAnnotationClass;
    }

    final protected void setImportingAnnotationClass(Class<? extends Annotation> importingAnnotationClass) {
        this.importingAnnotationClass = importingAnnotationClass;
    }

    final protected void setComponentAnnotationClass(Class<? extends Annotation> componentAnnotationClass) {
        this.componentAnnotationClass = componentAnnotationClass;
    }

    public void setResourceLoader(ResourceLoader resourceLoader) {
        this.resourceLoader = resourceLoader;
    }

    @Override
    public void setBeanClassLoader(ClassLoader classLoader) {
        this.classLoader = classLoader;
    }

    /****
     * EnableWechatClient
     *
     * @author wayshall
     * @return
     */
    protected Class<? extends Annotation> getImportingAnnotationClass() {
        return importingAnnotationClass;
    }

    /****
     *
     * @author wayshall
     * @return
     */
    protected Class<? extends Annotation> getComponentAnnotationClass() {
        return componentAnnotationClass;
    }

    public AnnotationMetadataHelper getAnnotationMetadataHelper(AnnotationMetadata importingClassMetadata) {
        AnnotationMetadataHelper annotationMetadataHelper = this.annotationMetadataHelper;
        if (annotationMetadataHelper == null) {
            annotationMetadataHelper = createAnnotationMetadataHelper(importingClassMetadata);
            this.afterCreateAnnotationMetadataHelper(annotationMetadataHelper);
            this.annotationMetadataHelper = annotationMetadataHelper;
        }
        return annotationMetadataHelper;
    }

    protected AnnotationMetadataHelper createAnnotationMetadataHelper(AnnotationMetadata importingClassMetadata) {
        AnnotationMetadataHelper annotationMetadataHelper = new AnnotationMetadataHelper(importingClassMetadata, getImportingAnnotationClass());
        annotationMetadataHelper.setResourceLoader(resourceLoader);
        annotationMetadataHelper.setClassLoader(classLoader);
        return annotationMetadataHelper;
    }

    protected void afterCreateAnnotationMetadataHelper(AnnotationMetadataHelper annotationMetadataHelper) {
    }

    protected List<BeanDefinition> scanBeanDefinitions(AnnotationMetadata importingClassMetadata) {
        return getAnnotationMetadataHelper(importingClassMetadata).scanBeanDefinitions(getComponentAnnotationClass());
    }

    /***
     * Initialize before registering bean definitions.
     * @author weishao zeng
     * @param importingClassMetadata
     * @param registry
     */
    protected void initBeforeRegisterBeanDefinitions(AnnotationMetadata importingClassMetadata, BeanDefinitionRegistry registry) {
    }

    @Override
    public void registerBeanDefinitions(AnnotationMetadata importingClassMetadata, BeanDefinitionRegistry registry) {
        initBeforeRegisterBeanDefinitions(importingClassMetadata, registry);
        Class<? extends Annotation> componentAnnoClass = getComponentAnnotationClass();
        List<BeanDefinition> beandefList = scanBeanDefinitions(importingClassMetadata);
        beandefList.stream().filter(AnnotatedBeanDefinition.class::isInstance)
                    .forEach(bd -> {
                        AnnotatedBeanDefinition beanDefinition = (AnnotatedBeanDefinition) bd;
                        AnnotationMetadata annotationMetadata = beanDefinition.getMetadata();
                        checkComponent(componentAnnoClass, annotationMetadata);
                        AnnotationAttributes tagAttributes = SpringUtils.getAnnotationAttributes(annotationMetadata, componentAnnoClass);
                        registerComponent(registry, importingClassMetadata, annotationMetadata, tagAttributes);
                    });
    }

    protected void checkComponent(Class<? extends Annotation> componentAnnoClass, AnnotationMetadata annotationMetadata) {
//        Assert.isTrue(annotationMetadata.isInterface(),
//                "@" + componentAnnoClass.getSimpleName() + " can only be specified on an interface");
    }

    /***
     *
     * @author wayshall
     * @return
     */
    abstract protected BeanDefinitionBuilder createComponentFactoryBeanBuilder(AnnotationMetadata importingClassMetadata, AnnotationMetadata componentAnnotationMetadata, AnnotationAttributes attributes);

    protected void registerComponent(BeanDefinitionRegistry registry, AnnotationMetadata importingClassMetadata, AnnotationMetadata componentAnnotationMetadata, AnnotationAttributes tagAttributes) {
        String className = componentAnnotationMetadata.getClassName();
        String beanName = resolveName(tagAttributes, className);
        if (registry.containsBeanDefinition(beanName)) {
            if (logger.isInfoEnabled()) {
                logger.info("api client has been registered, ignored. beanName: {}, class: {}", beanName, className);
            }
            return;
        } else {
            if (logger.isInfoEnabled()) {
                logger.info("register api client beanName: {}, class: {}", beanName, className);
            }
        }
        BeanDefinitionBuilder definition = createComponentFactoryBeanBuilder(importingClassMetadata, componentAnnotationMetadata, tagAttributes);
        if (definition == null) {
            return;
        }
        String alias = beanName + "-" + getComponentAnnotationClass().getSimpleName();
        AbstractBeanDefinition beanDefinition = definition.getBeanDefinition();
        beanDefinition.setPrimary(true);
        BeanDefinitionHolder holder = new BeanDefinitionHolder(beanDefinition, beanName, new String[] { alias });
        BeanDefinitionReaderUtils.registerBeanDefinition(holder, registry);
    }

    final protected String resolveName(AnnotationAttributes attributes, String defName) {
        if (!attributes.containsKey(ATTRS_NAME)) {
            return defName;
        }
        String name = attributes.getString(ATTRS_NAME);
        if (!StringUtils.hasText(name)) {
            name = defName;
        }
        name = resolve(name);
        return name;
    }

    final protected String resolve(String value) {
//        return SpringUtils.resolvePlaceholders(resourceLoader, value);
        return SpringUtils.getPropertyOrResolveValue(resourceLoader, value);
    }

    /****
     * If it is a property, fetch it from the configuration;
     * if it is an expression, interpret it directly as a value;
     * otherwise return it unchanged.
     * @author weishao zeng
     * @param propertyOrPlaceholderValue
     * @return
     */
    final protected String getPropertyOrResolveValue(String propertyOrPlaceholderValue) {
        return SpringUtils.getPropertyOrResolveValue(resourceLoader, propertyOrPlaceholderValue);
    }

    final protected String getRequiredPropertyOrResolveValue(String propertyOrPlaceholderValue) {
        return SpringUtils.getRequiredPropertyOrResolveValue(resourceLoader, propertyOrPlaceholderValue);
    }

    final protected String resolveAttributeAsString(AnnotationAttributes attributes, String attrName) {
        String value = attributes.getString(attrName);
        if (!StringUtils.hasText(value)) {
            return value;
        }
        return resolve(value);
    }

    final protected String[] resolveAttributeAsStringArray(AnnotationAttributes attributes, String attrName) {
        String[] values = attributes.getStringArray(attrName);
        if (LangUtils.isEmpty(values)) {
            return LangUtils.EMPTY_STRING_ARRAY;
        }
        return Stream.of(values).map(v -> resolve(v)).collect(Collectors.toList()).toArray(new String[0]);
    }

    @Override
    public void setEnvironment(Environment environment) {
        this.environment = environment;
    }
}
{ "redpajama_set_name": "RedPajamaGithub" }
6,713
The Sisters of the Holy Union of the Sacred Hearts (in Latin: sorores sanctae unionis) are a teaching religious congregation of women of pontifical right.

History

The congregation was founded in 1828 in Douai by Father Jean-Baptiste Debrabant for the education of young people. The title of the congregation reflects the founder's wish that the sisters be united like the hearts of Jesus and Mary. In 1842, Pierre Giraud approved the constitutions of the congregation, which spread rapidly through France and Belgium. Thanks to the English Benedictines of Douai, the sisters opened houses in Great Britain in 1859 and in Ireland in 1863. The institute received the decree of praise and the final approval of the Holy See, and its constitutions were definitively approved.

Mergers

Two congregations have merged with them:
1950: Dames de Flines.
Date unknown: Sisters of Saint Joseph of Nazareth of Valenciennes.

Activities and presence

The sisters devote themselves to teaching. They are present in:
Europe: Belgium, France, Ireland, Italy, United Kingdom.
America: Argentina, United States, Haiti.
Africa: Benin, Cameroon, Tanzania.
The generalate house is in Rome. In 2017, the congregation had 331 sisters in 58 houses.
{ "redpajama_set_name": "RedPajamaWikipedia" }
3,921
Lothar Bretterbauer (born 3 May 1953 in Lübben (Spreewald)) is a German local politician (1982–1990 DBD, since 1991 CDU). He was mayor of Lübben from 1990 to 2014.

Life and work

Lothar Bretterbauer was born the son of Leopold and Edeltraud Bretterbauer. He attended the polytechnic secondary school of his home town from 1959 to 1969. From 1969 to 1972 he completed vocational training with Abitur as a business clerk in Finsterwalde and, after 18 months of basic military service (1972–74), studied at the Handelshochschule Leipzig, graduating as a Diplom-Ökonom in 1978. He then worked as an economist for the Konsum retail chain and from 1983 to 1986 was chief accountant at the agricultural production cooperative (LPG) in Straupitz. From 1986 to 1990 he served on the Lübben municipal council as the councillor responsible for environmental protection, water and energy. In 1990 he was elected mayor of his home town and was confirmed in office in 1993, 2002 and 2010. At the end of August 2014 he retired for health reasons. Bretterbauer is widowed and has three children. His hobbies are aquaristics, music, Bible films and ancient scripts.

Awards

1997: Badge of honour in gold of the Robert Koch Foundation, Wolsztyn
2003: Honorary citizen of the town of Wolsztyn

Literature

Who is Who in der Bundesrepublik Deutschland. VI. Ausgabe, 1999, p. 376, ISBN 3-7290-0026-8

External links

Homepage of Lothar Bretterbauer
{ "redpajama_set_name": "RedPajamaWikipedia" }
932
\section{Introduction} The standard cosmological model, comprising the still unknown dark energy and dark matter, has been successful in describing the large scale structure of the Universe and its properties \citep[$>$1 Mpc, e.g.][]{Komatsu2011,PlanckCollaboration2018vi}. The dark matter component in particular, plays an important role throughout cosmic evolution by participating in the collapse of baryons via gravitational instability to form galaxies \citep{White1978}. Verifying the validity of the current Cold Dark Matter paradigm down to sub-galactic scales, and what this implies for the microscopic properties of the dark matter particle, is masked by the onset of highly non-linear physical mechanisms attributed to baryons, e.g. stellar winds, supernovae, feedback from Active Galactic Nuclei, etc, that appear in such high density environments \citep{Vogelsberger2014,Schaye2015}. The tension between dark matter theory and observations on galactic and sub-galactic scales \citep[$<$1 Mpc,][]{Bullock2017} has several manifestations, e.g. the ``missing satellites'' \citep{Moore1999,Klypin1999}, the ``cusp-core'' \citep{Moore1994,Oh2015}, the ``too-big-to-fail'' \citep{BoylanKolchin2011}, and the ``bulge-halo conspiracy'' \citep{Dutton2014} problems. Regardless of the role of baryons and their gravitational interactions with dark matter in each case, aspects of which constitute independent major research fields \citep[e.g. the efficiency of star formation,][]{McKee2007}, measuring the overall shape and smoothness of the mass density in galaxies is critical. In the local Universe, this can be achieved by understanding the statistics \citep[e.g.][]{Papastergis2015}, instrumental effects \citep{Kim2018}, and dynamics \citep[e.g.][]{Helmi2012} of dwarf galaxies and stellar streams \citep[e.g.][]{Carlberg2012,Erkal2016}. As soon as one leaves the neighbourhood of the Milky Way, the only way to achieve such measurements is via gravitational lensing - the deflection of light from a distant source by the intervening mass of a galaxy. In this way, the overall shape of the total mass distribution has been measured for massive elliptical galaxies \citep{Koopmans2006,Koopmans2009,Gavazzi2007,Auger2010,Barnabe2011,Sonnenfeld2013,Suyu2014,Oldham2018} and massive substructures down to the order of $10^8$ M$_{\odot}$ have been detected out to cosmological distances \citep{Vegetti2010,Vegetti2012,Fadely2012,MacLeod2013,Nierenberg2014,Li2016,Hezaveh2016b,Birrer2017}. Strong lensing analysis has also been combined with other techniques, e.g. stellar kinematics \citep{Barnabe2011,Yildirim2020}, stellar kinematics and weak lensing \citep{Sonnenfeld2018}, stellar population analysis \citep{Barnabe2013,Smith2015,Spiniello2015}, and quasar microlensing \citep{Oguri2014}, in order to disentangle the baryonic and dark mass components. The gravitational imaging technique \citep{Koopmans2005,Vegetti2009a} is a powerful method to study the non-smoothness of the lensing mass distribution, analyzing perturbations of lensing features, such as arcs and Einstein rings\footnote{This can also be achieved by analyzing flux ratios from lensed quasars \citep[][]{Dalal2002}, however, this requires carefully planned spectroscopic observations, taking into account the possible effect of microlensing \citep[e.g.][]{Nierenberg2014}.}. 
Based on the semi-linear inversion method of \citet{Warren2003}, which can reconstruct the light distribution of the lensed source on a grid once the lensing potential is given, \citet{Koopmans2005} provided an extension that simultaneously obtains a grid-based reconstruction of potential perturbations to an overall smooth (parametric) lens potential: in the presence of substructure, dark or luminous, the smooth modelling residuals are remodelled in terms of lens potential perturbations using the smooth potential and its corresponding source as a starting point. \citet{Vegetti2009a} improved this technique in a number of ways, expanding the work of \citet{Suyu2006} by casting the problem in a Bayesian framework that includes the potential perturbations and using an adaptive grid for the source. With careful control over the regularization level of the solutions, the presence of substructure in a lens can be uncovered by accumulating small potential corrections within an iterative scheme. The detection is then justified by comparing the Bayesian evidence to the best purely smooth lensing model \citep{Vegetti2010}. The regularization scheme plays a critical role in such a strong lensing Bayesian analysis approach, as it enables the matrix inversions to find a unique solution \citep{MacKay1992,MacKay2003}. Focusing only on the reconstruction of the source, there are several pixel-based methods\footnote{The possibility of using basis sets to reconstruct the source has been explored by \citet{Birrer2015,Joseph2019,Galan2021} and the use of deep neural networks was investigated by \citet{Morningstar2019}. Both methods do not explicitly require regularization, but rely on the number of independent basis vectors and a descriptive training set respectively, to model higher order statistics of the source.} that employ a brightness, gradient, or curvature based regularization scheme, or a combination thereof \citep{Dye2005,Suyu2006,Vegetti2009a,Tagore2014,Nightingale2015,Yildirim2020}, i.e they assume that each of these source properties is drawn from a normal distribution, whose variance is determined by the regularization parameter that itself can be optimized for, and whose correlation properties are set by a corresponding covariance matrix. However, a poor choice of the regularization parameter in each case is known to cause problems with over- and under- fitting of the data in some cases, which in turn might affect the mass model parameters \citep{Nightingale2015}. \citet{Suyu2006} solve exactly for the value of the regularization parameter that maximizes the evidence. To allow for more flexibility, \citet{Nightingale2018} have introduced a non-constant (adaptive) regularization scheme, whose principle is to vary the strength of the regularization (width of the normal distribution) across the source, based on its surface brightness profile. Some form of regularization is necessary to be able to solve the equations, however all of these methods are equivalent to setting priors for the different source properties that are not necessarily astrophysically motivated. Upon combining the source reconstruction with potential perturbations, which enter the equations in a very similar way to the source and require their own regularization scheme, an additional non-linear dependence of the perturbations on the source is introduced \citep{Koopmans2005}. Again, the regularization of the two fields, the source and the perturbations, plays an important role in reaching a unique solution. 
\citet{Vegetti2009a} follow a line-search optimization, starting with finding the best smooth lens-mass model and then proceeding with calculating potential corrections based on the corresponding source (see also equation \ref{eq:dpsi_residuals} here). In their iterative scheme, the source and potential perturbations are solved for at each step and then updated: the new surface brightness derivatives are calculated across the source and the perturbations are added to the overall smooth potential in the form of corrections. The regularization parameter of the perturbations is carefully controlled, initially set to very high values (very smooth fields) and later reduced to allow for more structure. This is similar to a Gauss-Newton optimization scheme that is known to be sensitive to the step size; any spurious structure appearing in the solutions would be added to the overall lensing potential with the risk of irrecoverably drifting away from the true solution. Although this is a powerful approach, it is limited by two caveats: some manual fine-tuning is needed in setting up the algorithm to converge to a meaningful solution, and there is no obvious means to quantify degeneracies between the reconstructed source and the potential perturbations. The latter is inherent to the technique and has not been studied in depth before \citep[see][for an example]{Chatterjee2019}. In this paper, we more rigorously investigate the importance of new forms of regularization, introducing more realistic priors on the source surface brightness distribution that are more flexible in capturing higher order statistical properties, and a statistical approach to finding the best regularization parameters via sampling. The latter is based on the theory of Gaussian Process Regression \citep{Rasmussen2006} and is quite powerful as it provides a way to quantify degeneracies between the source and perturbation fields. In addition, this sampling approach is better suited to describe extended perturbations, which are not necessarily restricted to compact and well-localized perturbers that might be more accurately detected by an iterative and additive scheme \citep[as in][]{Vegetti2009a}. The outcome is a statistical approach to generic perturbations of a smooth lensing potential, which can be directly linked to the underlying statistical properties of baryonic and dark matter (e.g. via the power spectrum), or to higher order structure in the lens potential, such as the presence of a galactic disc \citep[as was recently found by][]{Hsueh2017}. The structure of the paper is as follows. In Section \ref{sec:method} we set up the theoretical framework, provide the Bayesian evidence equation extending the work of \citet{Suyu2006} and \citet{Vegetti2009a}, and demonstrate the use of this approach under various regularization schemes. Section \ref{sec:results} presents a set of selected applications of the method on mock lens systems, which are discussed further in Section \ref{sec:discussion}. Our conclusions are summarized in Section \ref{sec:conclusions}. \section{Method} \label{sec:method} The Bayesian formalism applied to grid-based strong lensing analyses was introduced by \citet{Suyu2006} and \citet{Vegetti2009a}. Here, we use the same framework and repeat some of the steps, while we point out the differences, particularly with respect to the regularization and our sampling approach. 
In addition, an explicit equation describing the Bayesian evidence is derived, which has not appeared in the literature so far (\citealt{Suyu2006} give such an expression but including only the source). First, we formulate the problem in terms of a lensing operator depending on a parametrized smooth lens potential and a source brightness distribution defined on a grid, and then we introduce potential perturbations. Solving the resulting equations directly is an ill-posed problem. We therefore need to look for solutions minimizing some form of penalty function that includes regularization. This leads to a new set of linear equations with respect to the source and the potential perturbations that has an exact solution. The problem is then re-cast using a Bayesian formalism and the expression of the evidence is derived. The general treatment is independent of any assumption on the particular type of regularization, however, several physically motivated schemes are examined in more detail. Finally, we present a sampling approach to determine the probability distribution of all non-linear parameters of the problem. \subsection{The lensing operator and the source grid} \label{sec:grids} The problem at hand is finding how the brightness of the lensed images relates to the background source brightness via gravitational lensing, and can be cast in the following way \citep[similarly to][]{Warren2003,Koopmans2005,Vegetti2009a}: \begin{equation} \label{eq:cast_source} \boldsymbol{d} = BL(\psi) \boldsymbol{s} + \boldsymbol{n} \end{equation} where $\boldsymbol{d}$ and $\boldsymbol{n}$ are the vectors of brightness measurements (the ``data'') and the associated noise (the ``noise'') of the image pixels, $\boldsymbol{s}$ is the vector of the source brightness (the ``source''), $B$ is the blurring operator that is linked to the point spread function (PSF), and $L$ is the lensing operator that depends on the lensing potential $\psi$. The data and noise vectors correspond to a rectangular M$\times$N grid of N$_{\rm d}$ pixels in total on the image plane, which delineates the part of the pixel array of the optical detector covering the lensed images. The blurring operator (N$_{\rm d} \times $N$_{\rm d}$) is assumed constant\footnote{The PSF can in fact vary for each pixel based on the spectral energy distribution of the source for that specific pixel, or due to atmospheric effects if we deal with ground-based observations.} and mimics the effect of the PSF; it acts on (blurs) the resulting image plane pixels with a fixed weighting scheme, after the source has been lensed. Assuming that the source can also be described by a pixelated grid of N$_{\rm s}$ pixels and arbitrary form on the (unobserved) source plane, then the lensing operator (N$_{\rm d} \times$ N$_{\rm s}$) couples each data pixel position to the source grid via the lens equation \citep{Vegetti2009a}. This can introduce multiplicity because different image pixels can be associated with the same source location, thus creating multiple images. Equation (\ref{eq:cast_source}) is a linear transformation between the image and source planes that depends on the gradient of the lensing potential $\psi$. We note that the lensing potential is typically a non-linear function of the lens plane coordinates, $\boldsymbol{x}$, and some parameters, $\boldsymbol{\eta}$, that can vary in complexity. 
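To make the role of $L$ concrete, the following minimal sketch (in Python; illustrative only, not the implementation used in this work, and with all variable names being placeholders) assembles the sparse $\mathrm{N_d} \times \mathrm{N_s}$ operator once the interpolation weights and the source-pixel indices of each ray-traced data pixel are known; the interpolation scheme that provides these weights is described next.
\begin{verbatim}
import numpy as np
from scipy.sparse import csr_matrix

def build_lensing_operator(weights, src_idx, N_d, N_s):
    # weights[i] and src_idx[i] hold the interpolation weights and the
    # source-pixel indices they act on for data pixel i (e.g. 3 entries
    # for barycentric interpolation on a triangle, 4 for bilinear).
    rows = np.concatenate([np.full(len(w), i) for i, w in enumerate(weights)])
    cols = np.concatenate(src_idx)
    vals = np.concatenate(weights)
    return csr_matrix((vals, (rows, cols)), shape=(N_d, N_s))
\end{verbatim}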
Once the positions of the data pixels are traced back to the source plane, they are matched to pixels on the source grid via an interpolation scheme that guarantees the conservation of surface brightness \citep[see fig. 1 in][]{Koopmans2005}. The source grid can have any arbitrary structure, e.g. fixed or free-floating regular grids, irregular, adaptive, etc. On a regular grid, bi-linear interpolation is sufficient, while higher order schemes could also be used (e.g. bi-cubic, natural neighbour, etc). An irregular grid has a unique Delaunay triangulation and its corresponding dual Voronoi tesselation, whose cells can both be considered as source ``pixels'' \citep{Gallier2011}. Data pixels that are cast back onto the source plane land inside a Delaunay triangle and their value is interpolated linearly between the triangle's vertices (the centers of the irregular Voronoi source grid ``pixels''). Hence, the brightness values inside any such triangle lie on a tilted plane defined by the values at the triangle vertices. Barycentric coordinates are used to perform these triangular interpolations, which is equivalent to the procedure described in \citet[][figs. 1 and 2]{Vegetti2009a}. An irregular source grid can also be constructed randomly \citep[e.g.][]{Nightingale2015} or by a recipe designed to facilitate the source reconstruction. An example is a so-called adaptive grid that is reconstructed every time the lens potential $\psi(\boldsymbol{\eta})$ changes. Here, we create such adaptive grids by casting back one out of every $n\times n$ block of the data pixels, with $1 \leq n < 6$ (fixed throughout the reconstruction). Alternative gridding techniques are known to affect the ``discreteness-noise'' in the computed Bayesian evidence and $\chi^2$ terms \citep{Tagore2014,Nightingale2015}. However, exploring different grids is out of the current paper's scope and left for future improvements to our method. For very large values of $n$ the resulting grid will be too coarse to successfully describe a detailed lensed image brightness distribution. For $n=1$, there is no need for any interpolation as all the data pixels have been used to create the source grid ($\mathrm{N_s = N_d}$). However, in this case the system of equations to solve is under-constrained and heavily relies on the regularization (i.e. assumed prior on the source surface brightness). Applying this procedure for any given lens potential $\psi(\boldsymbol{\eta})$ results in a set of points on the source plane representing the positions of the source brightness values $\boldsymbol{s}$ and a N$_{\rm d} \times$ N$_{\rm s}$ operator $L$, whose rows contain the interpolation weights on the source grid for each data pixel. The procedure is repeated each time the lens potential $\psi$ changes \citep{Vegetti2009a}. \subsection{Lens potential corrections} Often, an elliptical power law mass model is assumed for the lens \citep{Kassiola1993,Barkana1998}. However, such smooth lens potential models may well be too simplified to capture more detailed structure of real lenses. Deviations from smoothness could be the result of dark matter substructure or higher order moments in the mass distribution of the lens galaxy itself, originating from its morphology \citep[e.g.][find a non-negligible disc component]{Hsueh2017} or evolution history (e.g. mergers). 
If such deviations exist in an observed system, they will manifest themselves as residuals, $\delta\boldsymbol{d}$, left behind after modelling the lens with a smooth potential: \begin{equation} \label{eq:smooth_residuals} \delta\boldsymbol{d} = M \boldsymbol{s}_{\rm p} - \boldsymbol{d}, \end{equation} where $M \equiv M(\boldsymbol{\eta}) = B L(\boldsymbol{\eta})$, and $\boldsymbol{s}_{\rm p}$ is the solution for the source after inverting the smooth model as described in Section \ref{sec:smooth_inversion}. Such residuals will persist regardless of the smooth potential used to describe the lens, although they may be absorbed to some degree into the source surface brightness or by modifying the values of the parameters $\boldsymbol{\eta}$. If the residuals from the smooth modelling are not noise-like, then the inclusion of a new lens potential component may be warranted in order for $\delta\boldsymbol{d} \rightarrow 0$ (or, more precisely, $\delta\boldsymbol{d}$ reaching the properties of the noise). The most general treatment of such a component is assuming a potential perturbations field, $\delta\boldsymbol{\psi}$, which to first order can de described by \citep{Koopmans2005}: \begin{equation} \label{eq:dpsi_residuals} \delta\boldsymbol{d} = -B D_{\rm s}(\boldsymbol{s}_{\rm p}) D_{\rm \delta\psi} \delta \boldsymbol{\psi}, \end{equation} where $D_{\rm s}(\boldsymbol{s}_{\rm p})$ is a matrix containing the gradient of the previously known source $\boldsymbol{s}_{\rm p}$, and $D_{\rm \delta\psi}$ is the gradient operator of the potential perturbations that yield $\delta\boldsymbol{\alpha}$, the perturbative deflection angle vector \citep[see appendix A of][for a derivation of this equation]{Koopmans2005}. This equation describes how potential perturbations induce additional deflections, causing the positions in the image plane to become associated with a different position in the source plane, and hence with a different source brightness. These deflections are assumed to be small enough for the source to be well approximated by a first order Taylor expansion around the original unperturbed locations. In this way, the residual image plane brightness of the smooth model can be associated with the gradient of the source brightness and some small potential perturbation field. Equations (\ref{eq:cast_source}), (\ref{eq:smooth_residuals}), and (\ref{eq:dpsi_residuals}) can be combined to reformulate the lensing problem as \citep{Koopmans2005,Vegetti2009a}: \begin{equation} \label{eq:source_dpsi_combined} \boldsymbol{d} = M_{\rm r} \boldsymbol{r} + \boldsymbol{n}, \end{equation} where $M_{\rm r}$ is the block matrix: \begin{equation} \label{eq:combined_M} M_{\rm r} \equiv M_{\rm r}(\psi_{\rm p},\boldsymbol{s}_{\rm p}) = B [L(\psi_{\rm p}) | -D_{\rm s}(\boldsymbol{s}_{\rm p}) D_{\rm \delta\psi} ], \end{equation} and: \begin{equation} \label{eq:r} \boldsymbol{r} \equiv \begin{pmatrix} \boldsymbol{s} \\ \delta\boldsymbol{\psi} \\ \end{pmatrix}. \end{equation} The similarity with equation (\ref{eq:cast_source}) is striking, however, there is one important difference: some prior knowledge of the source brightness is necessary to construct the matrix $D_{\rm s}(\boldsymbol{s}_{\rm p})$. 
The lens potential $\psi_{\rm p}$ can either depend on $\boldsymbol{\eta}$, as is the case in equation (\ref{eq:cast_source}), or it can include accumulated corrections $\delta\boldsymbol{\psi}_{\rm p}$ derived at previous stages - similarly to a Gauss-Newton scheme where a small update to the previous solution is calculated via a linear extrapolation. The $\delta\boldsymbol{\psi}$ field can be approximated by N$_{\rm \delta\psi}$ pixels on the image plane, which we here assume to be on a fixed regular P$\times$Q grid (as opposed to, for example, being adaptive). The $D_{\rm s}(\boldsymbol{s}_{\rm p})$ matrix entries are calculated at the locations of the deflected data pixels on the source grid. Similarly, the $D_{\rm \delta\psi}$ operator determines the derivatives of $\delta\boldsymbol{\psi}$ at the locations of the data pixels on the image plane. The product $D_{\rm s}(\boldsymbol{s}_{\rm p})D_{\rm \delta\psi} \delta\boldsymbol{\psi}$ is a N$_{\rm d}\times$N$_{\rm \delta\psi}$ matrix, whose rows contain the terms: \begin{equation} \label{eq:expanded_dsdpsi} [D_{\rm s}(\boldsymbol{s}_{\rm p})D_{\rm \delta\psi} \delta\boldsymbol{\psi}]_{\rm i} = \frac{\partial s_{\rm p} (\boldsymbol{y}_{\rm i})}{\partial y_{\rm 1}} \frac{\partial \delta\psi(\boldsymbol{x}_{\rm i})}{\partial x_{\rm 1}} + \frac{\partial s_{\rm p} (\boldsymbol{y}_{\rm i})}{\partial y_{\rm 2}} \frac{\partial \delta\psi(\boldsymbol{x}_{\rm i})}{\partial x_{\rm 2}}, \end{equation} where $\boldsymbol{x}$ are the data pixel coordinates on the image plane and $\boldsymbol{y}$ their corresponding source plane positions. If the data and perturbation grids coincide this matrix is diagonal, but usually the $\delta\boldsymbol{\psi}$ grid has a lower resolution such that each matrix row will contain the terms and corresponding weights resulting from a bilinear (in our case) interpolation on the $\delta\boldsymbol{\psi}$ grid (i.e. $\delta\psi(\boldsymbol{x}_{\rm i}) = \sum_{\rm j=1}^{4} w_{\rm i,j} \delta\psi_{\rm i,j}$, where the j-th index goes over the four vertices of the $\delta\boldsymbol{\psi}$ pixel encompassing the i-th data pixel). \subsection{Model inversion} \label{sec:smooth_inversion} The observed data result from the physical and instrumental processes of lensing and blurring, described as operators acting on a gridded source (their order is important), the finite detector pixel size, and the inclusion of noise with some properties (e.g. statistical Poisson noise of photon counts, correlated noise introduced at data reduction, cosmic rays, etc). Even in the absence of noise, inverting equation (\ref{eq:cast_source}) for the source is generally an ill-posed problem that does not have a unique or exact solution. One way to proceed is by searching for a source solution that minimizes a regularized penalty function. 
First, we define the penalty function, which is a likelihood function under the assumption of Gaussian errors in the data, excluding the perturbations $\delta\boldsymbol{\psi}$, to be the sum of a generalized $\chi^2$ and a regularization term: \begin{equation} \label{eq:penalty_source} \begin{split} G(\boldsymbol{s}) \equiv \, & G(\boldsymbol{s}|\boldsymbol{d},\boldsymbol{\eta},\boldsymbol{g}_{\rm s},\lambda_{\rm s}) \\ = \, & \frac{1}{2}(M \boldsymbol{s} - \boldsymbol{d})^T C^{-1}_{\rm d} (M \boldsymbol{s} - \boldsymbol{d}) + \frac{1}{2}\lambda_{\rm s}\boldsymbol{s}^T C^{-1}_{\rm s} (\boldsymbol{g}_{\rm s}) \boldsymbol{s}, \end{split} \end{equation} where $M$ is the operator used in equation (\ref{eq:smooth_residuals}), and $C_{\rm d}$ and $C_{\rm s}$ are the covariance matrices of the data and source, which, in the case of the source, may in general be a function of another set of non-linear regularization parameters, $\boldsymbol{g}_{\rm s}$ - we take out the regularization parameter $\lambda_{\rm s}$ to separate the effect of the source brightness from the shape of its correlations and make its effect more explicit. The parameter $\lambda_{\rm s}$ sets the level of contribution to the overall penalty of the regularization term with respect to the value of $\chi^2$. In the following, the covariance matrix $C_{\rm s}$ is always assumed to be a function of $\boldsymbol{g}_{\rm s}$, while specific regularization schemes are discussed in Section \ref{sec:reg}. The source property used for regularization (gradient, curvature, etc) is assumed to be distributed normally, i.e. a quadratic form in equation (\ref{eq:penalty_source}), similarly to the $\chi^2$ term, guaranteeing that the source for which $\nabla_{\rm s} G = 0$ minimizes the penalty function \citep{Suyu2006}. Using this condition, after some basic algebraic manipulations, we get: \begin{equation} \label{eq:min_source} (M^T C^{-1}_{\rm d} M + \lambda_{\rm s} C^{-1}_{\rm s}) \boldsymbol{s} = M^T C^{-1}_{\rm d} \boldsymbol{d}, \end{equation} where the matrix $M^T C^{-1}_{\rm d} M + \lambda_s C^{-1}_{\rm s}$ is now positive-definite and can be inverted using standard techniques. The source that minimizes the penalty function is found in this way for each set of fixed $\boldsymbol{\eta}$, $\boldsymbol{g}_{\rm s}$, and $\lambda_{\rm s}$. This solution implicitly assumes that the Gaussian random field describing the source has a zero mean. Although this is not formally correct because of the finite dimensions of the source grid, this offset is in general easily absorbed by the shape of the covariance matrix, and as our tests later will show, this assumption holds to very good approximation. Often, masking the data is required to isolate and model only the lensed image features. This can be achieved by an operator $S$, acting on the image plane and excluding all the pixels outside the mask from the modelling, which is simply a diagonal matrix with values of 1 or 0 for included and excluded pixels, respectively. In equations (\ref{eq:penalty_source}) and (\ref{eq:min_source}), this can be incorporated into a ``masked'' covariance matrix $S^T C^{-1}_{\rm d} S$, all rest being the same. In the remaining treatment, $C^{-1}_{\rm d}$ and $S^T C^{-1}_{\rm d} S$ can be used interchangeably. Reformulating the problem to include the potential perturbations is straightforward due to the similarity of equations (\ref{eq:cast_source}) and (\ref{eq:source_dpsi_combined}). 
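Before doing so, we note that the linear step of equation (\ref{eq:min_source}) amounts to a single positive-definite solve. A minimal numerical sketch (illustrative only, assuming dense matrices and hypothetical variable names) is:
\begin{verbatim}
import numpy as np
from scipy.linalg import cho_factor, cho_solve

def solve_source(M, Cd_inv, Cs_inv, d, lam_s):
    # Left-hand side of equation (eq:min_source): M^T C_d^-1 M + lambda_s C_s^-1
    A = M.T @ Cd_inv @ M + lam_s * Cs_inv
    # Right-hand side: M^T C_d^-1 d
    b = M.T @ Cd_inv @ d
    # A is positive-definite, so a Cholesky factorization is stable and fast
    return cho_solve(cho_factor(A), b)
\end{verbatim}
The combined system derived below (equation \ref{eq:min_r}) is solved in exactly the same way, with $M_{\rm r}$ and the block matrix $R$ taking the place of $M$ and $\lambda_{\rm s} C^{-1}_{\rm s}$.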
As before, in general equation (\ref{eq:source_dpsi_combined}) cannot be directly inverted and we have to proceed by minimizing some penalty function. Here we define such a function similarly to equation (\ref{eq:penalty_source}), including an additional regularization term for the potential perturbations in the same way as for the source: \begin{equation} \label{eq:penalty_source_dpsi} \begin{split} G(\boldsymbol{r}) \equiv & \, G(\boldsymbol{r}|\boldsymbol{d},\boldsymbol{s}_{\rm p},\psi_{\rm p},\boldsymbol{g}_{\rm s},\lambda_{\rm s},\boldsymbol{g}_{\rm \delta\psi},\lambda_{\rm \delta\psi}) \\ = & \, \frac{1}{2}(M_{\rm r} \boldsymbol{r} - \boldsymbol{d})^T C^{-1}_{\rm d} (M_{\rm r} \boldsymbol{r} - \boldsymbol{d}) + \frac{1}{2}\boldsymbol{r}^T R \, \boldsymbol{r}, \end{split} \end{equation} where: \begin{equation} \label{eq:combined_reg} R = \begin{pmatrix} \lambda_{\rm s} C^{-1}_{\rm s} & 0 \\ 0 & \lambda_{\rm \delta\psi} C^{-1}_{\rm \delta\psi} \\ \end{pmatrix}. \end{equation} We underline again the important difference with equation (\ref{eq:penalty_source}), which is the dependence on a previously known source, $\boldsymbol{s}_{\rm p}$ (through $M_{\rm r}$). This equation has the same form as equation (\ref{eq:penalty_source}), and the condition $\nabla_{\rm r} G = 0$ leads to: \begin{equation} \label{eq:min_r} (M_{\rm r}^T C^{-1}_d M_{\rm r} + R) \boldsymbol{r} = M_{\rm r}^T C^{-1}_{d} \boldsymbol{d}, \end{equation} which can be solved for $\boldsymbol{r}$ by inverting the positive-definite matrix on the left hand side. \subsection{Bayesian framework} The number of free parameters involved in the lens potential and source reconstruction may vary between different models. For example, one may choose different parametric models for the smooth mass distribution, with or without additional perturbations, and regularization schemes (see Section \ref{sec:reg}). As in \citet{Suyu2006} and \citet{Vegetti2009a}, we use a Bayesian approach to quantitatively justify the inclusion of extra free parameters and compare models to find the one most consistent with the data - assuming all quantities are Gaussian processes. By recasting the problem in Bayesian terms, the evidence term necessary to compare models can be computed. In addition, the solutions for the source and the perturbations obtained in the previous section, which minimize the penalty function, coincide with the most probable solutions that maximize the posterior probability. A similar treatment is followed in \cite{Suyu2006} and \cite{Vegetti2009a}, however, here we explicitly derive the expression for the evidence. 
Bayes' theorem states that the posterior probability density of the source and potential perturbations given the data, lensing operator, and some form of prior (regularization) described by parameters $\boldsymbol{g}$ and $\lambda$ is: \begin{equation} \label{eq:bayes} \begin{split} P(\boldsymbol{r}) \equiv & \, P(\boldsymbol{r}|\boldsymbol{d},\boldsymbol{\eta},\boldsymbol{g}_{\rm s},\boldsymbol{g}_{\rm \delta\psi},\lambda_{\rm s},\lambda_{\rm \delta\psi}) \\ = & \, \frac{ P(\boldsymbol{d}|\boldsymbol{r},\boldsymbol{\eta}) \; P(\boldsymbol{s}|\boldsymbol{g}_{\rm s},\lambda_{\rm s}) \; P(\delta\boldsymbol{\psi}|\boldsymbol{g}_{\rm \delta\psi},\lambda_{\rm \delta\psi}) }{ \cal{E}(\boldsymbol{d}|\boldsymbol{\eta},\boldsymbol{g}_{\rm s},\boldsymbol{g}_{\rm \delta\psi},\lambda_{\rm s},\lambda_{\rm \delta\psi}) }, \end{split} \end{equation} where the numerator terms are the likelihood, source prior, and potential perturbations prior respectively, and the denominator is the evidence. Assuming the likelihood and priors are normal distributions and associating them with the previously introduced $\chi^2$ and regularization terms, their individual probability densities can be written as: \begin{align} \label{eq:prob_densities} P(\boldsymbol{d}|\boldsymbol{r},\boldsymbol{\eta}) = \, &\frac{1}{Z_{\rm d}}\exp[-\frac{1}{2}(M_{\rm r} \boldsymbol{r} - \boldsymbol{d})^T C^{-1}_{\rm d} (M_{\rm r} \boldsymbol{r} - \boldsymbol{d})], \nonumber \\ P(\boldsymbol{s}|\boldsymbol{g}_{\rm s},\lambda_{\rm s}) = \, & \frac{1}{Z_{\rm s}}\exp[-\frac{1}{2}\lambda_{\rm s} \boldsymbol{s}^T C^{-1}_{\rm s} \boldsymbol{s}], \nonumber \\ P(\delta\boldsymbol{\psi}|\boldsymbol{g}_{\rm \delta\psi},\lambda_{\rm \delta\psi}) = \, & \frac{1}{Z_{\rm \delta\psi}}\exp[-\frac{1}{2}\lambda_{\rm \delta\psi} \delta\boldsymbol{\psi}^T C^{-1}_{\rm \delta\psi} \delta\boldsymbol{\psi}], \end{align} where the normalization factors are given by: \begin{align} \label{eq:prob_density_norms} Z_{\rm d} = \, & (2\pi)^{N_{\rm d} / 2} (\mathrm{det} C_{\rm d} )^{1/2}, \nonumber \\ Z_{\rm s}(\boldsymbol{g}_{\rm s},\lambda_{\rm s}) = \, & (\frac{2\pi}{\lambda_{\rm s}})^{N_{\rm s} / 2} (\mathrm{det} C_{\rm s} )^{1/2}, \nonumber \\ Z_{\rm \delta\psi}(\boldsymbol{g}_{\rm \delta\psi},\lambda_{\rm \delta\psi}) = \, & (\frac{2\pi}{\lambda_{\rm \delta\psi}})^{N_{\rm \delta\psi} / 2} (\mathrm{det} C_{\rm \delta\psi} )^{1/2}. \end{align} The above set of equations assumes that we already have a decent solution for the source, $\boldsymbol{s}_{\rm p}$, in order to derive $M$ (see equation \ref{eq:combined_M}), which could come, for example, by solving the smooth version of the problem \citep[see][]{Koopmans2005}. The most probable solution - the one that maximizes the posterior probability - can be derived by requiring $\nabla_{\rm r} P = 0$ in equation (\ref{eq:bayes}), and it can be calculated independently of the evidence term (a constant factor in this case). This is the solution that also minimizes the penalty function in equation (\ref{eq:penalty_source_dpsi}), which has already been given in equation (\ref{eq:min_r}). The posterior in equation (\ref{eq:bayes}) is the product of equations (\ref{eq:prob_densities}), hence it is itself a normal distribution and can be written as: \begin{equation} \label{eq:post_gauss} P(\boldsymbol{r}) = \frac{1}{Z_{\rm G}} \exp[-G(\boldsymbol{r})], \end{equation} where $G(\boldsymbol{r})$ is given in equation (\ref{eq:penalty_source_dpsi}). 
Taking a Taylor expansion of $G$ around the most probable solution $\boldsymbol{r}_{\rm MP}$, which satisfies $\nabla_{\rm r} G = 0$ (equation \ref{eq:min_r}), we get: \begin{equation} \label{eq:taylor_g} G(\boldsymbol{r}) = G(\boldsymbol{r}_{\rm MP}) + \frac{1}{2} (\boldsymbol{r} - \boldsymbol{r}_{\rm MP})^T H \, (\boldsymbol{r} - \boldsymbol{r}_{\rm MP}), \end{equation} where $H$ is the Hessian of $G$: \begin{equation} \label{eq:hessian} H \equiv \nabla_{\rm r}^{2} G = M_{\rm r}^T C^{-1}_{\rm d} \, M_{\rm r} + R. \end{equation} Equation (\ref{eq:taylor_g}) is in fact exact - assuming we already know $\boldsymbol{s}_{\rm p}$ - because all terms in equation (\ref{eq:penalty_source_dpsi}) are quadratic in $\boldsymbol{r}$. Equation (\ref{eq:post_gauss}) can now be rewritten as: \begin{equation} \label{eq:post_gauss_rewritten} P(\boldsymbol{r}) = \frac{1}{Z_{\rm G}} \exp[-G(\boldsymbol{r}_{\rm MP}) -\frac{1}{2}(\boldsymbol{r} - \boldsymbol{r}_{\rm MP})^T H \, (\boldsymbol{r} - \boldsymbol{r}_{\rm MP})], \end{equation} where: \begin{equation} \label{eq:post_gauss_norm} \begin{split} Z_{\rm G} \equiv & \, Z_{\rm G}(\boldsymbol{\eta},\boldsymbol{g}_{\rm s},\lambda_{\rm s},\boldsymbol{g}_{\rm \delta\psi},\lambda_{\rm \delta\psi}) \\ = & \, e^{-G(\boldsymbol{r}_{\rm MP})} (2\pi)^{(\mathrm{N}_{\rm s}+\mathrm{N}_{\rm \delta\psi})/2} (\mathrm{det} H)^{-1/2}. \end{split} \end{equation} Combining equations (\ref{eq:penalty_source_dpsi}), (\ref{eq:prob_densities}), (\ref{eq:taylor_g}), and (\ref{eq:post_gauss_rewritten}) the evidence term from equation (\ref{eq:bayes}) can be computed for the most probable solution $\boldsymbol{r}_{\rm MP}$: \begin{equation} \label{eq:evidence_first} \mathcal{E}(\boldsymbol{d}|\boldsymbol{\eta},\boldsymbol{g}_{\rm s},\boldsymbol{g}_{\rm \delta\psi},\lambda_{\rm s},\lambda_{\rm \delta\psi}) = \frac{Z_{\rm G}(\boldsymbol{\eta},\boldsymbol{g}_{\rm s},\lambda_{\rm s},\boldsymbol{g}_{\rm \delta\psi},\lambda_{\rm \delta\psi})} {Z_{\rm d} Z_{\rm s}(\boldsymbol{g}_{\rm s},\lambda_{\rm s}) Z_{\rm \delta\psi}(\boldsymbol{g}_{\rm \delta\psi},\lambda_{\rm \delta\psi})}. \end{equation} Substituting the normalization factors from equations (\ref{eq:prob_density_norms}) and (\ref{eq:post_gauss_norm}), and taking the logarithm of the evidence we get: \begin{align} \label{eq:evidence} \log \mathcal{E} =& \, - \frac{\mathrm{N}_{\rm d}}{2} \log (2\pi) + \frac{\mathrm{N}_{\rm s}}{2} \log (\lambda_{\rm s}) + \frac{\mathrm{N}_{\rm \delta\psi}}{2} \log (\lambda_{\rm \delta\psi}) \nonumber \\ & \, - \frac{1}{2} \log (\det C_{\rm d}) - \frac{1}{2} \log (\det C_{\rm s}) - \frac{1}{2} \log (\det C_{\rm \delta\psi}) \nonumber \\ & \, - G(\boldsymbol{r}_{\rm MP}) - \frac{1}{2} \log (\det H). \end{align} Computing and comparing this value for models with different sets of parameters $\boldsymbol{\eta}$, $\boldsymbol{g}_{\rm s}$, and $\boldsymbol{g}_{\rm \delta\psi}$ allows one to rank the different mass models and regularization schemes, finding the combination most consistent with the data \citep{MacKay2003}. \subsection{Regularization schemes} \label{sec:reg} Adding regularization terms to the penalty function (equation \ref{eq:penalty_source_dpsi}), or equivalently using priors in the posterior probability density (equation \ref{eq:bayes}), is necessary in order to find a solution for the source and the potential perturbations by inverting the matrices in equations (\ref{eq:min_source}) and (\ref{eq:min_r}). 
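Before discussing specific regularization schemes, we note for completeness that, once the most probable solution and the covariance matrices are specified, the log-evidence of equation (\ref{eq:evidence}) reduces to a handful of determinant and quadratic-form evaluations. The following sketch (illustrative only, assuming dense arrays and hypothetical precomputed inputs) mirrors that expression term by term:
\begin{verbatim}
import numpy as np

def log_evidence(G_mp, H, Cd, Cs, Cdpsi, lam_s, lam_dpsi, N_d, N_s, N_dpsi):
    # Terms follow equation (eq:evidence); slogdet is used for stability.
    logdet = lambda A: np.linalg.slogdet(A)[1]
    return (-0.5 * N_d * np.log(2.0 * np.pi)
            + 0.5 * N_s * np.log(lam_s)
            + 0.5 * N_dpsi * np.log(lam_dpsi)
            - 0.5 * (logdet(Cd) + logdet(Cs) + logdet(Cdpsi))
            - G_mp
            - 0.5 * logdet(H))
\end{verbatim}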
Quadratic terms (Gaussian priors), such as the ones used here, as opposed to other forms of regularization\footnote{\citet{Wayth2005a} used maximum entropy regularization, which has the benefit of preventing negative values for the source at the cost of no longer having quadratic penalty functions. The solution minimizing the penalty function then has to be found numerically, at a higher computational cost.}, have the advantage of leading to linear equations with exact analytic solutions that are efficient to calculate (equations \ref{eq:min_source} and \ref{eq:min_r}), and put the problem in the context of Gaussian Process Regression \citep{Rasmussen2006}. The effect of the regularization on the source and perturbation fields is captured in the detailed structure of the generic covariance matrices $C_{\rm s}$ and $C_{\rm \delta\psi}$, while the overall contribution to the penalty function (posterior probability) is moderated by the $\lambda_{\rm s}$ and $\lambda_{\rm \delta\psi}$ parameters. Here we examine different physically motivated forms of the covariance matrices $C_{\rm s}$ and $C_{\rm \delta\psi}$, and because the treatment is the same for both source and perturbations, we simply use $C$ and $\lambda$ in the following.

The usual forms of regularization impose some sort of ``smoothness'' condition on the solution \citep[see][]{Press1992}. Choices in the literature are based on source derivatives of some order \citep[e.g.][]{Dye2005,Suyu2006,Vegetti2009a,Tagore2014,Nightingale2015,Yildirim2020}. For example, a zero-th order derivative of the source \citep[the usual Tikhonov regularization, or ridge regression,][]{Tikhonov1963} means that $C$ is the identity matrix and brightness values are drawn from a normal distribution centered on zero with standard deviation $\lambda^{-1/2}$. Similarly, the gradient and curvature regularizations constrain the corresponding source derivatives, imposing a varying degree of smoothness on the solution. However, although such schemes are useful for finding a solution to the problem, they are not physically motivated (there is no reason for the gradient or curvature of a galaxy's brightness profile to follow a normal distribution centered at zero or any other value), may introduce spurious properties in the solutions, and cause degeneracies between the source and the lens potential. In other words, the assumed covariance matrix resulting from these choices imposes a correlation function (or power spectrum) on the source or potential perturbations that might not reflect reality.

A more realistic and general approach can involve covariance matrices $C$ that do not correspond to any particular derivative and instead impose a physically motivated structure, via their covariance, directly on the solutions. Here we examine two forms of such covariance kernels:
\begin{align}
\label{eq:covariance_exp}
C(\boldsymbol{y}_{\rm i},\boldsymbol{y}_{\rm j},l) =& \exp \left( -\frac{d_{\rm i,j}}{l} \right), & \mathrm{(exponential)} \\
\label{eq:covariance_exp_squared}
C(\boldsymbol{y}_{\rm i},\boldsymbol{y}_{\rm j},l) =& \exp \left( -\frac{d_{\rm i,j}^2}{2 \, l^2} \right), & \mathrm{(Gaussian)}
\end{align}
where $\boldsymbol{y}$ are the source pixel coordinates, $d_{\rm i,j}$ the Euclidean distance between pixels i and j, and $l$ is the characteristic correlation length of the kernels \citep{Rasmussen2006}.
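In practice, building either covariance matrix from a set of pixel coordinates is straightforward; a minimal sketch (illustrative only) is:
\begin{verbatim}
import numpy as np

def covariance_matrix(y, l, kernel="exponential"):
    # y: (N, 2) array of pixel coordinates; l: correlation length.
    # Pairwise Euclidean distances d_ij between all pixels.
    d = np.linalg.norm(y[:, None, :] - y[None, :, :], axis=-1)
    if kernel == "exponential":          # equation (eq:covariance_exp)
        return np.exp(-d / l)
    return np.exp(-d**2 / (2.0 * l**2))  # equation (eq:covariance_exp_squared)
\end{verbatim}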
These two kernel choices (also known as Ornstein-Uhlenbeck and squared exponential kernels, respectively) constitute two opposite extremes of the more general Mat\'{e}rn kernel \citep[e.g.][]{Mertens2017}, and have a single free parameter, $l$ (which belongs to the $\boldsymbol{g}$ set of non-linear regularization parameters), giving more freedom for additional structure in the covariance matrices $C$ beyond the fixed-form derivative-based regularization. In addition, these covariance kernels appear to be in better agreement with various sources, as will be shown by two examples later on. The variance level (i.e. the diagonal of the covariance matrix) is set by $\lambda$, and hence we assume here that the diagonal values of $C$ are by definition equal to unity.

The potential perturbations given in equation (\ref{eq:min_r}) provide a measure of sub-galactic scale mass density fluctuations. The covariance matrix $C_{\rm \delta\psi}$ is equivalent to the correlation function (or two-point correlation function), which is related to the power spectrum via the Fourier transform. Hence, measuring the covariance of $\delta\boldsymbol{\psi}$ can probe the sub-galactic matter power spectrum. Although there have been theoretical and applied studies on this connection \citep[][]{Hezaveh2016a,DiazRivero2018,Chatterjee2018,Bayer2018}, this work is the first consistent approach using the gravitational imaging technique. The derived power spectrum/covariance of $\delta\boldsymbol{\psi}$ can then be associated with higher order moments in the lens mass distribution \citep[e.g.][]{Hsueh2017,Gilman2018} or dark matter substructure \citep[e.g.][]{Hezaveh2016a}. For the latter, a more realistic approach to disentangling the effect of baryons would be to compare the observed sub-galactic scale perturbations to predictions from hydrodynamical simulations \citep[e.g.][]{Vogelsberger2014,Schaye2015}.

\subsection{Optimization}
\label{sec:optimization}

There are three main components in our approach to modelling gravitational lenses: the parametrized smooth lens potential, $\psi$, the grid-based potential perturbations, $\delta\boldsymbol{\psi}$, and the grid-based source brightness, $\boldsymbol{s}$. The task is to find the linear solutions and non-linear parameter values that are the most consistent with the data, i.e. maximizing the evidence. The linear part of the problem provides an exact solution for $\boldsymbol{s}$ and $\delta\boldsymbol{\psi}$ - assuming that we already know $\boldsymbol{s}_{\rm p}$ - that minimizes the penalty function and maximizes the posterior (equation \ref{eq:min_r}), for fixed non-linear parameters. Here we describe our treatment of the non-linear parameters, namely, the smooth potential parameters $\boldsymbol{\eta}$, and the regularization parameters $\boldsymbol{g}$ and $\lambda$ for the source and the potential perturbations.

Firstly, we emphasize that the lens potential is dominated by the smooth model and any resulting perturbations are required to be small in order for equation (\ref{eq:dpsi_residuals}) to be valid. This is also motivated by decent agreement between data and smooth models \citep[e.g.][]{Koopmans2009,Auger2010,Suyu2014,Oldham2018}, as well as evidence for lens perturbations \citep[e.g.][]{Vegetti2012,MacLeod2013,Nierenberg2014,Hezaveh2016b,Birrer2017}.
Solving simultaneously for $\boldsymbol{\eta}$ and $\delta\boldsymbol{\psi}$, however, is degenerate\footnote{The $\delta\boldsymbol{\psi}$ can in principle mimic almost any potential $\psi(\boldsymbol{\eta})$ and hence only their sum, the total potential, is relevant. In practice, however, $\psi(\boldsymbol{\eta})$ is set by general processes of galaxy formation and phase-space mixing and is expected to be smooth, while $\delta\boldsymbol{\psi}$ describes any remaining structure such as sub-halos, streams, etc.} and very inefficient; if $\boldsymbol{\eta}$ is far from the truth then $\delta\boldsymbol{\psi}$ will try to compensate so that the sum of the smooth potential and the perturbations remains correct, leading away from a realistic solution. Hence, as a first step we optimize for the parameters $\boldsymbol{\eta}$ assuming $\delta\boldsymbol{\psi} = 0$, while simultaneously solving the linear equations for the source (equation \ref{eq:min_source}). The parameter space of $\boldsymbol{\eta}$ is explored using a nested-sampling approach \citep{Skilling2004}, which provides several benefits: it computes the evidence term in equation (\ref{eq:evidence}) with the $\delta\boldsymbol{\psi}=0$ assumption, finds the most probable parameters, provides confidence intervals, and explores a large part of the parameter space with a limited chance of getting stuck in local extrema. There is the additional option to start a Markov Chain Monte Carlo (MCMC) exploration of the parameter space near the most probable solution to obtain smoother posterior probability distributions. Once the smooth model is determined, the parameters $\boldsymbol{\eta}$ are kept fixed to their maximum a posteriori values and we then solve for $\boldsymbol{r}$ (the potential perturbations and the source). The varying non-linear parameters are now the regularization parameters $\boldsymbol{g}$ and $\lambda$, together describing the source and perturbation covariance matrices. Although it is possible to solve approximately for the $\lambda$ parameters, at least in the case with $\delta\boldsymbol{\psi}=0$ \citep{Koopmans2005,Suyu2006}, here we incorporate them in the full non-linear treatment. This allows one to infer confidence intervals and, most importantly, degeneracies between the source and the potential solutions. The perturbations investigated here are assumed to be small and originate from an extended field of mass density fluctuations permeating the lens, as opposed to specific and localized massive substructures, such as dark sub-halos. For such prominent and confined perturbers, an iterative approach\footnote{At the end of each iteration, the lens potential is updated by adding the newly determined $\delta\boldsymbol{\psi}$ and the $D_{\rm s}(\boldsymbol{s_{\rm p}})$ matrix in equation (\ref{eq:combined_M}) is recalculated based on the derivatives of the newly determined source.} would indeed be expected to perform better in locating and measuring the mass of putative massive substructures, carefully controlling the regularization parameters in the process \citep[e.g.][]{Suyu2006,Vegetti2009a,Nightingale2015,Hezaveh2016b}. In this work, however, we do not impose any restriction on the regularization parameters and solve for $\boldsymbol{r}$ for each set of sampled non-linear parameters without updating the lens potential and the source. This approach is sufficient to capture the statistical properties of the perturbation field, provided that its amplitude is small. Mixing the two approaches, i.e.
sampling the regularization parameters and then iterating up to a given number of steps for each combination, could be another possibility, especially when the extended field of perturbations also includes prominent mass concentrations such as sub-halos, but this is out of the scope of this paper. \begin{figure*} \includegraphics[width=\textwidth]{results_smooth_images.pdf} \caption{Lensed images (top), source (middle), and residuals (bottom) for the mock data (truth) and the reconstructions with different source regularization kernels: identity, curvature, exponential, and Gaussian. The source brightness, shown as Voronoi cells of an adaptive grid (see Section \ref{sec:grids}), has been reconstructed using $n=3$. The corresponding parameters for the lens potential and the source regularization are shown in Table \ref{tab:table_map}.} \label{fig:results_smooth_images} \end{figure*} \section{Results} \label{sec:results} In order to demonstrate the capabilities of our method, we examine modelling aspects of mock lenses combining smooth and complex lens potentials and source light profiles. In all cases, we use a point spread function (PSF) simulated for the Hubble Space Telescope (HST) using the \textsc{tiny-tim}\footnote{\url{http://www.stsci.edu/hst/observatory/focus/TinyTim}} software \citep{Krist2010}, which is assumed to be the same in creating and modelling the mock data, uniform Gaussian random noise with a signal-to-noise ratio of $\approx$40 at peak brightness, and a mask to exclude regions of the image without lensing features (also the central part of the image that may hold residuals after removing the lens galaxy light, which we do not include or model). We generated the mocks using the MOLET\footnote{\url{https://github.com/gvernard/molet}} software package \citep{Vernardos2021}. The smooth parametric model used for the lens potential is a Singular Isothermal Ellipsoid \citep[SIE,][]{Kassiola1993,Kormann1994}. We follow the notation of \citet{Schneider2006}, with convergence given by: \begin{equation} \label{eq:kappa_sie} \kappa(\omega) = \frac{b}{2 \omega}, \end{equation} where $\omega = \sqrt{q^2 x^2 + y^2}$, $q$ is the minor to major axis ratio, and $b$ (in arcsec) describes the overall potential strength\footnote{We set $b=\sqrt{q} \, \theta_{\mathrm{Ein}}$, where the Einstein radius, $\theta_{\mathrm{Ein}}$, is defined as the radius within which the integral of equation (\ref{eq:kappa_sie}) becomes equal to unity.}. This relation holds in the reference system whose x-axis is aligned with the ellipsoid's major axis, rotated by the position angle, $\theta$, and whose origin coincides with the lens center ($x_0,y_0$). External shear with magnitude $\gamma$ and direction $\phi$ is included, leading to 7 free parameters in total, hereafter denoted as $\boldsymbol{\eta}$. All angles are measured east-of-north, in order to remain consistent with the standard celestial definition. \renewcommand{\arraystretch}{1.2} \begin{table*} \caption{Values of the lens potential ($\boldsymbol{\eta}$) and source regularization parameters ($\lambda_{\rm s},\boldsymbol{g}_{\rm s}$) that maximize the posterior probability density, i.e. Maximum A Posteriori (MAP) values, and corresponding terms from equation (\ref{eq:evidence}). The smooth source (top part of the table) is described in Section \ref{sec:model_smooth} and the lensed images corresponding to the parameters listed here are shown in Fig. \ref{fig:results_smooth_images}. 
Similarly, NGC3982 and NGC2623 (middle and bottom parts) are described in Section \ref{sec:smooth_complex} and shown in Figs. \ref{fig:results_spiral_images} and \ref{fig:results_merger_images}.} \label{tab:table_map} \begin{threeparttable} \begin{tabular}{rrr@{\hskip 0.8cm}rrrr@{\hskip 0.8cm}rrrr@{\hskip 0.8cm}rrrr} \input{joined_table_map_vertical.tex} \end{tabular} \begin{tablenotes}\footnotesize \item [$\dagger$] constant \end{tablenotes} \end{threeparttable} \end{table*} \renewcommand{\arraystretch}{1.4} \begin{table*} \centering \caption{Mean values and 68 per cent confidence intervals for the lens potential ($\boldsymbol{\eta}$) and source regularization parameters ($\lambda_{\rm s},\boldsymbol{g}_{\rm s}$), and corresponding evidence values. The smooth source (top part of the table) is described in Section \ref{sec:model_smooth}, while NGC3982 and NGC2623 (middle and bottom parts) are described in Section \ref{sec:smooth_complex}. The lens center appears shifted by about half a pixel in the x and y directions due to a corresponding shift in the PSF. The full probability densities for the Gaussian regularization model of the smooth source (top part of the table) are shown in Fig. \ref{fig:results_smooth_corner}.} \label{tab:table_mean} \begin{tabular}{rrrrrrrr} \input{joined_table_mean.tex} \end{tabular} \end{table*} \begin{figure} \centering \includegraphics[width=0.45\textwidth]{ps_smooth.pdf} \caption{Fourier power spectrum of the model residuals shown in the bottom row of Fig. \ref{fig:results_smooth_images}.} \label{fig:results_smooth_res_ps} \end{figure} \begin{figure*} \centering \includegraphics[width=\textwidth]{results_smooth_corner.pdf} \caption{Marginalized probability densities and histograms for the lens potential ($\boldsymbol{\eta}$) and regularization ($\lambda_{\rm s},\boldsymbol{g}_{\rm s}$) parameters for the Gaussian kernel reconstruction of the smooth source described in Section \ref{sec:model_smooth}. The parameter ranges are set to match Fig. \ref{fig:results_combined_corner} and facilitate comparisons with the results described in Sections \ref{sec:model_dpsi} and \ref{sec:both} - a zoomed-in version of this plot that shows the shape of the two-dimensional distributions better is shown in Fig. \ref{fig:app_smooth_corner}. The true values of the smooth potential parameters ($\boldsymbol{\eta}$) are indicated by the vertical and horizontal black lines and the points (squares). Contours are drawn at the 68 and 95 per cent confidence intervals. The corresponding mean values and 68 per cent confidence intervals are listed in Table \ref{tab:table_mean}.} \label{fig:results_smooth_corner} \end{figure*} \subsection{Smooth lens and smooth source} \label{sec:model_smooth} A simulated lens system is created with a single massive lensing galaxy having $(b,q,\theta,x_0,y_0,\gamma,\phi) = (0.9,0.8,-135^\circ,0,0,0.03,-40^\circ)$. The source brightness distribution consists of two Gaussian components: the first is located at $x,y = (-0.05,0.05)$ arcsec on the source plane, has an axis ratio of $0.6$, position angle of $-70^\circ$, and standard deviation on the $x$ axis of $\sigma_{\rm x} = 0.1$ arcsec, while the second component is at $x,y = (-0.4,0.25)$ arcsec and has $\sigma_{\rm x} = \sigma_{\rm y}= 0.1$ arcsec (circular). The two components are scaled to have a peak brightness ratio of 0.7, with the first being the brighter. The data are simulated on a square 3.5-arcsec 80-pixel field of view, i.e. with a pixel size of $\approx$0.044 arcsec.
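For reference, the analytic source described above can be reproduced with the short sketch below; the mock data themselves were generated with MOLET, so the peak normalization, the assumption $\sigma_{\rm y} = q\,\sigma_{\rm x}$, and the rotation convention used here are only illustrative choices:
\begin{verbatim}
import numpy as np

def elliptical_gaussian(x, y, x0, y0, sx, q, pa_deg, peak=1.0):
    # Peak-normalized elliptical Gaussian; sigma_y = q * sigma_x is assumed
    # here, with pa_deg rotating the major axis anti-clockwise.
    pa = np.deg2rad(pa_deg)
    dx, dy = x - x0, y - y0
    u =  dx * np.cos(pa) + dy * np.sin(pa)   # frame aligned with the ellipse
    v = -dx * np.sin(pa) + dy * np.cos(pa)
    return peak * np.exp(-0.5 * ((u / sx)**2 + (v / (q * sx))**2))

# Source-plane grid (arbitrary sampling, chosen only for this illustration)
x = np.linspace(-1.0, 1.0, 200)
X, Y = np.meshgrid(x, x)
s  = elliptical_gaussian(X, Y, -0.05, 0.05, 0.1, 0.6, -70.0, peak=1.0)
s += elliptical_gaussian(X, Y, -0.40, 0.25, 0.1, 1.0,   0.0, peak=0.7)
\end{verbatim}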
The corresponding source and resulting lensed images are shown in the left column of Fig.~\ref{fig:results_smooth_images}. We model the system as a purely parametric smooth lens, without including any grid-based correction to the potential, using $n=3$ for constructing the adaptive source plane grid (selecting 1 out of every $n\times n$ pixels). In addition to the lens potential parameters, the set of non-linear free parameters includes the regularization of the source, i.e. $\lambda_{\rm s}$ and $\boldsymbol{g}_{\rm s}$. We use four different source regularization schemes with different associated parameters: identity ($\lambda_{\rm s}$), curvature ($\lambda_{\rm s}$), exponential kernel ($\lambda_{\rm s}$, $l_{\rm s}$), and Gaussian kernel ($\lambda_{\rm s}$, $l_{\rm s}$). The covariances between source pixels for the latter two schemes are given by equations (\ref{eq:covariance_exp}) and (\ref{eq:covariance_exp_squared}) respectively; we note that the $l_{\rm s}$ are different parameters in these two cases, indicating the length where the correlation drops to roughly half its maximum. The value of the regularization parameter, $\lambda_{\rm s}$, sets the overall level of regularization and is inversely proportional to the source variance, e.g. smaller values allow for more freedom in the source model. This parameter is expected to vary between different schemes because of the fundamentally different covariance matrices and cannot be straightforwardly compared. Instead, one can compare the evidence values to determine which choice of regularization is more justified by the data. We use the alternative curvature definition for adaptive grids provided in \citet{Vegetti2009a}, which has a fixed regularization pattern/correlation length for a given grid. In this case, if $H$ is a matrix holding the numerical coefficients for the local curvature of the source then $C_{\rm s} = (H^T H)^{-1}$. Fig. \ref{fig:results_smooth_images} shows the reconstructed sources, lensed images, and residuals, and Table \ref{tab:table_map} (top) lists the Maximum A Posteriori (MAP) model parameters and the corresponding posterior probability terms from equation (\ref{eq:evidence}), for the four different regularization schemes. Table \ref{tab:table_mean} lists the mean parameter values, the 68 per cent confidence intervals, and the evidence, $\mathcal{E}$, for each model. The identity regularization corresponds to a covariance matrix that is the identity matrix, which has a flat power spectrum\footnote{Or a delta function two-point correlation function, which is the inverse Fourier transform of the power spectrum (i.e. the Wiener-Khinchin theorem).} that allows the solution to vary wildly, similarly to white noise, resulting in an unrealistically grainy source. Despite having the lowest $\chi^2$ term (i.e. the highest likelihood; see Table \ref{tab:table_map}), and thus the lowest residuals\footnote{These residuals result from $n=3$ for the adaptive grid and are expected to be reduced by increasing the number of pixels used to describe the source, i.e. $n=2$ or $n=1$.} as shown in Fig. \ref{fig:results_smooth_res_ps}, the identity regularization also has the lowest evidence value. The other three regularization schemes all perform better in recovering the source and the model parameters and give considerably higher evidence values.
However, the Gaussian kernel is decisively preferred over the curvature and exponential kernels, having a Bayes factor of $\log_{\rm 10} K= 7.98$ and $17.85$ respectively \citep[][assuming all models have the same prior probability]{Jeffreys1998}. Although this is not the best possible kernel, it is still a sufficient approximation to describe the source brightness \citep[see fig. 3 of][]{Vernardos2020}. As a final note, it can be seen that in all cases there is some overfitting, most prominently for the identity regularization, which suppresses the noise on large scales ($k<5$ in Fig. \ref{fig:results_smooth_res_ps}). This can also be seen in the reconstructed sources in Fig. \ref{fig:results_smooth_images}, where the adaptive grid Voronoi cells become noisy and do not drop to zero as we move away from the brightest pixels. \begin{figure*} \centering \includegraphics[width=\textwidth]{results_spiral_images.pdf} \caption{Same as Fig. \ref{fig:results_smooth_images} for NGC3982. \label{fig:results_spiral_images}} \vspace{0.5cm} \includegraphics[width=\textwidth]{results_merger_images.pdf} \caption{Same as Fig. \ref{fig:results_smooth_images} for NGC2623. \label{fig:results_merger_images}} \end{figure*} \begin{figure} \centering \includegraphics[width=0.45\textwidth]{two_point_all.pdf} \caption{Radially averaged two-point correlation functions for the unlensed (HST-observed) images of NGC3982 and NGC2623 (triangles and squares respectively) and their corresponding reconstructions (solid lines) and priors (dashed lines). The $l_{\rm s}$ parameter for the exponential and Gaussian priors changes in each panel as indicated. We have assumed the angular size of the unlensed sources to be $\approx$1 arcsec, therefore the values on the horizontal axis are scaled accordingly. The functions have been normalized to unity to factor out the effect of $\lambda_{\rm s}$ and the pixel resolution. Top: the exponential and Gaussian theoretical covariance kernels from equations (\ref{eq:covariance_exp}) and (\ref{eq:covariance_exp_squared}) are shown for values of $l_{\rm s}$ selected to visually match the data. Middle and bottom: the $l_{\rm s}$ parameters for the exponential and Gaussian covariance kernel priors are set to their MAP values (see Table \ref{tab:table_map}). The two-point correlation function of an identity regularization prior would be a delta function centered at zero. \label{fig:spiral_merger_two_point} } \end{figure} The full non-linear parameter probability densities for the reconstruction with the Gaussian kernel are shown in Fig. \ref{fig:results_smooth_corner}. The Nested Sampling method \citep{Skilling2004}, whose MultiNest implementation \citep{Feroz2009} is used here, is designed to compute the Bayesian evidence but can still sample the probability distributions at their peak almost as well as an MCMC algorithm. However, had an MCMC method been chosen from the start, it would neither guarantee convergence to the global maximum nor compute the evidence (or it would be extremely inefficient in doing so). The recovered probability distributions for the lens model parameters $b,q,\theta,\gamma,$ and $\phi$ contain the true values within confidence intervals of 1 to 2 $\sigma$. The lens center is systematically offset by approximately half a pixel in the negative x and positive y directions, which is due to a corresponding shift in the PSF's brightest pixel.
There are no degeneracies observed between the parameters, other than the expected $b-q$ correlation from equation (\ref{eq:kappa_sie}) and those between $b-\gamma$, $q-\gamma$, and $\theta-\phi$, which reflect the known degeneracy between the strength and orientation of the SIE and the external shear \citep[e.g. see part 2 of][]{Schneider2006}. The joint probability distribution of $\lambda_{\rm s}$ and $l_{\rm s}$ allows for useful conclusions on the behaviour of the source regularization. Here, there is a very weak anti-correlation between $\lambda_{\rm s}$ and $l_{\rm s}$, which is somewhat expected: increasing the overall regularization parameter $\lambda_{\rm s}$ smooths out the reconstructed source, as does increasing the correlation length $l_{\rm s}$ in the covariance kernel. This anti-correlation will become more prominent in the following, but it is worth pointing out already in this smooth example. Such information will be increasingly helpful in quantifying the degree of degeneracy between more complex sources and perturbed lens potentials in subsequent examples. \subsection{Smooth lens and complex source} \label{sec:smooth_complex} Setting the lens potential to the same smooth parametric model as before, we now change the source brightness profile to more realistic ones taken from observed galaxies. We use high resolution HST archival observations of NGC3982 (a spiral galaxy) and NGC2623 (a merger) taken with the ACS instrument, selected to represent a wider range of possible strongly lensed sources. We scale the source angular size arbitrarily to around 1 arcsec, roughly the same as for the analytic source used in the previous section. The HST images are scaled down dramatically in size and are significantly oversampled compared to the sampling of the final mock data. We take this sub-pixel structure into account by heavily oversampling the mock data by a factor of 10, producing very high resolution lensed images, applying an oversampled PSF, and finally averaging to the final pixel scale: the same square 3.5-arcsec 80-pixel field of view as before. The resulting mock lensed images are shown in Figs. \ref{fig:results_spiral_images} and \ref{fig:results_merger_images}. Fig. \ref{fig:spiral_merger_two_point} shows the two-point correlation function of the HST observations and indicates that the true underlying covariance properties of these two objects can in principle be captured well by the Gaussian and exponential kernel regularization schemes, respectively. Using these schemes in solving equation (\ref{eq:min_source}) imposes a realistic prior on the reconstructed source that is motivated by real observations, as opposed to, for example, curvature regularization, which implicitly imposes a correlation that is unlikely to match the truth. \begin{figure} \includegraphics[width=0.45\textwidth]{ps_both.pdf} \caption{Fourier power spectrum of the model residuals shown in the bottom rows of Figs. \ref{fig:results_spiral_images} (top panel) and \ref{fig:results_merger_images} (bottom panel).} \label{fig:spiral_merger_res_ps} \end{figure} We model the two systems in exactly the same way as in the previous section, i.e. using $n=3$ and the same four regularization schemes. The reconstructed sources, lensed images, and residuals are shown in Figs. \ref{fig:results_spiral_images} and \ref{fig:results_merger_images}, while the MAP and mean parameters and evidence terms are listed in Tables \ref{tab:table_map} and \ref{tab:table_mean}. In Fig.
\ref{fig:spiral_merger_two_point} we compare the radially averaged two-point correlation functions of the unlensed (HST-observed) sources and their reconstructions with the priors imposed by the covariance matrix $C_{\rm s}$. Correlations imposed by curvature regularization have a fixed length (no free parameters) and are quite different from the truth: pixels that are far from each other are much more correlated than, for example, in the case of an exponential kernel, reflecting the implicit smoothness prior. This is a direct consequence of $C_{\rm s}$ being a quite dense matrix: if $H$ is a matrix holding the numerical coefficients for the local curvature of the source, then $C_{\rm s} = (H^T H)^{-1}$, and although $H$, $H^T$, and $H^T H$ are relatively sparse matrices, $(H^T H)^{-1}$ is not. However, the quality of the data is high enough to drive the solution close to the truth regardless of the regularization scheme/assumed prior - the two-point correlation functions for all the reconstructions lie on top of each other\footnote{If the reconstructed sources are interpolated from the adaptive Delaunay grid onto a regular grid with similar resolution, the reconstructions become completely smooth and their two-point correlation functions match the truth and the recovered covariance almost perfectly.} in Fig. \ref{fig:spiral_merger_two_point}. Even the reconstruction using the least physically motivated identity regularization manages to recover the correct correlations of the source, suggesting that the solution is driven by the data and not the prior and therefore is not very degenerate. Nevertheless, the evidence values (see Table \ref{tab:table_mean}) are maximized by the correct regularization scheme in each case, viz. Gaussian for NGC3982 and exponential for NGC2623. Comparing the mean values and confidence intervals of the correlation length parameter, $l_{\rm s}$, to the truth, i.e. those obtained from the observed images (see Fig. \ref{fig:spiral_merger_two_point}), we find good agreement in both cases, despite the MAP value for NGC2623 being quite low (see Fig. \ref{fig:spiral_merger_two_point}). \begin{figure*} \includegraphics[width=\textwidth]{results_perts_smooth_images.pdf} \caption{Same as Fig. \ref{fig:results_smooth_images}, with the addition of the true and reconstructed perturbations $\delta\boldsymbol{\psi}$ as described in Section \ref{sec:model_dpsi}. The bottom left panel shows the difference between the perturbed (top left panel) and unperturbed systems (top left panel of Fig. \ref{fig:results_smooth_images}).} \label{fig:results_perts_images} \end{figure*} Comparing the power spectra shown in Fig. \ref{fig:spiral_merger_res_ps}, we see that the identity regularization performs best, as is the case for the smooth source examined in Section \ref{sec:model_smooth}, which, however, is the result of overfitting. Curvature regularization produces residuals on the large scales (small wavenumber $k$), while the more physically motivated exponential and Gaussian regularizations result in the smallest residuals and at the same time avoid overfitting. Despite the successful modelling of the smooth lens potential and finding the correct source prior, there is still some unmodelled flux in the residuals (towards the SSE in Fig. \ref{fig:results_spiral_images} and towards the N in Fig. \ref{fig:results_merger_images}), which results from using $n=3$ to construct the adaptive grid, a value too high to account for the complex small scale source structure.
Such residuals could erroneously be interpreted as spurious lens potential perturbations when modelling real data - this is examined more closely in Section \ref{sec:both}. \subsection{Modelling potential perturbations} \label{sec:model_dpsi} A lens potential fully described by a parametrized smooth lens model, as examined so far, is an idealization that may rarely hold for real lenses. Therefore, in this section we introduce and model potential perturbations. We adopt the same smooth lens potential used in Sections \ref{sec:model_smooth} and \ref{sec:smooth_complex}, which we perturb using a Gaussian Random Field (GRF) of perturbations $\delta\boldsymbol{\psi}$. GRF perturbations are defined by their power spectrum, which, in this case, we assume to be a power law: \begin{equation} \label{eq:power_spectrum} P(k) = A \; k^{\beta}, \end{equation} where $A$ is the amplitude, associated with the variance of the zero-mean $\delta\boldsymbol{\psi}$ field \citep[for more details see][]{Chatterjee2018,Bayer2018,Chatterjee2019}, $\beta$ is the slope, and $k$ is the wavenumber of the Fourier harmonics. Regardless of our particular choice of GRF perturbations, the generality of the analysis presented here is not affected - in fact, any form of potential perturbations could be used and modelled. \begin{figure} \includegraphics[width=0.45\textwidth]{two_point_perts.pdf} \caption{Radially averaged two-point correlation functions of the true $\delta\boldsymbol{\psi}$ field (circles), the reconstructions from the models shown in Fig. \ref{fig:results_perts_images} (solid lines, see Section \ref{sec:model_dpsi} for details), and different $C_{\mathrm{\delta\psi}}$ priors (dashed lines). The priors for the $\Delta\Psi$-S and ALL models are Gaussian with the $l_{\rm \delta\psi}$ parameter MAP value indicated in parentheses (see Table \ref{tab:table_perts_smooth_map}). The black dotted line is a Gaussian fit to the correlation function of the masked $\delta\boldsymbol{\psi}$ with $l_{\rm \delta\psi} = 0.36$ (using equation \ref{eq:covariance_exp_squared}). The grey dotted line is directly plotted from equation (\ref{eq:two_point_power_law}), i.e. not a fit, with $k_{\mathrm{max}}$ set to the diagonal of the 3.5 arcsec field of view.} \label{fig:results_perts_two_point} \end{figure} \begin{figure} \includegraphics[width=0.45\textwidth]{ps_dpsi_perts.pdf} \caption{Fourier power spectrum of the perturbations shown in the third row of Fig. \ref{fig:results_perts_images}. The dashed lines are fits using equation (\ref{eq:power_spectrum}) with the corresponding parameters listed in Table \ref{tab:perts_GRF_parameters}. The power spectra are computed within the mask.} \label{fig:results_perts_dpsi_ps} \end{figure} \begin{figure} \includegraphics[width=0.45\textwidth]{ps_res_perts.pdf} \caption{Fourier power spectrum of the model residuals shown in the bottom row of Fig. \ref{fig:results_perts_images}. The ``unmodelled'' residuals correspond to the bottom left panel of Fig. \ref{fig:results_perts_images} and quickly drop to the noise level for $k>4$.} \label{fig:results_perts_res_ps} \end{figure} \renewcommand{\arraystretch}{1.2} \begin{table*} \caption{MAP parameter values and corresponding probability terms (from equation \ref{eq:evidence}, same as Table \ref{tab:table_map}). Models $\Delta\Psi$, $\Delta\Psi$-S, CURVATURE, and ALL are described in Section \ref{sec:model_dpsi} and model FFF in Section \ref{sec:both}.
Notice that the dimensions of the parameter space are not the same between the models.} \label{tab:table_perts_smooth_map} \begin{threeparttable} \begin{tabular}{rrrrrrrr} \input{table_pert_map.tex} \end{tabular} \begin{tablenotes}\footnotesize \item [$\dagger$] constant \end{tablenotes} \end{threeparttable} \end{table*} \renewcommand{\arraystretch}{1.4} \begin{table*} \centering \caption{Mean parameter values, 68 per cent confidence intervals, and evidence terms (same as Table \ref{tab:table_mean}). Models $\Delta\Psi$, $\Delta\Psi$-S, CURVATURE, and ALL are described in Section \ref{sec:model_dpsi} and model FFF in Section \ref{sec:both}. The full probability densities for models ALL and FFF are shown in Fig. \ref{fig:results_combined_corner}. Notice that although the dimensions of the parameter space differ between the models, this is taken into account while integrating to calculate the evidence. We do not compare model FFF to any other model, hence its evidence value is omitted.} \label{tab:table_perts_smooth_mean} \begin{tabular}{rrrrrrrr} \input{table_pert_mean.tex} \end{tabular} \end{table*} \begin{figure*} \includegraphics[width=\textwidth]{results_combined_corner.pdf} \caption{Same as Fig. \ref{fig:results_smooth_corner}, including the perturbation parameters $\lambda_{\rm \delta\psi}$, $\boldsymbol{g}_{\rm \delta\psi}$, for the ALL (blue) and FFF (red) models, described in Sections \ref{sec:model_dpsi} and \ref{sec:both} respectively. The two models are actually the same and have the same free parameters, i.e. the smooth potential and regularization parameters for the source and the potential perturbations, but applied to mock data with different source light profiles. The corresponding mean values and 68 per cent confidence intervals are given in Table \ref{tab:table_perts_smooth_mean}.} \label{fig:results_combined_corner} \end{figure*} We generate a single realization of $\delta\boldsymbol{\psi}$ from a GRF having $(\log_{\rm 10} A,\beta) = (-7.8,-5.5)$, in the same $80\times80$ pixel grid as the mock image. Within the masked region of the field of view, the GRF field has slightly different $A$ and $\beta$ parameters (see Table \ref{tab:perts_GRF_parameters} and Fig. \ref{fig:results_perts_two_point}). The resulting perturbations vary in magnitude between roughly $\pm13$ per cent of the average smooth lens potential (within the mask). The source (the same as the one used in Section \ref{sec:model_smooth}), the perturbations, and the corresponding lensed image are shown in the left column of Fig. \ref{fig:results_perts_images}. The difference\footnote{We first subtract the perturbed and unperturbed mock lens images without any noise, and then add an artificial white noise realization with the same signal-to-noise ratio as the unperturbed case.} between the mock data with the purely smooth underlying lens model used in Section \ref{sec:model_smooth} (top left panel in Fig. \ref{fig:results_smooth_images}) and its perturbed version used here (top left panel in Fig. \ref{fig:results_perts_images}) is shown in Fig. \ref{fig:results_perts_images}, bottom left panel. An important and basic observation is that, in order to reconstruct any perturbing $\delta\boldsymbol{\psi}$, there needs to be some lensed light locally around it.
This can be understood by examining matrix $M_{\rm r}$ (equation \ref{eq:combined_M}), which extends the smooth lens modelling framework presented in Section \ref{sec:method} to include potential perturbations: if there is no source light (strictly speaking, if the source light is constant, i.e. its derivative is zero) then the terms $D_{\mathrm{s}}(\boldsymbol{s_{\rm p}})$ introduced in equation (\ref{eq:dpsi_residuals}), and consequently the entire perturbing part of $M_{\rm r}$, vanish. The $\delta\boldsymbol{\psi}$ are then reconstructed based mainly on the regularization prior. As a result, in general, the further a reconstructed $\delta\psi$ value is from pixels with some lensed light in them, the less accurate its estimate based on the data becomes. In the following, we do not attempt to mitigate this and our reconstructed $\delta\boldsymbol{\psi}$ away from pixels with brightness should be viewed as an extrapolation regularized by the prior. A similar argument holds for the smooth potential as well. The covariance matrix of a GRF field is derived from its two-point correlation function, which is simply the inverse Fourier transform of its power spectrum. For a GRF with a power law power spectrum, like the one given in equation (\ref{eq:power_spectrum}), the two-point correlation function is: \begin{equation} \label{eq:two_point_power_law} \xi(r) = 2 \pi A J_{\mathrm{0}}(k_{\mathrm{max}}\, r) \, k_{\mathrm{max}}^{\beta+2}, \end{equation} where $J_{\mathrm{0}}$ is the zeroth order Bessel function of the first kind, and $k_{\mathrm{max}}$ the maximum wavenumber. However, the mask truncates the GRF and changes its covariance properties so that the above relation cannot be used to construct a regularization kernel anymore. In this case, the Gaussian kernel provides a sufficiently good approximation for the two-point correlation function, as shown in Fig. \ref{fig:results_perts_two_point}. To model the perturbed system, we use a Gaussian regularization kernel for both $\boldsymbol{s}$ and $\delta\boldsymbol{\psi}$, and $n=3$ for constructing the adaptive source grid. The size of the pixel grid on which the perturbations $\delta\boldsymbol{\psi}$ are reconstructed, together with $n$, sets the number of free parameters of any model; both can be selected by maximizing the Bayesian evidence \citep{Vegetti2012}. However, this is outside the scope of this work - and a computationally very demanding task. We use a $30\times30$ pixel grid for $\delta\boldsymbol{\psi}$, which has enough resolution to capture the details of the true underlying GRF perturbations while still leading to tractable computations \citep[such a grid has also been used in the case of a single perturbing substructure, e.g.][]{Koopmans2005,Vegetti2012}. We model the perturbed lens in three different set-ups: i) we fix the smooth lens model to the truth and the source regularization parameters to the mean values of the Gaussian kernel model obtained in Section \ref{sec:model_smooth} (see Table \ref{tab:table_mean}) and we sample only $\lambda_{\rm \delta\psi}$,$\boldsymbol{g}_{\rm \delta\psi}$ (model $\Delta\Psi$), ii) we fix the smooth lens model to the truth and sample both $\lambda_{\rm s}$,$\boldsymbol{g}_{\rm s}$ and $\lambda_{\rm \delta\psi}$,$\boldsymbol{g}_{\rm \delta\psi}$ (model $\Delta\Psi$-S), and iii) we sample $\boldsymbol{\eta}$, $\lambda_{\rm s}$, $\boldsymbol{g}_{\rm s}$, $\lambda_{\rm \delta\psi}$, and $\boldsymbol{g}_{\rm \delta\psi}$ simultaneously (model ALL). Fig.
\ref{fig:results_perts_images} shows the resulting lensed images, reconstructed $\boldsymbol{s}$ and $\delta\boldsymbol{\psi}$, and residuals, Table \ref{tab:table_perts_smooth_map} lists the MAP model parameters and the posterior probability terms from equation (\ref{eq:evidence}), and Table \ref{tab:table_perts_smooth_mean} lists the mean parameter values, their 68 per cent confidence intervals, and the evidence for each set-up. Models $\Delta\Psi$ and $\Delta\Psi$-S give almost identical results. Models $\Delta\Psi$-S and ALL recover a similar correlation length for the source, in very good agreement with the unperturbed case presented in Section \ref{sec:model_smooth} - this is also true for the parameters $\boldsymbol{\eta}$ recovered by the ALL model. The correlation length of the perturbations, $l_{\rm \delta\psi}$, has a very similar value for all the models; the values from $\Delta\Psi$ and $\Delta\Psi$-S and the corresponding covariance matrices, $C_{\rm \delta\psi}$, are in fact so close that their determinants differ by very little (see Table \ref{tab:table_perts_smooth_map}). \begin{figure*} \centering \includegraphics[width=\textwidth]{results_both_images_top.pdf} \caption{Same as Fig. \ref{fig:results_perts_images} for NGC2623. The bottom left panel shows the difference between the perturbed (top left panel) and unperturbed systems (top left panel of Fig. \ref{fig:results_merger_images}). We list the free parameters of each model in parentheses next to its name at the top (see Section \ref{sec:both} for details).} \label{fig:results_both_images} \end{figure*} \begin{figure*} \includegraphics[width=\textwidth]{results_both_images_bottom.pdf} \contcaption{} \end{figure*} \begin{figure} \includegraphics[width=0.45\textwidth]{two_point_both.pdf} \caption{Radially averaged two-point correlation functions of the $\delta\boldsymbol{\psi}$ reconstructions from the FFF and FFF-MAP $n=2$ models (see Section \ref{sec:both}). We include the prior (dashed lines), with the $l_{\rm \delta\psi}$ parameter for the Gaussian covariance kernel set to its MAP value, i.e. 0.285 (see Table \ref{tab:table_perts_smooth_map}). The true two-point correlation functions of the full GRF (grey circles) and the one within the mask (black circles) are shown, together with a Gaussian fit to the latter with $l_{\rm \delta\psi} = 0.36$ (using equation \ref{eq:covariance_exp_squared}, dotted black line) and equation (\ref{eq:two_point_power_law}) with $k_{\mathrm{max}}$ set to the diagonal of the 3.5 arcsec-wide image (dotted grey line, not a fit).} \label{fig:results_both_two_point} \end{figure} \begin{figure} \includegraphics[width=0.45\textwidth]{ps_dpsi_both.pdf} \caption{Fourier power spectrum of some of the $\delta\boldsymbol{\psi}$ reconstructions shown in the third row of Fig. \ref{fig:results_both_images}. The dashed lines are fits using equation (\ref{eq:power_spectrum}) with the corresponding parameters listed in Table \ref{tab:perts_GRF_parameters}. The power spectra are computed within the mask.} \label{fig:results_both_dpsi_ps} \end{figure} \begin{figure} \includegraphics[width=0.45\textwidth]{ps_res_both.pdf} \caption{Fourier power spectrum of the residuals shown in the bottom row of Fig. \ref{fig:results_both_images} for the models used in Fig. \ref{fig:results_both_dpsi_ps}. The ``unmodelled'' residuals correspond to the bottom left panel of Fig.
\ref{fig:results_both_images} and drop to the noise level for $k>7$.} \label{fig:results_both_res_ps} \end{figure} To further investigate the effect of the prior on the lens potential perturbations, we evaluate a model using curvature regularization for $\delta\boldsymbol{\psi}$. To do this, we fix $\boldsymbol{\eta}$ to their true values and sample $\lambda_{\rm s}$, $\boldsymbol{g}_{\rm s}$, and $\lambda_{\rm \delta\psi}$ (there are no $\boldsymbol{g}_{\rm \delta\psi}$ parameters in this case). First, we notice that the values of $\lambda_{\rm s}$ and $l_{\rm s}$ are almost identical to those of the $\Delta\Psi$-S model; however, the evidence is much lower, despite the latter model having an additional free parameter. In Fig. \ref{fig:results_perts_two_point}, we show the two-point correlation function from this model and compare it with the one from the true underlying $\delta\boldsymbol{\psi}$ field and the reconstructions from the $\Delta\Psi$-S and ALL models. It is evident that in this case it is the data, and not the prior, that is driving the $\delta\boldsymbol{\psi}$ reconstruction. In Fig. \ref{fig:results_perts_dpsi_ps} we show the power spectra of the reconstructions and in Table \ref{tab:perts_GRF_parameters} we list the coefficients of the corresponding fits using equation (\ref{eq:power_spectrum}). The connection between the slope of the power spectrum and stronger large scale correlations is evident: the flattest power spectrum belongs to the model with curvature regularization, while the slope decreases as the correlation function becomes narrower (or $l_{\delta\psi}$ becomes smaller), first for the $\Delta\Psi$-S and then for the ALL model. We note, however, that although $\Delta\Psi$-S gives the value for the amplitude closest to the truth, its parameters $\boldsymbol{\eta}$ are fixed to the true underlying smooth model (a quite unrealistic scenario), which means that the dimensions of the parameter space to explore are significantly fewer compared to the ALL model. The ALL model has the smooth potential parameters $\boldsymbol{\eta}$ free, which in principle could absorb part of the perturbations. However, as discussed in Appendix \ref{app:A}, this is not the case. The fitted smooth potential model is very close to the truth, meaning that any differences between the true total and reconstructed potentials are mostly due to the $\delta\boldsymbol{\psi}$. A parametric-only, purely smooth model is also evaluated, which is obviously insufficient to correctly model the lens, leading to biased values of $\boldsymbol{\eta}$ and reconstructed $\boldsymbol{s}$, and prominent residuals above the noise level (bottom right panel in Fig. \ref{fig:results_perts_images}). These residuals are lower in amplitude and different from the (unmodelled) residuals between the smooth and perturbed data (bottom left panel in Fig. \ref{fig:results_perts_images}), having a correlation coefficient of $0.26$. This means that the perturbations are absorbed into the smooth model parameters and the source to some extent, but not fully \citep[see][for a thorough exploration of this effect]{Bayer2021}. This can be seen in the residual power spectrum, shown in Fig. \ref{fig:results_perts_res_ps}, where the ``unmodelled'' residuals that appear on the large scales have significant power (above the noise) for $k<4$, while the smooth model residuals have 2 to 7 times less power in the same range, yet still 2 to 7 times more than the noise.
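The kind of power spectrum measurement and power-law fit used throughout this section can be sketched with a few lines of standard tooling; the sketch below is our own simplified illustration (the function names, binning, and normalization conventions are arbitrary choices, and the actual analysis may differ in such details), computing an azimuthally averaged Fourier power spectrum of a masked map and fitting equation (\ref{eq:power_spectrum}) to it in log-log space:
\begin{verbatim}
import numpy as np

def radial_power_spectrum(img, mask, pixel_size, nbins=20):
    # Azimuthally averaged Fourier power spectrum of `img` within the
    # boolean `mask`; pixels outside the mask are set to zero.
    data = np.where(mask, img - img[mask].mean(), 0.0)
    P2 = np.abs(np.fft.fftshift(np.fft.fft2(data)))**2 / mask.sum()
    ky = np.fft.fftshift(np.fft.fftfreq(img.shape[0], d=pixel_size))
    kx = np.fft.fftshift(np.fft.fftfreq(img.shape[1], d=pixel_size))
    KX, KY = np.meshgrid(kx, ky)
    k = np.hypot(KX, KY)
    edges = np.linspace(k[k > 0].min(), k.max(), nbins + 1)
    idx = np.digitize(k.ravel(), edges)
    Pk = np.array([P2.ravel()[idx == i].mean() for i in range(1, nbins + 1)])
    kc = 0.5 * (edges[1:] + edges[:-1])     # bin centres
    return kc, Pk

def fit_power_law(k, Pk):
    # Least-squares fit of log10 P = log10 A + beta * log10 k,
    # ignoring empty or non-positive bins.
    good = np.isfinite(Pk) & (Pk > 0)
    beta, logA = np.polyfit(np.log10(k[good]), np.log10(Pk[good]), 1)
    return 10.0**logA, beta
\end{verbatim}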
\renewcommand{\arraystretch}{1.0} \begin{table} \centering \caption{Power law fits from equation (\ref{eq:power_spectrum}) to the power spectra of the true and reconstructed $\delta\boldsymbol{\psi}$ shown in Figs. \ref{fig:results_perts_dpsi_ps} (top part, Section \ref{sec:model_dpsi}) and \ref{fig:results_both_dpsi_ps} (bottom part, Section \ref{sec:both}). The $\Delta\Psi$ and $\Delta\Psi$-S models give identical fits.} \label{tab:perts_GRF_parameters} \begin{tabular}{rrr} & $\log_{\rm 10} A$ & $\beta$ \\ \hline True $\delta\boldsymbol{\psi}$ (masked) & $-7.20\pm0.01$ & $-3.52\pm0.02$ \\ CURVATURE & $-6.73\pm0.03$ & $-1.40\pm0.03$ \\ $\Delta\Psi$/$\Delta\Psi$-S & $-7.07\pm0.02$ & $-3.15\pm0.07$ \\ ALL & $-7.87\pm0.03$ & $-3.42\pm0.10$ \\ \hline True $\delta\boldsymbol{\psi}$ (masked) & $-7.21\pm0.01$ & $-3.43\pm0.03$ \\ XXX n=3 & $-7.19\pm0.02$ & $-3.29\pm0.09$ \\ XXF-curv & $-5.18\pm0.02$ & $-1.69\pm0.07$ \\ FFF & $-7.16\pm0.02$ & $-3.26\pm0.09$ \\ FFF-MAP n=2 & $-7.46\pm0.02$ & $-2.75\pm0.07$ \\ \end{tabular} \end{table} In Fig. \ref{fig:results_combined_corner}, we show the full non-linear parameter probability densities for the ALL model. In general, the parameters $\boldsymbol{\eta}$ are distributed similarly to Fig. \ref{fig:results_smooth_corner} but with larger statistical uncertainty. A systematic bias is introduced in $b$, with lower values becoming more probable, because the inclusion of perturbations $\delta\boldsymbol{\psi}$ can now absorb some of the overall strength of the lens potential. Similarly, the presence of the perturbations causes $x_0$ to be offset by one pixel instead of half a pixel, as was the case in Section \ref{sec:model_smooth}. The same degeneracies are observed as in Fig. \ref{fig:results_smooth_corner} between the parameters $b-q$, $b-\gamma$, $q-\gamma$, and $\theta-\phi$. The latter two have a bi-modal distribution with an extent of roughly $\pm5^\circ$. Such small angular offsets between the SIE and the external shear can be understood in terms of the smoothness of the source, which allows for the perturbing field $\delta\boldsymbol{\psi}$ to make up for the difference and still provide solutions with high probability (low residuals). There are no correlations between $\boldsymbol{\eta}$ and the regularization parameters for the source or the potential perturbations, nor between the latter two. However, we observe again the expected anti-correlation between $\lambda_{\rm s}$ and $l_{\rm s}$ and a similar one between $\lambda_{\rm \delta\psi}$ and $l_{\rm \delta\psi}$ (better shown in Fig. \ref{fig:app_all_corner}), i.e. increasing the overall regularization parameters $\lambda$ smooths out the reconstructed fields, as does increasing the correlation length $l$ in the covariance kernels. \subsection{Perturbed lenses and complex sources} \label{sec:both} In reality, we expect complex sources to be lensed by non-smooth lens potentials. Here we combine the perturbed lens potential from the previous section with the complex brightness profile of NGC2623 (a merger) used as source in Section \ref{sec:smooth_complex}. The resulting lensed images are shown in the left column of Fig. \ref{fig:results_both_images}.
Although such a lensing scenario could be unrealistically complex - lensing of merging galaxies is not very probable - it serves as an extreme scenario for degeneracies to emerge as a result of the non-linear behaviour approximated by matrix $M_{\rm r}$; from equation (\ref{eq:expanded_dsdpsi}), perturbed deflection angles are associated with incoming rays from a highly structured source, and this information can be lost within the finite resolution of the mock data considered in our examples. We model the system by fixing the regularization kernels to the best-performing ones, i.e. an exponential kernel for the source (see Section \ref{sec:smooth_complex}) and a Gaussian for the perturbations (see Section \ref{sec:model_dpsi}). We reconstruct $\delta\boldsymbol{\psi}$ in the same $30\times30$-pixel grid as before, and use $n=3$ for the adaptive source grid, unless otherwise stated. For each of the models presented in Fig. \ref{fig:results_both_images} we either fix (X) or set free (F) each of the three parameter sets $\boldsymbol{\eta},(\lambda_{\rm s},l_{\rm s}),(\lambda_{\rm \delta\psi},l_{\rm \delta\psi})$ and name it accordingly, e.g. model XFX has only ($\lambda_{\rm s},l_{\rm s}$) free to vary. For the fixed values of the parameters we have: the true values for $\boldsymbol{\eta}$ (e.g. see Section \ref{sec:model_smooth} or Table \ref{tab:table_map}), $l_{\mathrm{s}}=0.15$ and $l_{\rm \delta\psi}=0.36$, which are the values fitted to the true source and perturbations as shown in Figs. \ref{fig:spiral_merger_two_point} and \ref{fig:results_perts_two_point}, $\lambda_{\rm s}=44.031$, the mean value from Section \ref{sec:smooth_complex} (see Table \ref{tab:table_mean}), and $\lambda_{\rm \delta\psi}=86780.1$, the mean value from the ALL model presented in Section \ref{sec:model_dpsi} (see Table \ref{tab:table_perts_smooth_mean}). In the first part of Fig. \ref{fig:results_both_images}, we show two models with all the parameters fixed to the truth, one with $n=3$ and one with $n=2$, and three models with only one parameter set allowed to vary. We first note that there is very little difference in the residuals and the reconstructed $\delta\boldsymbol{\psi}$ between the fixed models (despite the many more source pixels for the case with $n=2$) and the one with the source parameters free (XFX). However, allowing the $\delta\boldsymbol{\psi}$ regularization parameters to vary leads to a worse reconstruction and the residuals increase. This is even more prominent if we change the $\delta\boldsymbol{\psi}$ regularization from a Gaussian to a curvature kernel. These $\delta\boldsymbol{\psi}$ solutions have too much structure (low regularization) because they may actually be overcompensating for the low resolution of the adaptive source grid. In the second part of Fig. \ref{fig:results_both_images}, we see that the residuals and the $\delta\boldsymbol{\psi}$ reconstruction do not improve if we set both the perturbation and source regularization parameters free (i.e., compare models XXF and XFF). As soon as we allow $\boldsymbol{\eta}$ to vary, the residuals do decrease, at the cost of a less smooth $\delta\boldsymbol{\psi}$ reconstruction. This is regardless of fixing the source regularization parameters - models FXF and FFF give very similar results.
However, the adaptive grid resolution affects the residuals: after fixing all parameters to the MAP values from the FFF model, we set $n=2$, and although the $\delta\boldsymbol{\psi}$ reconstruction does not improve much, the residuals do (see also Fig. \ref{fig:results_both_res_ps}); in particular, the prominent positive residuals due north of the lens in models FXF and FFF decrease considerably. This is most likely due to the additional degrees of freedom available for the source, which is further supported by a smooth model with $n=1$ that absorbs the perturbations almost down to the noise level. Looking at the two-point correlation functions of the reconstructed $\delta\boldsymbol{\psi}$ shown in Fig. \ref{fig:results_both_two_point}, we note that the prior and the data lie close to each other, which accordingly drives the FFF model. Increasing the adaptive grid resolution leads to somewhat stronger correlations on the larger scales and brings the reconstructed $\delta\boldsymbol{\psi}$ even closer to both the prior and the data. However, in Fig. \ref{fig:results_both_dpsi_ps}, and from the fitted coefficients listed in Table \ref{tab:perts_GRF_parameters}, there is a remarkable agreement between the power spectrum of the FFF model (all the parameters free) and the true $\delta\boldsymbol{\psi}$. The same holds for the reconstructed $\delta\boldsymbol{\psi}$ of the XXX $n=3$ model that has all the parameters fixed to their true values. Hence, despite their different appearance (see the third row of panels in Fig. \ref{fig:results_both_images}) the reconstructed $\delta\boldsymbol{\psi}$ of the FFF (and XXX $n=3$) models have an almost identical power spectrum to the truth. We also note that the residual power spectrum of the FFF and the XXX $n=3$ models, shown in Fig. \ref{fig:results_both_res_ps}, is very similar, with both models being above the noise on the large scales ($k<5$). Curvature regularization is clearly a bad prior for the GRF $\delta\boldsymbol{\psi}$ as it leads to prominent residuals, even larger than the (unmodelled) difference between the perturbed and unperturbed mock systems (see Fig. \ref{fig:results_both_res_ps}), and to more extreme values of the reconstructed $\delta\boldsymbol{\psi}$ (see Fig. \ref{fig:results_both_dpsi_ps} and Table \ref{tab:perts_GRF_parameters}). Completely ignoring the existence of any perturbations and modelling the system with a purely smooth model with $n=1$ can reach the noise level (see Fig. \ref{fig:results_both_res_ps}). This is clearly a biased solution that could model away substructure or deviations from the smooth potential. In Fig. \ref{fig:results_combined_corner} we compare the full non-linear parameter probability densities of the FFF model presented here to the ALL model presented in Section \ref{sec:model_dpsi}. Its MAP and mean parameter values and the 68 per cent confidence intervals are listed in Tables \ref{tab:table_perts_smooth_map} and \ref{tab:table_perts_smooth_mean}. The two models are actually the same but applied to different data, i.e. with a different source light profile. We can observe three main characteristics of the distributions: i) smaller statistical uncertainties, ii) larger systematic biases, and iii) fragmentation of the probability surfaces, with various local maxima separated by valleys and saddles, giving rise to a complex parameter space configuration. The latter reflects the complex and degenerate underlying lens potential perturbations and source brightness profile.
The smooth lens potential parameters $\boldsymbol{\eta}$ are correlated in the same way as before but the biases are more significant. The SIE potential strength $b$ is pushed to even lower values as the $\delta\boldsymbol{\psi}$ are now stronger (e.g. compare the reconstructed MAP perturbations between the ALL and the FFF models in Figs. \ref{fig:results_perts_images} and \ref{fig:results_both_images} respectively), $x_0$ and $y_0$ are offset by approximately 1 pixel, and $q$ and $\gamma$ lie several $\sigma$ away from their true values. Only the angles $\theta$ and $\phi$ are not biased and are in fact less degenerate than in the ALL model, i.e. their distributions are not bi-modal anymore. This is because of the more detailed structure in the source, which cannot be well accounted for by the perturbing field $\delta\boldsymbol{\psi}$ for tilted smooth potentials. All of the regularization parameters have broader distributions except $\lambda_{\rm \delta\psi}$, which is more narrowly distributed around values 3-4 times smaller than in the ALL model. This means that more structured and larger-amplitude $\delta\boldsymbol{\psi}$ reconstructions are expected, which is indeed the case as shown in Fig. \ref{fig:results_both_images}. A very strong anti-correlation is observed between the regularization strengths, $\lambda$, and correlation lengths, $l$, in the covariance kernels for both the source and the perturbations. Finally, the complex probability surfaces between the source and potential perturbation regularization parameters (see also Fig. \ref{fig:app_fff_corner}) mean that the two are quite degenerate. The smaller values of $\lambda_{\rm \delta\psi}$ in combination with the broader $l_{\rm s}$ distribution towards higher values indicate that the complexity of the source brightness is absorbed by the potential perturbations. \section{Discussion} \label{sec:discussion} Higher order statistical properties of the brightness profiles of gravitationally lensed galaxies can be incorporated in the semi-linear inversion technique through regularization priors based on physically motivated covariance kernels. In this work, we created mock gravitational lenses using NGC3982 (a spiral) and NGC2623 (a merger) as sources, whose covariance is well-described by a Gaussian and exponential covariance kernel respectively. We found that these physically motivated priors outperform other traditionally used regularization schemes, such as identity and curvature, and we can model each system down to the noise level in almost all cases while simultaneously avoiding overfitting (some residuals remain in the case of perturbed potentials). Using generic covariance priors comes at the cost of introducing additional non-linear parameters (in this case, the correlation length $l_{\rm s}$; see equations \ref{eq:covariance_exp} and \ref{eq:covariance_exp_squared}). Our modelling framework can handle these new parameters and determine their full probability distribution jointly with the other non-linear parameters (e.g. the smooth mass model parameters, $\boldsymbol{\eta}$) at the cost of a now denser source covariance matrix, $C_{\rm s}$, that needs to be inverted (e.g. see equation \ref{eq:min_r}), and slower convergence due to increasing the dimensions of the non-linear parameter space that needs to be explored. However, here we used logarithmic priors on a wide range of $l_{\rm s}$, which might be a conservative choice.
One could use observationally driven estimates of $l_{\rm s}$ (or other covariance kernel parameters) derived from populations of putative lensed sources, e.g. constructed from samples of observed lenses, in order to narrow down the parameter space and speed up the modelling process. In fact, we performed such a test by fixing $l_{\rm s} = 0.21$ for NGC3982, a value well-justified by the observations (see Fig. \ref{fig:spiral_merger_two_point}), and remodelling the corresponding mock lens, achieving a much faster convergence to the same result. The quality of the data, viz. high signal-to-noise and resolution, plays a major role in finding an acceptable solution for the source, regardless of the choice of prior, observationally motivated or not, on the source brightness profile. In the cases examined in Section \ref{sec:smooth_complex}, the data are of sufficiently good quality to drive the solution close to the truth for all tested regularization schemes. For NGC2623, the recovered $l_{\rm s}$ parameter for the case with an exponential covariance kernel - the one matching the true source - lies further than 3$\sigma$ from the truth, despite having the highest evidence. The reverse statement, viz. whether the use of a (correct) prior becomes more important in the case of degraded/noisy data, is yet to be systematically explored. This is particularly relevant for upcoming surveys, such as Euclid and LSST, which are expected to have lower angular resolution than what we examined here. However, our method does prefer the models with the correct priors based on the Bayesian evidence, for the adopted observational setup. Once perturbations to the lensing potential are introduced, we need to approach the problem in a different way. We demonstrated that the effect of $\delta\boldsymbol{\psi}$ can be absorbed in the reconstructed source, especially if the adaptive grid resolution is set to the highest ($n=1$, a common choice), and can lead to wrong results for the model parameters, $\boldsymbol{\eta}$, and the source, $\boldsymbol{s}$. This, in turn, leads to spurious structures in the model residuals, unrelated to the original $\delta\boldsymbol{\psi}$, which can be misinterpreted as the effect of a perturbing field of mass substructure \citep[see also][for another study on this]{Chatterjee2019}. Hence, a two-step approach of first running a parametric smooth model to constrain $\boldsymbol{\eta}$ and then modelling the perturbations $\delta\boldsymbol{\psi}$ would be unreliable \citep[unless lower choices for $n$ are used, e.g. see][]{Bayer2021}. The extent to which the above statement holds for perturbed lenses with varying $\delta\boldsymbol{\psi}$ properties, as well as for concentrated massive substructures, remains to be explored. Nevertheless, we showed that simultaneously solving for $\boldsymbol{\eta}$, $\boldsymbol{s}$, and $\delta\boldsymbol{\psi}$ gives accurate results in a self-consistent manner. Attempting to reconstruct the perturbing $\delta\boldsymbol{\psi}$ requires a regularizing term (prior) in addition to the one for the source. In contrast to the case of smooth potentials, where the data quality is good enough to drive the source reconstructions to solutions with the desired statistical properties regardless of which regularization scheme is used (see Fig. \ref{fig:spiral_merger_two_point}), the data alone are not sufficient and the form of regularization/prior seems to play a major role in reconstructing $\delta\boldsymbol{\psi}$.
Here we examined specifically the curvature and Gaussian covariance kernels, in connection with our choice of a GRF as the true underlying $\delta\boldsymbol{\psi}$. The traditionally used curvature regularization is less flexible as it imposes fixed, long-range correlations (see Fig. \ref{fig:results_perts_two_point}), which are in fact stronger than they should be and irrecoverably lead to unphysically smooth solutions, seemingly regardless of the quality of the data. The covariance of our assumed GRF, however, can be well approximated by a Gaussian kernel (see Fig. \ref{fig:results_perts_two_point}), but in real galaxies the true covariance of potential perturbations is unknown. More flexibility could be achieved by assuming a covariance kernel described by a number of free parameters, e.g. a Mat\'{e}rn kernel \citep[e.g.][]{Mertens2017,Vernardos2020}, or even a free-form two-point correlation function. In addition, theoretically justified $\delta\boldsymbol{\psi}$ priors could be derived based on dark matter models or N-body hydrodynamical simulations. Our method allows for a thorough and quantitative exploration of how different regularization schemes on the $\delta\boldsymbol{\psi}$, as well as on the source, can affect the quality of the reconstructions, eventually ranking them by their Bayes factors. In Sections \ref{sec:model_dpsi} and \ref{sec:both} we fully model the smooth potential, source, and perturbations in two example cases whose only difference is the brightness profile of the source, i.e. the smooth lens potential and the perturbative field of $\delta\boldsymbol{\psi}$ remain the same. Our optimization strategy (described in Section \ref{sec:optimization}) works quite well, but the extent of statistical uncertainty and systematic biases in the recovered parameters $\boldsymbol{\eta}$, as well as the degeneracy between the regularization parameters for the source and the perturbations, depends on the complexity of the source brightness profile. In the case of the complex source presented in Section \ref{sec:both}, the entire parameter space becomes more structured and degenerate (see Fig. \ref{fig:app_fff_corner}) and systematic biases increase (see Fig. \ref{fig:results_combined_corner}). Most importantly, smoother sources become more compatible with the data and the freedom of the perturbing $\delta\boldsymbol{\psi}$ is increased (i.e. its smoothness reduced), which leads to the latter absorbing the structure of the source. The overall amplitude of $\delta\boldsymbol{\psi}$ is also larger, pushing the strength of the smooth potential (parameter $b$) to lower values. These observations explain why the reconstructed $\delta\boldsymbol{\psi}$ from the FFF model in Fig. \ref{fig:results_both_images} do not visually match the true GRF very well, but despite this the power spectrum is recovered remarkably well (see Figs. \ref{fig:results_perts_dpsi_ps}, \ref{fig:results_both_dpsi_ps}, and Table \ref{tab:perts_GRF_parameters}). The visual differences of the reconstructed $\delta\boldsymbol{\psi}$ compared to the truth (see the FFF and ALL reconstructions in Figs. \ref{fig:results_both_images} and \ref{fig:results_perts_images}, respectively) could be understood in terms of the ``light-constrains-mass'' effect, which we explain here. Within the framework of our method, but also more generally, it is important to clarify how $\delta\boldsymbol{\psi}$ is constrained where the lensed source brightness, and/or, more precisely, the gradient of the source is low or zero. 
Obviously, in such areas using equation (\ref{eq:dpsi_residuals}) to model brightness residuals becomes problematic; the $D_{\rm s}$ operator, which holds the derivatives of the source at the source plane (deflected) location of the given image pixel(s), becomes zero. Hence, in order to obtain a reconstruction across the entire field of view (or even within a mask), the regularization becomes important, particularly where there is low/no source flux. This is analogous - but not exactly - to reconstructing the source brightness on pixels that are not constrained by the data, as could be the case in a fixed grid model. Taking the realization of the GRF $\delta\boldsymbol{\psi}$ field that we used as an example (third-row panel on the left of Fig. \ref{fig:results_perts_images}), the success of our reconstructions depends on how much of the source flux eventually ends up in those crucial areas of the lens plane that have the largest gradients (largest deflection angles). This could play a role in the more degenerate results of the FFF model and its $\delta\boldsymbol{\psi}$ power spectrum amplitude difference with the ALL model (see Figs. \ref{fig:results_perts_dpsi_ps}, \ref{fig:results_both_dpsi_ps}, and Table \ref{tab:perts_GRF_parameters}). This could be mitigated by reconstructing the $\delta\boldsymbol{\psi}$ within a carefully selected region of the lens plane around the lensed source brightness, possibly weighted by the values of the operator $D_{\rm s}$. However, determining the extent of this ``light-constrains-mass'' area may introduce another possible source of degeneracy: the gradient of $\delta\boldsymbol{\psi}$, which is in fact the deflection angle, also enters equation (\ref{eq:dpsi_residuals}), and for any pixel with some given lensed source brightness, regions having the same gradient, e.g. large density differences that lie further away or smaller density differences being closer, can have the same effect. \section{Conclusions} \label{sec:conclusions} We explored the effect of regularization while reconstructing both the source and potential perturbations using the semi-linear inversion technique. Below we summarize the conclusions from this work and outline future directions of application and improvement. \begin{itemize} \item Physically motivated priors for the source galaxies, such as Gaussian and exponential kernels, lead to better results than traditional choices, such as identity and curvature regularization. \item Curvature regularization, a traditionally popular choice, is fundamentally unsuitable as a prior for the GRF $\delta\boldsymbol{\psi}$ perturbations that we examined here. \item The source alone can absorb the structure created by $\delta\boldsymbol{\psi}$ almost down to the noise, especially if a high resolution adaptive grid is used (low value of $n$). This leads to biased source reconstructions and parameters for the smooth potential \citep[see also][]{Bayer2018,Bayer2021,Chatterjee2019}. \item The statistical properties of the $\delta\boldsymbol{\psi}$, particularly the power spectrum, are recovered remarkably well, both for smooth and more complex sources. \end{itemize} Our study constitutes an initial exploration and test of our new code implementation, and as such we restricted ourselves to the four distinct and incrementally more complex examples presented in Section \ref{sec:results}. 
The successful outcome of this study enables further and more in-depth investigations of potential perturbation reconstructions in lensed systems. We propose, but do not limit ourselves to, the following directions of future research: \begin{enumerate} \item Here we used a specific GRF as the perturbing field, with specific amplitude ($\approx13$ per cent of the smooth potential) and slope, which we believe is an extreme case, pushing the validity of the approximation of equation (\ref{eq:dpsi_residuals}) to its limit. The type (GRF or other), as well as the associated parameter space of the perturbing field, can now be explored in more depth, for different smooth potentials and sources. \item One such case of particular interest would be using isolated massive perturbers as the perturbing $\delta\boldsymbol{\psi}$, and determining how the conclusions of this work apply to it, e.g. comparing to the work of \citet{Vegetti2009a}. \item We have identified an interplay between data quality and priors in determining the best model, which needs to be explored in both directions: at which level of resolution and/or signal-to-noise ratio the data are driving the solution and the prior begins to play a secondary role, and vice versa. \item Our $\delta\boldsymbol{\psi}$ reconstructions away from pixels that contain most of the lensed source flux are constrained mostly by the prior - what we described as the ``light-constrains-mass'' effect. A weighted scheme - similar to adaptive regularization - could be devised to suppress terms in $D_{\rm s}$ appearing in equation (\ref{eq:dpsi_residuals}) that are very low or zero. \end{enumerate} Finally, our new implementation of the method, the Very Knotty Lenser code, is made publicly available\footnote{\url{https://github.com/gvernard/verykool}}. \section*{Data availability} The data that support the findings of this study are openly available on GitHub at \url{https://github.com/gvernard/verykool}. \section*{Acknowledgements} GV and LVEK were supported through an NWO-VICI grant (project number 639.043.308). GV has received additional funding from the European Union's Horizon 2020 research and innovation programme under the Marie Sk\l{}odowska-Curie grant agreement No 897124. \bibliographystyle{mnras}
{ "redpajama_set_name": "RedPajamaArXiv" }
3,286
Q: Protocol buffer Database abstraction framework Has anyone heard of an enterprise-grade database abstraction layer that builds on Google Protocol Buffers? I can foresee such a DB toolset would have great possibilities, from mobile computing all the way through to enterprise system development. A: I reckon any key-value store (e.g. Redis) will do? Maybe Riak would be a decent candidate as it provides a protobuf API. Eventually all you need to do is handle serialization and deserialization, but that should be a rather thin layer on top of a lightweight client which does not attempt to do any of that for you.
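To illustrate how thin that layer can be, here is a rough Python sketch (the person.proto schema, the generated person_pb2 module, and the key names are made up for the example) that stores and fetches a protobuf message in Redis:

# person.proto (hypothetical), compiled with: protoc --python_out=. person.proto
#   syntax = "proto3";
#   message Person { string name = 1; int32 age = 2; }

import redis                      # pip install redis
from person_pb2 import Person     # generated by protoc; module name is an assumption

r = redis.Redis(host="localhost", port=6379)

def save(key, person):
    # SerializeToString() yields the compact protobuf wire format (bytes)
    r.set(key, person.SerializeToString())

def load(key):
    person = Person()
    data = r.get(key)
    if data is not None:
        person.ParseFromString(data)  # inverse of SerializeToString()
    return person

save("person:1", Person(name="Ada", age=36))
print(load("person:1").name)

Everything database-specific stays in the two one-line calls to the client; schema evolution and cross-language support come from protobuf itself.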
{ "redpajama_set_name": "RedPajamaStackExchange" }
5,140
\section{Introduction: LARES 2 and frame-dragging} \label{intro} The main objective of the LARES 2 space experiment is a measurement of general relativistic frame-dragging \cite{bib1,bib2,bib3,bib4,bib5,bib5bis,bib6} with an accuracy of approximately 0.2\%, approaching parts in a thousand as justified both in terms of the error budget calculated below and on the basis of Monte Carlo simulations and covariance analyses presented in the next paper (Paper II: \cite{bib7}). Thus LARES 2 will basically improve by about an order of magnitude the best measurements of frame-dragging that can be achieved by the LARES space mission only (without LARES 2) \cite{bib8,bib9,bib10,bib11,bib12,bib13,bib14,bib15,bib16}. In addition LARES 2 will provide important contributions in Space Geodesy and Geodynamics, as described below. Frame-dragging is the change of orientation of the axes of local inertial frames where the equivalence principle, at the basis of General Relativity, holds \cite{bib6}. The equivalence principle states that in local freely-falling frames all the laws of physics are the laws of Special Relativity \cite{bib3,bib5,bib19}. In other words, in local freely-falling frames it is possible to eliminate the effects of the gravitational field in the sense of making them arbitrarily small in a sufficiently small spacetime neighborhood of a freely-falling frame. The axes of these local inertial frames are determined in General Relativity by gyroscopes. However, the gyroscopes are not fixed with respect to the ``distant stars'' (as in classical mechanics) but they are dragged by the mass-energy currents, such as the rotation of a nearby body. This effect -- also called the Lense-Thirring effect [1,2,6] -- is formally similar (in the weak field and slow motion approximation) to the change of orientation of a magnetic dipole due to the magnetic field generated by an electric current. Frame-dragging is very tiny (but observable, as we discuss below) around the rotating Earth but has huge effects around rotating black holes. Thus frame-dragging plays a key role in the dynamics of a body and of the accretion disk around rotating black holes and in the dynamics of jets in active galactic nuclei \cite{bib4}. The first direct observation of gravitational waves, made in September 2015 with the two LIGO laser interferometers, opened the era of gravitational wave astronomy \cite{bib22}. The observation captured the spectacular collision of two black holes forming, as predicted by General Relativity, a single rotating black hole with an enormous release of energy in the form of gravitational waves. There is no evidence that the initial black holes in the 2015 event had significant spin, but mutual frame-dragging of colliding, initially rotating black holes will be an important factor in the analysis of the signal of the gravitational waves emitted by such systems, when they are detected. The accurate measurement of frame-dragging will thus be an important ingredient in the study of the coalescence of rotating black holes. With LARES 2 we could test these fundamental physics theories further and more accurately. Frame-dragging by a rotating planet produces a precession of the orbital angular momentum of its satellites due to the angular momentum of the central body, usually described as the change of orientation of the nodal line (the intersection of the orbital plane with the equatorial plane of the planet). 
The Lense-Thirring drag of a satellite orbital plane and node by the angular momentum of the central body has a sense of rotation that is the same of the rotation of the central body (see Figure 1). A number of gravitational theories alternative to General Relativity (the so-called f(R) theories) have been proposed in the attempt to elucidate the profound mystery of dark energy and quintessence and so to explain the accelerated expansion of the universe. Some of these theories (such as Chern-Simons gravity and string theories equivalent to it) predict an outcome for frame-dragging different from General Relativity, even around the rotating Earth [19]. \begin{figure} \centering \includegraphics[width=0.640\textwidth]{butterfly_raster.eps} \caption{The idea of the LARES 2 experiment (originally called LAGEOS 3) \cite{bib28,bib29} In the figure we show frame-dragging and the Newtonian precession of the nodes of two satellites with supplementary inclinations such as LARES 2 and LAGEOS (Supplementary inclinations: $i_{1}+i_{2}=180^\circ$).} \label{fig:1} \end{figure} Unfortunately, if the mass of the central body is not spherically symmetric, the nodal line of a satellite is also affected by a shift induced by Newtonian effects. When the Earth's gravitational potential is expanded in spherical harmonics, the {\itshape even zonal harmonics} are those of even degree and zero order \cite{bib27}. They represent axially symmetric deviations from spherical symmetry of the gravitational potential which are also symmetric with respect to the equatorial plane of the body. The main Newtonian secular drifts of the nodal longitude of a satellite arise from the Earth's even zonal harmonics. In particular, the largest node shift is by far due to the even zonal of degree two, J$_{2}$ (the Earth's quadrupole moment). The even zonal harmonics are extremely well measured by a number of techniques. Nevertheless even their tiny uncertainty produces a systematic bias in the measurement of frame-dragging. Even a tiny relative uncertainty of 10$^{-7}$ or less in the quadrupole moment corresponds to an uncertainty in the nodal rate comparable in order of magnitude to the frame-dragging effect. To eliminate all the errors due to the even zonal harmonics, in 1984-1989 we proposed the LAGEOS 3 satellite ([24,25,26,27] see also: [29,30] and [31, 32]). LAGEOS 3 is now called LARES 2. The idea of the LARES 2 space experiment is shown in Fig. 1. Whereas the Earth even zonal harmonics produce a shift of the nodal line of a satellite that is equal and opposite for two satellites with supplementary inclinations and equal semimajor axis, frame-dragging produces a shift of the nodal line that is in the same sense of rotation of the Earth for both satellites, independently of their inclination. (Supplementary inclinations: $i_{1}+i_{2}=180^\circ$.) Thus, by adding the measured residual nodal shifts of two such satellites, we can completely eliminate the components due to the errors in the Earth's even zonal harmonics while doubling the frame-dragging effect. \section{A brief history of the tests of frame-dragging } In this section we report a brief summary of the state-of-the-art tests of frame-dragging and in particular of the LAGEOS, LARES and LARES 2 space missions. 
In 1984-1989 a new laser-ranged satellite called ``LAGEOS 3'', identical to the LAGEOS satellite (launched in 1976 by NASA), was proposed with orbital parameters identical to those of LAGEOS but a supplementary inclination, that is with inclination i = 70.16$^{o}$ and semimajor axis = 12270 km. The orbital eccentricity was required to be nearly zero (for LAGEOS it is equal to 0.0048). At that time the state-of-the-art Earth gravity field determination was GEMT1. The main scientific objective of the LAGEOS 3 satellite was to be a measurement of frame-dragging, discussed in the previous section. Beginning in the 1960s, the Gravity Probe B space mission was developed in the USA with the goal of a 0.1\% test of frame-dragging. Gravity Probe B was launched in 2004. But, due to unexpected systematic errors affecting the gyroscopes, the final result of its test of frame-dragging achieved an accuracy of only approximately 20\%, much less stringent than expected \cite{bib37}. In 2004-2010, using the data of LAGEOS, LAGEOS 2 (a laser-ranged satellite almost identical to LAGEOS, launched in 1992 by ASI and NASA) and the vastly improved new gravity field determinations from the Space Geodesy mission GRACE, the measurement of frame-dragging achieved an accuracy of approximately 10\% \cite{bib9,bib10,bib14,bib15,bib39,bib40}. GRACE is an outstanding space mission that has improved the knowledge of the Earth gravity field by several orders of magnitude \cite{bib41,bib42}. In 2012 the new laser-ranged satellite LARES was successfully launched by ASI with the new ESA launcher VEGA, mainly built by AVIO and ELV. LARES' main goal is a measurement of frame-dragging with a few percent accuracy. In 2016, using LARES, LAGEOS and LAGEOS 2 and the Earth gravity field determinations from GRACE, frame-dragging was measured with an accuracy of about 5\% (\cite{bib8} see also \cite{bib16}). GRACE, while still providing data, is now operating almost a decade beyond its planned lifetime. A new GRACE Follow-On space mission, scheduled for launch in 2017, will continue to improve the accuracy of measurement of the Earth gravity field and its variations. \section{Error Sources in the LARES 2 Experiment} In this section, we present an error analysis of the gravitational and non-gravitational perturbations that will affect the LARES 2 space mission and we conclude that LARES 2 will allow a measurement of frame-dragging with an accuracy of a few parts in a thousand. We consider the four main sources of systematic errors in the LARES 2 experiment: orbital injection errors and the effect of the even zonal harmonics, the effect of non-zonal Earth harmonics and tides, solar and albedo radiation pressure, and thermal drag \cite{bib28,bib29,bib30,bib31,bib32,bib33,bib34,bib35,bib36}. Each of these is estimated to produce an error of approximately 0.1\%, which when root-sum-squared (RSS) added results in an overall 0.2\% systematic error estimate. The discussion below also describes some smaller perturbing effects that contribute insignificantly to the RSS error. Monte Carlo simulations and covariance analyses, which fully support the present error budget, are presented in the next paper [7]. \subsection{Orbital Injection Errors} The main error source in the originally proposed LAGEOS 3 experiment was due to the satellite orbital injection errors and in particular to the injection error in the inclination. 
No launch vehicle can achieve an orbit for LAGEOS 3, or LARES 2, which has an inclination {\itshape exactly} supplementary to that of LAGEOS and has exactly the same semimajor axis. Thus the cancellation of even-zonal errors would not be perfect, and that would introduce a systematic error in the measurement of frame-dragging proportional to the uncertainty in the quadrupole moment times the deviation of the inclination from the value exactly supplementary to that of LAGEOS. Today, thanks to the GRACE space mission and to the forthcoming GRACE Follow-On, the knowledge of the Earth gravity field has dramatically improved. The knowledge of the Earth spherical harmonics expansion has improved by about three orders of magnitude with respect to the older, 1987, GEMT1 Earth gravity field. The two groups responsible for the GRACE Follow-On mission (CSR of the University of Texas at Austin and GFZ of Potsdam) have estimated that at the time of the observations of GRACE Follow-On (with launch scheduled in 2017), the relative mean uncertainty in the quadrupole moment will be approximately 10$^{-8}$ (that is, a mean uncertainty of about 0.5 $\cdot$ 10$^{-11}$ in its mean value of $\cong$ 0.484 $\cdot$ 10$^{-3}$); see the next paper on LARES 2 \cite{bib7}. A 3-sigma orbital inclination injection error (about 0.15 degrees) of the new launch vehicle VEGA C induces an error of only about 10$^{-3}$ of the frame-dragging effect due to the non-perfect supplementary inclination of LARES 2 with respect to LAGEOS. A smaller uncertainty would be induced by the injection error in the semimajor axis of LARES 2. So the orbital injection error will be: \begin{center} \textbf{Orbital injection error $\cong$ 0.1\%} \end{center} \subsection{Non-zonal Earth harmonics and Tides} The non-zonal Earth spherical harmonics do not produce secular effects on the node; however, they produce periodic nodal shifts that may introduce a bias in the measurement of frame-dragging \cite{bib27}. Thanks to GRACE and GRACE Follow-On, the spherical harmonics relevant to the frame-dragging measurement (the ones with the lowest degree) have an accuracy for the LARES 2 experiment improved by two or three orders of magnitude compared to the 1989 analysis with GEMT1. The Earth tidal models have also drastically improved compared to those available for the older 1989 analysis, and it is expected that they will further improve in the near future. Furthermore, in our recent paper \cite{bib8} we demonstrated a new method that allows us to dramatically reduce the error due to periodic effects with known periods on the orbits of LAGEOS and LAGEOS 2 (tides, non-even zonal harmonics and some non-gravitational perturbations). Therefore the bias in the measurement of frame-dragging with LARES 2 and LAGEOS due to non-even zonals and tides will be at a level of about 0.1\%: \begin{center} \textbf{Non-even zonals and tides error $\cong$ 0.1\%} \end{center} \subsection{Non-gravitational perturbations} In the following subsections we estimate the errors due to direct solar radiation pressure, satellite eclipses, albedo, thermal drag, thermal drag and satellite eclipses, and particle drag \cite{bib30,bib31,bib33,bib34,bib44,bib45,bib46,bib47,bib48,bib49}. 
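Before going through the individual non-gravitational effects, the rough numerical sketch below illustrates the injection-error estimate given above; the constants are round textbook values rather than mission numbers, and only the order of magnitude matters.
\begin{verbatim}
import numpy as np

GM, R_E, J2 = 3.986004e14, 6.378e6, 1.0826e-3  # Earth GM [m^3/s^2], radius [m], quadrupole
G, c, J_E   = 6.674e-11, 2.998e8, 5.86e33      # G, speed of light, Earth spin angular momentum
a, e        = 12270e3, 0.0                     # LAGEOS / LARES 2 semimajor axis [m], eccentricity

def node_rate_J2(i_deg):
    # classical even-zonal (J2) nodal precession rate [rad/s]
    n = np.sqrt(GM / a**3)
    return -1.5 * n * J2 * (R_E / a)**2 * np.cos(np.radians(i_deg)) / (1.0 - e**2)**2

def node_rate_LT():
    # Lense-Thirring nodal precession rate [rad/s], same sign for any inclination
    return 2.0 * G * J_E / (c**2 * a**3 * (1.0 - e**2)**1.5)

i_lageos = 109.84                # LAGEOS inclination [deg]
i_lares2 = 70.16 + 0.15          # supplementary inclination plus a 3-sigma injection offset

residual = node_rate_J2(i_lageos) + node_rate_J2(i_lares2)  # un-cancelled J2 term in the sum
bias     = abs(residual) * 1e-8  # only the relative J2 uncertainty (~1e-8) biases the result
signal   = 2.0 * node_rate_LT()  # adding the two nodes doubles the frame-dragging signal
print(bias / signal)             # a few parts in 10^4, i.e. of order 1e-3 or below
\end{verbatim}
The un-cancelled $J_{2}$ term scales with $\sin i \, \delta i$, and once it is multiplied by the relative quadrupole uncertainty of $\approx 10^{-8}$ it stays at or below the $\cong$ 0.1\% level adopted above.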
\subsection{Direct solar radiation pressure} The thrust of a satellite due to solar radiation pressure is proportional to the solar constant (radiation energy from the Sun per unit of time per unit of area), to the C$_{r}$ of the satellite, a parameter depending on the reflection properties of the satellite and on its geometry, and to the cross-sectional area-to-mass ratio, A/M. The shift of the node of a satellite with small orbital eccentricity (such as LAGEOS) is proportional to its eccentricity. The solar constant is extremely well measured, and so is the C$_{r}$ of LAGEOS, thanks to 40 years of observations of its orbit. Furthermore A/M for LAGEOS is approximately equal to 0.007 cm$^{2}$/g and is the smallest of any satellite except LARES, and the orbital eccentricity of LAGEOS is only 0.0048. Thus the systematic error in the measurement of frame-dragging on the node of LAGEOS due to the uncertainty in modeling solar radiation pressure is estimated to be at the level of less than 0.1\%. For LARES 2, in order to reduce the systematic error due to radiation pressure (both direct solar and albedo, see below), as well as the systematic errors due to the other non-gravitational perturbations, we will reduce its area-to-mass ratio by a factor of at least 1.5 with respect to LAGEOS, and also decrease its orbital eccentricity. The reduction of its A/M ratio can be achieved by keeping its mass as much as possible equal to that of LAGEOS and by reducing its size. According to some preliminary calculations, depending also on the VEGA C launch constraints, we propose the mass of LARES 2 to be 350 kg, or more, and its diameter to be about 40 cm. And we propose an eccentricity of LARES 2 of 0.0025 or less. This will gain a factor of about 3, with respect to LAGEOS, in reducing the nodal shift of LARES 2 due to solar radiation pressure. This will lead to a systematic error in the measurement of frame-dragging with LAGEOS and LARES 2, due to solar radiation pressure, that will be less than 0.1\%. \subsection{Satellite eclipses} A further error in the modeling of the solar radiation pressure is that due to the satellite eclipses by the Earth. However, the boundary of the shadow region generated by the Earth is well measured and included in our orbital estimators, GEODYN (NASA), EPOSOC (GFZ) and UTOPIA (CSR-UT); thus we do not expect a significant contribution from this source, as long as the step-size of the numerical orbital integrator is kept small enough to capture all crossings accurately. In conclusion, based on past extensive analyses, and by also considering the improved cross-sectional area-to-mass ratio of LARES, we will have an error in our experiment of less than 0.1\% due to direct solar radiation pressure coupled with satellite eclipses. \subsection{Albedo} Albedo produces the radiation pressure due to the sunlight reflected by the Earth's surface. By reducing the cross-sectional area-to-mass ratio of LARES 2, according to previous extensive calculations of the effect of the albedo on LAGEOS and LAGEOS 3, and with the past improvements in the albedo models, we would then have a systematic error in the measurement of frame-dragging with LARES 2 and LAGEOS due to albedo radiation pressure of about 0.1\%. \begin{center} \textbf{Albedo error $\cong$ 0.1\%} \end{center} \subsection{Thermal drag} The electromagnetic radiation from the Sun and the radiation from the Earth each instantaneously heat one hemisphere of LAGEOS. 
Because of the finite heat conductivity of the body, there is an anisotropic distribution of temperature on the satellite and thus there is an anisotropic flux of energy, $\sim$ T$^{4}$, and momentum from the surface of the satellite, giving rise to its acceleration. If the satellite is spinning fast enough, the anisotropy in the satellite temperature distribution is mainly latitudinal. This is called the Yarkovsky or Yarkovsky-Schach effect. Another thermal thrust effect was discovered on LAGEOS by David Rubincam, the Earth-Yarkovsky or Yarkovsky-Rubincam effect \cite{bib44,bib45}. Infrared radiation from the Earth is absorbed by the LAGEOS retro-reflectors. Therefore, due to the retro-reflector's thermal inertia and due to the past rotation of the satellite (today LAGEOS is almost rotationally at rest), there was a LAGEOS latitudinal temperature gradient. The corresponding thermal radiation caused an acceleration with an along-track component opposite to the satellite motion. An extensive analysis of the thermal forces acting on LARES is now under scrutiny by Richard Matzner and our LARES-team colleagues of the University of Texas at Austin \cite{bib47,bib48}. Although today the LAGEOS satellite is rotationally almost at rest with respect to inertial space (i.e. it is spinning extremely slowly), Rubincam has calculated that there will still be an along-track component of the Yarkovsky-Rubincam effect but no out-of-plane component. This out-of-plane component is the one potentially responsible for the nodal drag. Therefore that drag is today substantially null for LAGEOS. In regard to the solar Yarkovsky effect, if the orbit does not intersect the Earth's shadow, the force is directed away from the Sun, and will simply add to the effect of the direct solar radiation pressure and thus will be accounted for by our estimation of the C$_{r}$ of LAGEOS (see our forthcoming paper analyzing the thermal drag on LARES 2, paper IV \cite{bib49}). \subsection{Thermal Drag and Satellite Eclipses} When the orbit of LAGEOS intersects the Earth's shadow we still have the Yarkovsky effect. Due to the satellite's thermal inertia, the satellite takes time to cool down in the shadow and then takes time to warm back up after it exits the shadow. Thus, in general, we will get radial, out-of-plane, and along-track thermal drag forces, with the Yarkovsky effect depending on the Sun-satellite orbital geometry. The LAGEOS satellite is in the shadow of the Earth for less than 1/20 of its orbital time; the cross-sectional area-to-mass-ratio, A/M, for LARES 2 will be about 1.5 times smaller than that of LAGEOS, and LARES 2 should be injected into orbit with a small spinning rate. The use of an alloy with high thermal conductivity will further reduce the thrust due to thermal anisotropy. Given the satellite characteristics, it will be possible to use a copper alloy that has a thermal conductivity about 3 times higher than the one of the tungsten alloy used for LARES. Finally, LARES 2 should be made of a single sphere (as LARES) and not by assembling different parts (as LAGEOS). With these facts, based on previous extensive analyses of the Yarkovsky effect, we estimate \cite{bib49} the systematic error due to the coupling of the Yarkovsky effect with the eclipses by the Earth at the level of 0.1\%: \begin{center} \textbf{Thermal Drag Error $\cong$ 0.1\%} \end{center} \subsection{Particle drag} Particle drag is responsible for an along-track acceleration of a satellite. 
In the case of LAGEOS this leads to a very small decrease of its semimajor axis of approximately 1 millimeter per day. However, according to well-known theorems of celestial mechanics, the node of a satellite does not change due to particle drag and this result applies also to the case of a rotating atmosphere. Therefore, even by considering a rotating atmosphere at the LAGEOS altitude and by considering variations in its density, we obtained a negligible nodal shift of LAGEOS and LARES 2 due to particle drag \cite{bib33}. \subsection{Measurement Errors of the Orbital Parameters} Satellite Laser Ranging (SLR) provides the position of the LAGEOS satellites with a precision of the normal points of less than a millimeter. This precision is certainly enough to accurately measure the shift of the node of LAGEOS due to frame-dragging (almost 2 meters per year). However, the other LAGEOS and LARES 2 orbital elements must also be measured with high accuracy. Whereas the semimajor axis and the eccentricity of the LAGEOS satellites are measured with enough accuracy, the measurement error in the inclination of both LAGEOS and LARES 2 may introduce an error in the measurement of frame-dragging (the corresponding uncertainty in the determination of the Lense-Thirring effect using LAGEOS and LARES 2 will be induced by the error in the measurement of the inclination and not by the uncertainty in the modeling of the inclination, i.e. the uncertainty in the prediction of its behavior). The measurement uncertainty in the inclination of LAGEOS is mainly due to atmospheric refraction. However, the refraction errors in the measurement of the inclination drop significantly at elevations above 20 degrees and most of the laser-ranging observations are indeed at elevations above 20 degrees. Therefore, by keeping in the data analysis only those observations corresponding to elevations above 20 degrees, we can reduce the error in the measurement of the inclination to a small level that, when propagated into the nodes of LAGEOS and LARES 2, would correspond to an error of no more than 0.1\% of frame-dragging. \section{Final error budget} The RSS error budget of the LARES 2 experiment is then summarized in Table 1. \begin{table}[h!] \centering \begin{tabular}{|l|l|} \hline Source of Error & Estimated error \\ \hline Injection Error and Even Zonal Harmonics & $\cong$ 0.1\% of frame-dragging \\ \hline Non-zonal harmonics and tides & $\cong$ 0.1\% of frame-dragging \\ \hline Albedo & $\cong$ 0.1\% of frame-dragging \\ \hline Thermal Drag and Satellite Eclipses & $\cong$ 0.1\% of frame-dragging \\ \hline Measurement Error of the LAGEOS and LARES 2 Orbital Parameters & $\cong$ 0.1\% of frame-dragging \\ \hline & \\ \textbf{Total RSS Error} & \textbf{$\cong$ 0.2\% of frame-dragging} \\ \hline \end{tabular} \caption{Relative errors in the LARES 2 measurement of frame-dragging}\label{tab1} \end{table} We have not included here the errors, listed above, which are smaller than 0.1\% since they would only affect the next decimal digit of the $\cong$ 0.2\% RSS total error. This LARES 2 error budget was confirmed by the Monte Carlo simulations and covariance analyses shown in the next paper \cite{bib7}. \section{Mission requirements, orbital parameters and satellite characteristics} Here we finally report the required LARES 2 orbital parameters and satellite characteristics. The LARES 2 satellite must have the same semimajor axis as LAGEOS but an orbital inclination supplementary to that of LAGEOS. 
The eccentricity must be zero in order to minimize the non-gravitational perturbations. The maximum deviations from the optimal orbital parameters are dictated by the requirement of a 0.1\% error due to the non-perfect elimination of the uncertainties due to the even zonal harmonics. Thus, the proposed LARES 2 orbital parameters must be: \begin{itemize} \item Semimajor axis 12270 km $\pm$ 20 km \item Inclination of LARES 2 = 70.16$^{o}$ $\pm$ 0.15$^{o}$ (supplementary to that of LAGEOS) \item Orbital eccentricity: between 0 and 0.0025 \end{itemize} These values are fully compatible with the VEGA C 3-$\sigma$ injection accuracies. The mass and radius of the satellite must be chosen to minimize the cross-sectional-to-mass ratio, thus minimizing the non-gravitational perturbations. However, there is a limitation on the mass (between 350 kg and 400 kg) that can be injected to an altitude of 5900 km using the VEGA C launch vehicle. The higher value in the proposed range is the preferred one because it will further reduce the effect of the non-gravitational perturbations thus reducing the final error budget. The proposed radius is about 20 cm. \section{LARES 2 and space geodesy} Satellite Laser Ranging (SLR) now provides uniquely the most accurate definition of the origin of the International Terrestrial Reference Frame (ITRF) and has an equal share with VLBI in the definition of its scale. This is accomplished using the two LAGEOS SLR targets, designed for cm-level geodesy and not for today's goals of 1 mm and 0.1 mm/y. The precise orbits of the GNSS satellites are referred to reference frames based on the ITRF frame. The ITRF will realize the United Nations (UN), ``Global Geodetic Reference Frame (GGRF)", the reference standard for all applications. The UN adopted this proposal only recently (UN resolution A/RES/69/266, \url{http://www.unggrf.org}). The repercussions, however, will last for decades to come. LARES, the third SLR target with its improved specifications, enhanced the status quo ante. But even with LARES we are far from having the ideal ``SLR Constellation'', compared to the GNSS ones. The ideal constellation would comprise a number of LARES-class spacecraft. The development of several LARES-like satellites in combination with the existing LAGEOS, LAGEOS 2 and LARES satellites will deliver a large number of tracking opportunities with multiple SLR targets above the horizon of each ground station a few times per day (Figure 2). \begin{figure} \centering \includegraphics[width=0.640\textwidth]{fig2.eps} \caption{Conceptual s/c distribution for a hypothetical future SLR Constellation with three regional sub-networks A, B, C. } \label{fig:2} \end{figure} The construction and launch of LARES 2 will provide the fourth satellite of the SLR Constellation. With LARES 2 added to the constellation we can achieve much better results compared to today's situation. The analysis of almost three years of data from LAGEOS 1 and LAGEOS 2, since the LARES launch (2012-2014), compared to those from a similar time span before LARES was launched, indicates that the mean site position improved by 17\% while the Earth Orientation Parameters (Polar motions and Length-of-Day) improved by 21\% \cite{bib51,bib52}. The addition of LARES 2 and more LARES-class targets will improve the current knowledge by more than the square root law, since it is mainly the geometry of the problem that improves, something that cannot be described by simple statistical arguments. 
In addition to the optimal determination of the ITRF and subsequently its optimal distribution to users via the GNSS orbits, the estimation of other terrestrial geophysical parameters, such as the secular evolution of low degree zonal harmonics, the terrestrial tides, elastic properties of Earth, etc., will also benefit from the enhancement of the SLR space segment. \section{Summary and conclusions} The Gravity Probe B experiment has published in 2011 a measurement of frame-dragging with accuracy of about 20\%. With the LARES space experiment we have so far achieved a 5\% measurement of frame-dragging [8]; this test could eventually be improved to reach a final accuracy of about 2\%. With LARES 2 we could obtain a 0.2\% test of frame-dragging. Thus the LARES 2 satellite would provide a factor 10 improvement over the best test of frame-dragging that could be achieved with the satellites orbiting today. Monte Carlo simulations and covariance analyses presented in the next paper have fully confirmed the present error analysis (Paper II [7]).
{ "redpajama_set_name": "RedPajamaArXiv" }
2,614
Dynamo was a Ukrainian football club from Luhansk, now defunct. It was founded in 1930. Its best result in the Ukrainian championship was 3rd place in the Second League in the 1994/95 season. In the summer of 1995 the team merged with Azovets (Mariupol). History The team was created in 1930. By 1935 it was considered one of the best football teams in the Donbas. After the end of the 1935 football season, the team's leading players were lured away to the team of the Luhansk locomotive-building plant. Among those who left Dynamo were, in particular, Grigory Nosko, Nikolai Morozov, Mikhail Kalmykov and Nikolai Lokotosh. With the departure of its leading players, Dynamo ceased to exist. The Dynamo football team was re-established in Luhansk at the beginning of 1937. That year the Dynamo players took part in the spring and autumn football championships of the city of Luhansk in Group B. The team existed until the start of the Great Patriotic War. At the end of 1944 the Dynamo team was revived. Its squad consisted of NKVD officers, most of whom had been specially transferred from Kharkiv to Luhansk for service. In 1945 the Dynamo players took part in the Luhansk city championship. In the USSR Championship, Dynamo began playing in the first league in 1947, finishing 11th out of 13. In 1948 the team finished 4th (out of 8), and in 1949 it finished 18th (last). After the unsuccessful 1949 season, Dynamo ceased to exist, and its players joined other football teams (mainly Trudovye Rezervy of Luhansk). At the very beginning of the 1960s the Dynamo team was revived again and for several years took part only in the championships of Luhansk Oblast. The Dynamo football team was formed once more in Luhansk in November 1990 (FC Dynamo). At first the club played in the Luhansk Oblast championship among amateur clubs (it was oblast champion in 1991 and 1992), and later at the national level, first in the Transitional League and then in the Second League. The club played two seasons in the Second League of the Ukrainian championship. Its best result in the Ukrainian championship was 3rd place in the Second League in the 1994/95 season. In the summer of 1995 the team merged with Azovets of Mariupol, after which it ceased to exist. Some notable players The list includes players of the club who are notable under WP:FOOTY: Aleksandr Alpatov, Vladimir Gavrilenko, Zinovy Gershin, Vitaly Golubev, Nikolai Krasyuk, Nikolai Kuznetsov, Yevgeny Pestov, Aleksandr Pushkarsky, Nukuri Sikharulidze, Georgy Toporkov, Vladimir Bedny, Sergei Bogatyryov, Aleksei Hetman, Yuri Hetman, German Gorbunov, Vitaly Dunai, Vitaly Kapinus, Dmitry Kara-Mustafa, Vitaly Kovtun, Vladimir Kuzovlyov, Yuri Len, Sergei Maksimich, Andrei Mukhin, Konstantin Pinchuk, Nikolai Podlesny, Andrei Porokhnenko, Aleksandr Romanenko, Sergei Svidchenko, Aleksandr Sevidov, Andrei Skomorokhov, Dmitry Skotarenko, Fyodor Soroka, Valery Turkin, Nikolai Fedyushchenko, Gennady Chernikov, Aleksei Chistyakov, Sergei Yarmolich, Yuri Yaroshenko, Mikhail Potskhveria Statistics Notes Categories: Dynamo sports society; Football clubs of the USSR; Defunct football clubs of Ukraine; Football clubs in Luhansk
{ "redpajama_set_name": "RedPajamaWikipedia" }
3,172
Philippine History in the 19th Century Told Through Rare Photos "The only known photo about José Rizal's execution was actually taken by a Spaniard who happened to have a bookstore that time selling a copy of Noli Me Tangere novel too. This same photo by Manuel Aria showing Rizal, behind him are Filipino soldiers holding rifles forced to shoot him if not the Spanish soldiers behind them will shoot them too, later became a serious mistake that stirred a rallying cry and ended the Spanish rule in the Philippines" says John Silva, a writer, arts & culture consultant, tour guide and blogger in "OUR VISUAL HISTORY Discovering the 19th and Early 20th Century Philippines through Photographs" talk in Ortigas Foundation Library. Silva has been an avid photo collector for the past 8 years about the works of early photographers in the Philippines he acquired from travels and his work. He narrates about the first recorded photographs in the country that came from Sinibaldo de Mas, a Spaniard who always had numerous complaints about the insolence of the natives like not removing their hats or not giving their way whenever he pass by to them in the sidewalks. The camera he used is the daguerreotype and later wrote the Informe sobre el estado de las Filipinas en 1842 (A Report on the Status of the Philippines in 1842). Jose Rizal's Execution Félix Laureano's photo the Lavando la ropa, may be the earliest account of a homosexual / gay ever existed and recorded in the country. In 19th century since there are no malls yet, after the people attended the mass in the early morning, the town becomes half-empty and during mid-morning everyone goes to the river to socialize while bathing, washing clothes, to make chit-chats or see the apple-of-their-eyes. In his photo, he mentioned "Three dalagas and a tauo, sitting on the green grass beside the river and washing clothes, their minute feet being lapped by the crystal clear current. The tauo, who can be identified by his manners, is binabayi, agui, and has the balutan of dirty clothes near him." Here, binabayi, means effeminate". Only a local could identify a homosexual in context using local expressions. Silva also shows a photo of a woman in a carriage beside her groom to church and a band after them with a purpose to cheer her up since it is an arranged wedding.Francisco van Camp's Indígena de clase rica, a photo of a Mestiza Sangley-Filipina (of mixed race) in a long curly hair with coconut oil or just came from a shower holding a fan half-opened which meant the girl is single and a lady of pleasure. Photos of the St. Louis World Fair in 1904 which brought 2,000 of our tribe men was marked as a fetish display of Filipinos that one of the fair's highlights is an Igorot roasting a pig daily throughout the exhibition. Rizal himself condemned the unfair or rather inhuman treatment of them abroad. These photos from his 5,000 pieces of collection are so rare that Silva says should be seen by every Filipinos since we seemed to be only Filipinos by name, we can recite and sing our national anthem but we don't know our history this much. He shares a few sentiments over historical sites that are being destroyed like the Jai Alai, one of the most beautiful art deco in the Philippines which was destroyed because of a mayor of Manila was maybe having a bad hair day at that time. No article can be fit enough to express the meanings of these photos instead the images itself should be shared to everyone. 
He is also looking for a sponsor of these priceless photos for future publication.
Felix Laureano's Recuerdo de Filipinas book about a woman in an arranged marriage
Indigena de Clase Rica
St. Louis World Fair 1904
If you want to check out Silva's blog entitled JOHN'S THOUGHTS AND DEEDS you may click here
{ "redpajama_set_name": "RedPajamaCommonCrawl" }
6,369
{"url":"https:\/\/tobydevlin.com\/blog","text":"### The Sexy Sexy World Of JSON Validation\n\n##### 2019-09-18 15:46:43 +0000\n\nJSON is everywhere, sometimes it\u2019s hidden from you or your user but its always there. Thousands of applications rely on correctly formatted data to work correctly with little format checking. Most of the time the data generators should have been...\n\n### Easy Docker Containers on Raspery Pi With Portainer.io\n\n##### 2019-04-29 20:26:59 +0000\n\nPortainer is a great docker management GUI which is open source, hosted on GitHub. We\u2019re going to shove it onto a raspberry pi and run a few images. This guide assumes a few things: * You have a Pi already running Rasbarian Lite, connected to your network with ssh access * You know what Docker is * You want to make managing a home server really easy. So, here we...\n\n### Kubernetes Setup on a Raspberry Pi\n\n##### 2019-04-27 17:06:57 +0000\n\nI want to be able to develop apps in my spare time. Modern, scalable, disposable apps. The current hip thing (other than serverless functions, but I\u2019m more of a back end dev so here we are) is to build a...\n\n### What Even Is Mongo?\n\n##### 2018-10-06 16:31:55 +0000\n\nThis is a POC for a MongoDB interface Nothing major, but document model databases allow the chains to be lifted on certain applications that can make them amazingly flexable. They\u2019re not without their caviats but knowing how to use one would be a tick of a box when being asked to make an app. This example MongoDB for Python using Mongo Client. It assumes you have all the binarys for...\n\n### Using Sympy for Analytical Maths\n\n##### 2018-08-25 18:31:55 +0000\n\nThis is an example of how i would use sympy to evaluate a set of analytical questions, for example: Find $$\\forall a \\in [0,12,24,30,99]$$ and $$b=100$$: Where $$\\alpha = 2.341$$, $$\\beta = e^x$$ First we need to import the library...\n\n### GT Coursework\n\n##### 2018-03-20 09:28:00 +0000\n\nThis is a copy of my Game Theory Coursework I completed for the course at cardiff. Personally I found the subject facinating but the course very introductory.\n\n### Installing LaTeX on Windows\n\n##### 2018-02-24 22:04:51 +0000\n\nMiKTex + VS Code + Git = a semi working, compiling, version controlled version of LaTex. Try not to break anything on your journey tho; this worked for me so it will probably work for you\u2026 Heres how to do...\n\n### Some Useful Python Methods For Data Analysis\n\n##### 2018-02-22 12:00:47 +0000\n\nTo get straight to the point, part of my degree includes some data analytics using python; part of trawling the web for learning materials has given me a number of useful methods to use. Unfortunetly these snippits of code never seem to be in the same place, so I\u2019m collating them here: Before I start heres a list of the python libraries mentioned (all available using pip or included in...\n\n### Probability and Inference\n\n##### 2018-01-17 15:15:23 +0000\n\nFor the AI module in the Computer Science department you have to have a basic understanding of Probability and Inference. Below is an introduction to the probability details covered. First off there are a few things we have to cover:...\n\n### Why Most Files Can't Be Compressed\n\n##### 2018-01-14 10:30:20 +0000\n\nThis is an assumption proof given in the Cardiff Uni Maths Coding Theory and Data Compression Course. It makes sense if you understand what we mean by \u201cmost files.\u201d; i.e. literally any random string of data. 
So, why, in most cases, can\u2019t any old file be compressed? Lets start off at the beginning obvious: let $$A,B$$ be files, and they get compressed to $$C,D$$ respectively. Now $$C=D$$ if and only...\n\n### Setting up external drives for a Plex server on a Raspberry Pi\n\n##### 2017-12-27 23:04:55 +0000\n\nHeadless Plex server on a Raspberry Pi for all your own media to stream anywhere sound interesting? This will be brief because it\u2019s more or a reference for myself. All the files I used are sitting on this github page!...\n\n### Compression Techniques\n\n##### 2017-12-19 12:20:16 +0000\n\nThe Data Compression course covers a variety of compression techniques that must be learned. Some are simple, and some are complicated, but all are not as hard as learning how computers actually work. Lossless Techniques Shannon Coding Possibly the simplest, this is purely for research and isnt really used anywhere. We will start with the following properties: Now we start the steps: Using the probabilities in $$P$$ create the cumulative...\n\n### Backing up a Ghost blog (or anything) on AWS EC2 to S3\n\n##### 2017-12-14 14:06:20 +0000\n\nSo you have a ghost blog(or some other amazon web thing), and you\u2019re on AWS ubuntu (or another linuex type \u252c\u00e1instance) but you need to back it up. It would seem simple that aws should offer you a solution, and...\n\n### Taylor Expansions in PDEs\n\n##### 2017-12-14 12:47:43 +0000\n\nEver wondered what the uses for taylor expansions are in the field of differential equations? no? well you should, its rather facinating\u2026 First, what is a taylor expansion? well, basically, it says if youre trying to evaluate a function at a point that\u2019s \u201cclose enough\u201d to a point you already know you\u2019ll be able to represent this slight difference as an infinate series: Or if youre more comfortable with summation...\n\n### Genetic Algorithms\n\n##### 2017-12-13 11:53:32 +0000\n\nThis Post is a general discussion on how genetic algorithms work and how to model them. Typically GAs are built to solve a single problem, however the concept of genetic improvement can be extended into building functionality too.. (this isn\u2019t...\n\n### Building tobydevlin.com\n\n##### 2017-11-25 20:48:09 +0000\n\nThe tobydevlin.com website is the main product of this large experiment of web design, service building and self tutoring. Understanding web development is pretty crucial to getting a cushy dev job once you graduate so I\u2019m teaching myself \u00ad\u0192\u00f4\u00ea\u00ad\u0192\u00f4\u00ea. Hopefully, if I\u2019m good enough, not only will the main page be up, but this blog will also be around too \u00ad\u0192\u00c6\u00ac. Please note: Products mentioned here are because I like...\n\n### Maths Revision Notes\n\n##### 2017-11-24 16:08:15 +0000\n\nWant to learn some maths? Heres some content on sections of courses I\u2019m taking at Cardiff University. My degree in mathematics means I should be able to masquerade as a clever person for a while. Second Semester Game Theory Coursework...\n\n### Coding Theory - Linear Codes\n\n##### 2017-11-24 16:05:34 +0000\n\nThis will concern mostly the section of linear codes in the course of Coding Theory & Data Compression at Cardiff University. It is expected the reader knows about some sections of coding theory, there isn\u2019t background reading on this blog\u2026 yet\u00ad\u0192\u00f2\u00fa. ___ Things to know to start: The alphabet we will be using is the set $$F_q$$ where $$q$$ is prime. 
We will regard the vector space $$V(n,q)$$ as the...\n\n### Hi There!\n\n##### 2017-11-22 13:21:30 +0000\n\n\u201cThis is the parliment building in Hungary!\u201d So, after (many) hours of not paying attention to my lecturer, I\u2019ve finally managed to get this ghost thing working. Maybe I\u2019ll write a nice little piece on it in the future. For...","date":"2019-12-16 05:50:46","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 0, \"mathjax_display_tex\": 1, \"mathjax_asciimath\": 0, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.3032762408256531, \"perplexity\": 2597.057647219423}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 20, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2019-51\/segments\/1575541317967.94\/warc\/CC-MAIN-20191216041840-20191216065840-00336.warc.gz\"}"}
What Laws Do I Follow? Briefly, all UK laws have to be followed. You are breaking the laws of trespass if, in England and Wales, you search on land without permission. On parkland you run the risk of prosecution for wilful damage if you dig huge holes. Follow the Code of Conduct and use a small trowel and replace divots neatly. The Treasure Act 1996 came into force on 24th September 1997 and you are obliged to follow it. Write to the Department of National Heritage, 2-4 Cockspur Street, London SW1Y 5DH and ask them to send you a copy of The Treasure Act 1996 Code of Practice (England & Wales); this book is free of charge and will tell you all you need to know. Alternatively you can download a copy of the act by visiting Treasure Act 1996 - The full Act. The other law of importance to metal detector users is that concerning scheduled ancient monuments. It is an offence to use a detector on a protected scheduled ancient monument unless permission has been obtained from the Secretary of State for the Environment. When seeking permission from the landowner ensure that you ask if there are any protected sites on his land.
{ "redpajama_set_name": "RedPajamaCommonCrawl" }
590
State Representative John S. Martinez Provided by: Connecticut Fatherhood Initiative Representative John S. Martinez was the Deputy Majority Leader serving New Haven's 95th Assembly District. He served as president of the National Hispanic Caucus of State Legislators for two years and also briefly served as the chief executive officer of the Community Action Agency of New Haven. He was especially instrumental in sponsoring Fatherhood Initiative of Connecticut legislation (P.A. 99-193), which was passed by the Legislature in 1999. John knew the importance of a father's interaction with his child and fought to help develop public policies that would benefit the state's fathers and their families. On Friday, October 11, 2002, the State of Connecticut and the Fatherhood Initiative lost a "native son" when State Representative John S. Martinez's life was taken in a tragic automobile accident. At the time of his death, Representative Martinez was an active member and participant of the Fatherhood Advisory Council. In 2003, honoring his memory and dedication to fathers and their families, legislation (P.A. 03-258) was passed renaming the Fatherhood Initiative after Representative Martinez. John was a very dedicated public servant who always took the time to help others. His strength of character, generosity and compassion for the oppressed are truly missed.
BY MICHAEL F. JACOBSON, PH.D.

Updated Nutrition Facts labels with added sugars.

Except for the addition in 2006 of a new line for trans fat, Nutrition Facts labels had not been updated since their introduction in 1993. Science has advanced since then. Science-based updates to Nutrition Facts labels proposed by the Obama Administration put more visual emphasis on calories and added an important new line, and a corresponding Daily Value, for added sugars. That's an important way to help consumers cut their sugar intake. As with menu labeling, Donald Trump delayed the implementation date, even though the new label is increasingly used in the marketplace. We'll be watching carefully to make sure that the sugar and soda lobbies don't use the delay to kill the added-sugars line.

Safe food. Outbreaks linked to peanut butter, spinach, and other healthy foods spurred the passage of major legislation in 2010 that reformed the FDA's food safety functions, putting an emphasis on prevention instead of cleaning up after people suffered illnesses. But that progress could be undermined if Trump's budgets don't include sufficient spending for the new inspections required by that law.

But the greatest threat to food safety under Donald Trump might be legislation sponsored by the so-called Freedom Caucus in the House of Representatives: the Regulatory Accountability Act is designed to grind the already-glacial pace of federal rulemaking to a halt. The bill, which we've taken to calling the "Filthy Food Act," would affect every aspect of America's food supply, undermining federal work to prevent bioterrorist attacks on our food sources, inspect meat and eggs for Salmonella, and reduce antibiotic-resistant bacteria in meat and poultry. It has already passed, quietly, in the House of Representatives, so it's worth letting Senators know of your opposition to S.951.

Americans deserve to know what they're eating, and we deserve food that is nutritious and safe. There's no reason there shouldn't be a broad, bipartisan consensus for preserving these major advances for nutrition and food safety. Republicans get heart disease, cancer, and food poisoning to the same extent that Democrats and independents do. The reason Obamacare opponents didn't get their "repeal and replace" is that angry constituents made their voices heard, and senators listened. The outcome of these four fights depends on angry eaters speaking out in the same way.

Michael F. Jacobson, Ph.D., is co-founder and executive director of the Center for Science in the Public Interest (CSPI), a nonprofit health advocacy organization supported largely by the 600,000 subscribers to its Nutrition Action Healthletter, member donations, and foundation grants. CSPI is a key player in battles against obesity, cardiovascular disease, and other health problems, using education, legislation, litigation, and other tactics. He has led CSPI's campaigns on sugar drinks, salt, and trans fat. Jacobson has written numerous books, reports, and scientific papers, including "Six Arguments for a Greener Diet," "Salt: the Forgotten Killer," and "Liquid Candy: How Soft Drinks are Harming Americans' Health." He has received awards such as the 2010 Hero Award from the Centers for Disease Control and Prevention Foundation and the American Public Health Association's 2011 David P. Rall Award for Advocacy in Public Health.
Q: StringTemplate 4 doesn't seem to work

I've only been using StringTemplate 4 for a week, so it's probably something I'm doing, but I don't seem to be able to make the <if> conditional work. I'm using 4.0.2 (since that's the latest in the Maven repository).

I have a class called Variable. This is a snippet:

class Variable {
    ...
    public boolean isArray() { return _bIsArray; }
}

I have a template that has this line (the delimiter is $, $):

$if(x.isArray)$
$ArrayAdd(x, className)$
$endif$

If I remove the if and simply let it execute $ArrayAdd(...)$ for everything, ArrayAdd is clearly executed. I then put the $if$ back in. I also put a print statement in isArray(); isArray() is getting executed and returns false most of the time, but does return true once in a while (for exactly the cases I expected). However, ArrayAdd never gets executed from within the $if$. I looked at the trace (which I'm not good at reading) and got:

declareSetGet:0227: load_local 0           stack=[ ], calls=ObjectClass _sub1 declareSetGet, sp=-1, nw=0
declareSetGet:0230: load_prop #25:"isArray" stack=[ altLocation<CUSTOM>::Array<1>::Custom<altLocationObj> ], calls=ObjectClass _sub1 declareSetGet, sp=0, nw=0
declareSetGet:0233: brf 254                stack=[ null ], calls=ObjectClass _sub1 declareSetGet, sp=0, nw=0
ObjectClass:0121: newline                  stack=[ ], calls=ObjectClass, sp=-1, nw=959
ObjectClass:0122: write_str #15:"}"        stack=[ ], calls=ObjectClass, sp=-1, nw=0

This is one of the cases where I would expect the ArrayAdd template to be executed. Obviously, it doesn't happen. Can anyone tell me what I'm missing?

A: I'm wondering if you should do this:

$if(x.array)$
$ArrayAdd(x, className)$
$endif$

Specifically, use x.array instead of x.isArray, because the name of the property is "array" and the "is" is just a prefix per the JavaBeans convention for boolean property accessors.
The Makers – Manchester's creative talent that's bringing Ancoats and our developments to life.

Our city is full of endless creative talent, and we are committed to giving local artists the opportunity to bring creativity to Manchester Life. So far, our developments have incorporated the work of Manchester's finest photographers, illustrators, sculptors, outdoor artists, cabinet makers, and many more. Here are some highlights from Manchester Life's Makers. We're always exploring ways to bring more creative talent into our developments, so please do get in touch with your ideas at info@mcrlife.co.uk.

Andrew is a freelance illustrator based in London. He graduated from the Manchester School of Art in 2015 with a first-class honours degree in illustration, and his work features on the roof of One Cutting Room Square's carpark in Ancoats.

Dave is a freelance illustrator based in Ancoats, but travels all over for his large-scale wall murals. Dave's trademark is his doodle maps, which showcase his unique style. We're really proud to have commissioned Dave to create a Manchester Life map on the wall of our marketing suite, which has also found its way onto mugs for our residents and the walls of our offices.

Lee captures some of the most stunning photography of Ancoats, New Islington and the city, with some of his work on display in Cotton Field Wharf's communal areas.

The creator of Sketchbook Design has designed and painted graphics on the outside space of Smith's Yard. The largest piece is Round Are Way It's Alright: 150m2 of graphics inspired by the Oasis track "Round Are Way". The song talks about community, childhood and the familiarity of an area someone calls home.

Rhiannon is the author of The New Millenopedia, an exploration of the gaps in our language and an effort to fill some of them. Illustrations of Rhiannon's words that connect with us are featured throughout Cotton Field Wharf, Sawmill Court and Smith's Yard.

We're proudly displaying works from Heart & Sold, the Manchester-based organisation which promotes and sells the artworks of individuals with Down's syndrome. We currently have works on display from Mohamed Dalloul, David Kenward and Andrew Weatherly.
## [SOLVED] Cardinality

1. Which of the following sets has the greatest cardinality?

(A) $\mathbb{R}$

A: $\operatorname{card}(\mathbb{R})=c$

(B) The set of all functions from $\mathbb{Z}\to\mathbb{Z}$

B: I think this is countably infinite: $(\mathbb{Z}\to\mathbb{Z})\sim\mathbb{N}$

(C) The set of all functions from $\mathbb{R}\to\{0,1\}$

C: This is the answer, but why?

(D) The set of all finite subsets of $\mathbb{R}$

D: Not sure how to determine cardinality here.

(E) The set of all polynomials with coefficients in $\mathbb{R}$

E: I think this is countably infinite too.

2. (Reply to dwsmith.) I'll write something equivalent.

(A): Right.

(B): This is surely not countable. Not even $\{0,1\}^\omega$ is countable. In fact, $\mathbb{Z}^\mathbb{Z}\simeq\mathbb{R}$.

(C): In general $\{0,1\}^A\simeq\mathcal{P}(A)$.

(D): This is equipotent to the reals. To see this, let $\mathcal{R}_n=\left\{E\subseteq\mathbb{R}:\text{card }E=n\right\}$; then clearly $\text{card }\mathbb{R}\leqslant \text{card }\mathcal{R}_n\leqslant\text{card }\mathbb{R}^n=\text{card }\mathbb{R}$, so that $\mathcal{R}_n\simeq\mathbb{R}$. So your set, call it $W$, can be written as $\bigcup_{n\in\mathbb{N}}\mathcal{R}_n$, and there is a theorem which says that if the indexing set is countable and all of the sets being united have the same cardinality, which is greater than $\aleph_0$, then the union has the same cardinality as each set in the union.

(E): Surely not: isn't $f(x)=a$, $a\in\mathbb{R}$, a subset of this, and isn't that clearly equipotent to $\mathbb{R}$? In fact, since a polynomial is completely determined by its coefficients, this set is equipotent to that described in (D).

3. The set $\mathbb{Q}$ is the set of rationals and the set $\mathcal{P}(\mathbb{Q})$ is its power set. Define $f:\mathbb{R}\to \mathcal{P}(\mathbb{Q})$ as $a\mapsto \{x\in \mathbb{Q}:x<a\}$. Can you show that $f$ is an injection? If so, then $\operatorname{card}(\mathcal{P}(\mathbb{N}))=\operatorname{card}(\mathcal{P}(\mathbb{Q}))\geqslant\operatorname{card}(\mathbb{R})$.

4. (Reply to dwsmith.) All but (C) have the same cardinality $2^{\aleph_0}=c$.

(A): $\operatorname{card}(\mathbb{R})=c$.

(B): Its cardinality is, by definition, $|\mathbb{Z}|^{|\mathbb{Z}|}=\aleph_0^{\aleph_0}=c$.

(C): Its cardinality is $|\{0,1\}|^{|\mathbb{R}|}=2^c>c$.

(D): For every $n\in\mathbb{N}$ there exist $c=2^{\aleph_0}$ subsets of $\mathbb{R}$ with $n$ elements, so the set we're dealing with has cardinality $c\cdot \aleph_0=c$.

(E): This last one is very similar to the previous set, so I'll leave it to you. -- Tonio

5. (Replying to the claim "In general $\{0,1\}^A\simeq\mathcal{P}(A)$".) Is the $A$ here referring to (A), which is the reals?

6. No. For any set $A$ we have that $\{0,1\}^A=\left\{f:A\to\{0,1\}\right\}$ is equipotent to $\mathcal{P}(A)$.
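A compact summary of the comparisons worked out in this thread, using the standard cardinal arithmetic cited above (here $\mathcal{R}_n$ is the set of $n$-element subsets of $\mathbb{R}$, as in post 2):

$$
\begin{aligned}
\text{(A)}\quad & \operatorname{card}(\mathbb{R}) = c = 2^{\aleph_0} \\
\text{(B)}\quad & \operatorname{card}(\mathbb{Z}^{\mathbb{Z}}) = \aleph_0^{\aleph_0} = 2^{\aleph_0} = c \\
\text{(C)}\quad & \operatorname{card}(\{0,1\}^{\mathbb{R}}) = 2^{c} > c \\
\text{(D)}\quad & \operatorname{card}\Big(\bigcup_{n\in\mathbb{N}} \mathcal{R}_n\Big) = \aleph_0 \cdot c = c \\
\text{(E)}\quad & \operatorname{card}(\text{polynomials over } \mathbb{R}) = \aleph_0 \cdot c = c
\end{aligned}
$$

So (C) is the set with the greatest cardinality.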
// Benchmark test: repeatedly fetches a single blob from MongoDB.
// (Assumes the project headers defining mongodb_test_template and
// bench::test_config are included by the enclosing translation unit.)
namespace bench {
namespace tests {
namespace mongodb {
namespace blob {

class get : public mongodb_test_template<get> {
public:
    explicit get(bench::test_config config)
        : mongodb_test_template(config) {
    }

    // Store one blob under alias(0) so the timed iterations have data to read.
    void setup() override {
        mongodb_test_template::setup();
        _mongodb.blob_put(alias(0), content(0));
    }

    // The timed body: a single blob read per iteration.
    void run_iteration(std::uint32_t iteration) {
        _mongodb.blob_get(alias(0));
    }

    // Remove the test entry so runs do not interfere with each other.
    void cleanup() override {
        _mongodb.remove(alias(0));
    }

    static std::string name() { return "mongodb_blob_get"; }

    static std::string description() {
        return "Each thread repeats mongodb.query on a single entry";
    }

    static bool size_dependent() { return true; }
};

} // namespace blob
} // namespace mongodb
} // namespace tests
} // namespace bench
Rhadine myrmecodes is a species of beetle in the family of ground beetles (Carabidae). The scientific name of the species was first validly published in 1892 by G. Horn.
\section{Introduction} One of the major bottlenecks for deploying modern machine learning models in real-world applications is the need for substantial amounts of manually-labeled training data. Unfortunately, obtaining such manual annotations is typically time-consuming and labor-intensive, prone to human errors and biases, and difficult to keep updated in response to changing operating conditions. To reduce the efforts of annotation, recent weak supervision (WS) frameworks have been proposed which focus on enabling users to leverage a diversity of weaker, often programmatic supervision sources~\cite{ratner2017snorkel, Ratner16, Ratner19} to label and manage training data in an efficient way. Recently, WS has been widely applied to various machine learning tasks in a diversity of domains: scene graph prediction~\cite{chen2019scene}, video analysis~\cite{fu2019rekall, Varma2019multi}, image classification~\cite{das2020goggles}, image segmentation~\cite{hooper2020cut}, autonomous driving~\cite{Weng2019UtilizingWS}, relation extraction~\cite{Jia2021HeterogeneousGN,zhou2020nero,liu2017heterogeneous}, named entity recognition~\cite{safranchik2020weakly,lison2020named,li2021bertifying, lan2020connet, DBLP:conf/naacl/GoelORVR21}, text classification~\cite{ren2020denoising, yu-etal-2021-fine,shu2020learning,shu2020leveraging}, dialogue system~\cite{DBLP:conf/aaai/MallinarSUGGHLZ19}, biomedical~\cite{Kuleshov2019AMD,fries2017swellshark,Mallory2020ExtractingCR}, healthcare~\cite{Fries2021OntologydrivenWS,DBLP:journals/patterns/DunnmonRSKMSGLL20,Fries2019WeaklySC,DBLP:conf/miccai/SaabDGRSRR19,Wang2019ACT,Saab2020WeakSA}, software engineering~\cite{rao2021search}, sensors data~\cite{furst2020transport,khattar2019multi}, E-commerce~\cite{mathewdefraudnet,zhang2021queaco}, and multi-agent systems~\cite{DBLP:conf/iclr/ZhanZYSL19}. \begin{figure*}[t] \centering \includegraphics[width=0.8\columnwidth]{figures/overview.pdf} \vspace{-2mm} \caption{An overview of WS pipeline.} \label{fig:overview} \end{figure*} In a WS approach, users leverage \emph{weak supervision sources}, \eg, heuristics, knowledge bases, and pre-trained models, instead of manually-labeled training data. In this paper, we use the \textit{data programming} formalism~\cite{Ratner16} which abstracts these weak supervision sources as \emph{labeling functions}, which are user-defined programs that each provides labels for some subset of the data, collectively generating a large but potentially overlapping set of votes on training labels. The labeling functions may have varying error rates and may generate conflicting labels on certain data points. To address these issues, researchers have developed modeling techniques which aggregate the noisy votes of labeling functions to produce training labels (often referred to as a \textit{label model})~\cite{Ratner16, Ratner19, fu2020fast, Varma2019multi}, which often build on prior work in modeling noisy crowd-worker labels, e.g.~\cite{DawidSkene}. Then, these training labels (often confidence-weighted or probabilistic) are in turn used to train an \emph{end model} which can generalize beyond the labels for downstream tasks. These two-stage methods mainly focus on the efficiency and effectiveness of the label model, while maintaining the maximal flexibility of the end model. Recent approaches have also focused on integrating semi- or self-supervised approaches~\cite{yu-etal-2021-fine}; we view these as modified end models in our benchmarking framework. 
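To make the labeling-function abstraction concrete, below is a minimal sketch of a keyword-based labeling function for a binary spam-classification task. It is illustrative only: the function name, label constants, and keyword are our own and do not come from any dataset in the benchmark; real labeling functions may instead wrap regular expressions, knowledge-base lookups, or pre-trained models.
\begin{verbatim}
SPAM, HAM, ABSTAIN = 1, 0, -1

def lf_contains_check_out(text: str) -> int:
    # Vote SPAM if a typical solicitation phrase appears;
    # otherwise abstain rather than guessing.
    return SPAM if "check out" in text.lower() else ABSTAIN
\end{verbatim}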
In addition to these two-stage methods, researchers have also explored the possibility of coupling the label model and the end model in an end-to-end manner~\cite{ren2020denoising, lan2020connet, karamanolakis2021self}. We refer to these one-stage methods as \emph{joint models}. An overview of the WS pipeline can be found in Fig.~\ref{fig:overview}.

Despite the increasing adoption of WS approaches, a common benchmark platform is still missing, leading to an evaluation space that is currently rife with custom and/or private datasets, weak supervision sources that vary in often hidden and uncontrolled ways, and basic evaluation protocols that are highly variable. Several thematic issues are widespread in the space:
\begin{itemize}[leftmargin=*]
\item \textbf{Private and/or custom datasets:} Due to the lack of standardized benchmark datasets, researchers often construct their own datasets for comparison. In particular, WS approaches are often practically motivated by real-world use cases where labeled data is difficult to generate; the resulting datasets are often based on real production needs and therefore are not released due to privacy issues.
\item \textbf{Hidden weak supervision source variance:} Unlike traditional supervised learning problems, WS datasets vary not just in the unlabeled data $X$, but also crucially in the labels $Y$ and the weak supervision sources they derive from. This latter degree of variance has a major effect on the performance of WS approaches; however, it is often poorly documented and controlled for. For example, it is not uncommon to have two datasets with \textit{completely different weak supervision sources} bear the exact same name (usually deriving from the source of the unlabeled data $X$) in experimental results, despite being entirely different datasets from a WS perspective.
\item \textbf{End-to-end evaluation protocol:} WS approaches involve more complex (e.g. two-stage) pipelines, requiring greater (yet often absent) care to normalize and control evaluations. For example, it is not uncommon to see significant variance in the stage of a two-stage pipeline for which performance numbers are reported, in the type of training labels produced, etc.
\end{itemize}
To address these issues and contribute a resource to the growing WS community, we developed \textbf{W}eak Supe\textbf{r}vision B\textbf{ench}mark (\textsc{Wrench}\xspace), a benchmark platform for WS with $22$ diverse datasets from the literature, a range of standardized real, synthetic, and procedurally generated weak supervision sources, and a modular, extendable framework for execution and evaluation of WS approaches, along with initial implementations of recent popular WS methods. \textsc{Wrench}\xspace includes:
\begin{itemize}[leftmargin=*]
\item A diverse (and easily extensible) set of 22 real-world datasets for two canonical, annotation-intensive machine learning problems, classification and sequence tagging, including datasets used in existing WS studies and new ones we contribute.
\item A range of real (user-generated) weak supervision sources, and new synthetic and procedural weak supervision source generators, enabling systematic study of the effect of different supervision source types on the performance of WS methods, e.g. with respect to accuracy, variance, sparsity, conflict and overlap, correlation, and more.
\item A modular, extensible Python codebase for standardization of implementation, evaluation, and ablation of WS methods, including standardized evaluation scripts for prescribed metrics, unified interfaces for publicly available methods, and re-implementations of some other popular ones.
\end{itemize}

To demonstrate the utility of \textsc{Wrench}\xspace, we analyze the effect of a range of weak supervision attributes using \textsc{Wrench}\xspace's procedural weak supervision generation suite, illustrating the effect of various salient factors on WS method efficacy (Sec.~\ref{sec:generator}). We also conduct extensive experiments to render a comprehensive evaluation of popular WS methods (Sec.~\ref{sec:exp}), exploring more than $100$ compared methods and their variants (83 for classification and 26 for sequence tagging). We plan to \emph{continuously update} \textsc{Wrench}\xspace with more datasets and methods as the field advances. We welcome contributions and expect the scope and breadth of \textsc{Wrench}\xspace to increase over time.

\section{Related Work}
\paragraph{Weak Supervision.} Weak supervision builds on many previous approaches in machine learning, such as distant supervision \cite{mintz2009distant,Hoffmann2011KnowledgeBasedWS,Takamatsu2012ReducingWL}, crowdsourcing \cite{Gao2011HarnessingTC,Krishna2016VisualGC}, co-training methods \cite{Blum1998CombiningLA}, pattern-based supervision \cite{Gupta2014ImprovedPL}, and feature annotation \cite{Mann2010GeneralizedEC,Zaidan2008ModelingAA}. Specifically, weak supervision methods take multiple noisy supervision sources and an unlabeled dataset as input, aiming to generate training labels to train an end model (two-stage methods) or to directly produce the end model for the downstream task (one-stage methods) without any manual annotation.
\vspace{-2mm}
\paragraph{Weak Supervision for Classification.} Classification is one of the fundamental machine learning problems. Its extensive dependency on manual annotations makes it well suited for weak supervision. The label models for classification are mainly probabilistic graphical models~\cite{Ratner16, Ratner19, fu2020fast} with different parameter estimation techniques, \eg, SGD~\cite{Ratner16}, matrix completion~\cite{Ratner19}, or the triplet method~\cite{fu2020fast}. On the other hand, \cite{yu-etal-2021-fine} proposes to improve the end model by leveraging uncovered data in a self-training framework. In addition, one-stage methods~\cite{ren2020denoising} combine noisy-source aggregation and end-model training in a joint framework.
\vspace{-2mm}
\paragraph{Weak Supervision for Sequence Tagging.} Sequence tagging from multiple weak supervision sources is more challenging, as there exist internal dependencies between token-level labels within a sequence. To tackle this challenge, Hidden Markov Models (HMMs) have been adopted for label denoising: \cite{lison2020named,nguyen2017aggregating} use a standard HMM with multiple observed variables, each from one labeling source. \cite{safranchik2020weakly} improves the HMM by introducing unique linking rules as an additional supervision source; \cite{li2021bertifying} predicts token-wise transition and emission probabilities from BERT embeddings to utilize the context information. Besides, \cite{lan2020connet} is a one-stage method that models each labeling source by a CRF layer and aggregates their transitions with an attention network.
\vspace{-2mm}
\paragraph{Weak Supervision Sources Generation.} To further reduce the effort of designing supervision sources, many works propose to generate supervision sources automatically. Snuba~\cite{varma2018snuba} generates heuristics based on a small set of labeled data. \cite{boecking2021interactive} and \cite{darwin} interactively generate labeling functions based on user feedback. TALLOR~\cite{TALLOR} and GLaRA~\cite{glara} automatically augment an initial set of labeling functions with new ones. Different from existing works that optimize task performance, the procedural labeling function generators in \textsc{Wrench}\xspace facilitate the study of the impact of different weak supervision sources. Therefore, we assume access to a fully-labeled dataset and generate diverse types of weak supervision sources.
\vspace{-2mm}
\paragraph{The Scope of this Benchmark.} We are aware that there are numerous works on learning with \emph{noisy} or \emph{distantly labeled} data for various tasks, including relation extraction~\cite{luo2017learning,mintz2009distant,Takamatsu2012ReducingWL}, sequence tagging~\cite{liang2020bond,liu2021noisy,peng2019distantly,autoner}, image classification~\cite{co_teaching,li2021mopro,mirzasoleiman2020coresets}, and visual relation detection~\cite{yao2021visual,Zhang_2017_ICCV}. There are also several benchmarks targeting this topic~\cite{hedderich2021analysing,pmlrjiang20c,riedel2010modeling,Xiao_2015_CVPR,chu2021natcat} with different noise levels and patterns. However, these studies mainly concentrate on learning with \emph{single-source} noisy labels and cannot leverage complementary information from multiple annotation sources in weak supervision. Separately, there are several works~\cite{Awasthi2020Learning,karamanolakis2021self,maheshwari2021semi, mazzetto:aistats21, mazzetto:icml21} leveraging additional clean, labeled data for denoising multiple weak supervision sources, while our focus is on benchmarking weak supervision methods that do not require any labeled data. Therefore, we currently do not include these methods in \textsc{Wrench}\xspace; that being said, we plan to gradually incorporate them in the future.

\section{Background: Weak Supervision}
\label{sec:background}
We first give some background on weak supervision (WS) at a high level. In the WS paradigm, multiple weak supervision sources are provided which assign labels to data; these labels may be inaccurate, correlated, or otherwise noisy. The goal of a WS approach is the same as in supervised learning: to train an \textit{end} model based on the data and the weak supervision source labels. This can be broken up into a \textit{two-stage} approach, separating the integration and modeling of WS from the training of the end model, or tackled jointly as a \textit{one-stage} approach.

\subsection{Problem Setup}
\label{sec:overview_problem}
We more formally define the setting of WS here. We are given a dataset containing $n$ data points $\bm{X}=[X_1, X_2, \ldots, X_n]$ with the $i$-th data point denoted by $X_i \in \mathcal{X}$. Let $m$ be the number of WS sources $\{S_j\}_{j=1}^m$, each assigning a label $\lambda_j \in \mathcal{Y}$ to $X_i$ to vote on its respective $Y_i$, or abstaining ($\lambda_j=-1$). We define the \emph{propensity} of a source $S_j$ as $p(\lambda_j \neq -1)$. For concreteness, we follow the general convention of WS~\cite{Ratner16} and refer to these sources as \emph{labeling functions} (LFs) throughout the paper.
In \textsc{Wrench}\xspace, we focus on two major machine learning tasks:

\paragraph{Classification:} for each $X_i$, there is an unobserved true label denoted by $Y_i \in \mathcal{Y}$. A label matrix $L\in\mathbb{R}^{n \times m}$ is obtained by applying the $m$ LFs to the dataset $\bm{X}=[X_1, X_2, \ldots, X_n]$. We seek to build an end model $f_w: \mathcal{X} \rightarrow \mathcal{Y}$ to infer the label $\hat{Y}$ for each $X \in \bm{X}$.

\paragraph{Sequence tagging:} each $X_i\in \bm{X}$ is a sequence of tokens $[x_{i,1}, x_{i,2}, \ldots, x_{i,t}]$, where $t$ is the length of $X_i$, with an unobserved true label list denoted by $Y_i = [y_{i,1}, y_{i,2}, \ldots, y_{i,t}]$ where $y_{i,j} \in \mathcal{Y}$. For each sequence $X_i$ with its associated label matrix $L_i\in\mathbb{R}^{m \times t}$, we aim to produce a sequence tagging model $f_w: \mathcal{X} \rightarrow \mathcal{Y}$ which infers labels $\hat{Y} = [\hat{y}_1, \hat{y}_2, \ldots, \hat{y}_t]$ for each sequence.

It is worth noting that, different from the \textit{semi-supervised} setting and some recent WS work where some ground-truth labeled data is available~\cite{Awasthi2020Learning,maheshwari2021semi,karamanolakis2021self,mazzetto:icml21, mazzetto:aistats21}, we consider the setting where we train the end model \emph{without observing any ground-truth training labels}. However, we note that \textsc{Wrench}\xspace can be extended in future work to accommodate these settings as well.

\subsection{Two-stage Method}
Two-stage methods usually decouple the training of the label model from that of the end model. In the first stage, a \textit{label model} is used to aggregate the votes in the label matrix $L$ into either probabilistic \textit{soft labels} or one-hot \textit{hard labels}, which are in turn used to train the desired \emph{end model} in the second stage. Most studies focus on developing label models while leaving the end model flexible to the downstream tasks. Existing label models include Majority Voting (MV), Probabilistic Graphical Models (PGMs) \cite{DawidSkene, Ratner16,Ratner19, fu2020fast,lison2020named,safranchik2020weakly,li2021bertifying}, \etc. Note that prior crowd-worker modeling work can be included and subsumed by this set of approaches, e.g.~\cite{DawidSkene}.

\subsection{One-stage Method}
One-stage methods attempt to effectively train the label model and end model simultaneously~\cite{ren2020denoising,lan2020connet}. Specifically, they usually design one neural network to aggregate the predictions of the labeling functions, while utilizing another neural network for the final prediction. We refer to the model designed for one-stage methods as a \emph{joint model}.
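As a concrete point of reference for the two-stage setting, the following is a minimal sketch (ours for illustration, not the \textsc{Wrench}\xspace API) of the simplest label model, majority voting: it aggregates an $n \times m$ label matrix $L$, with abstains encoded as $-1$, into hard labels that could then be fed to any end model.
\begin{verbatim}
import numpy as np

def majority_vote(L: np.ndarray, n_classes: int) -> np.ndarray:
    """Aggregate an (n x m) label matrix into hard labels.

    Entries of L are class indices in {0, ..., n_classes - 1},
    or -1 when a labeling function abstains.
    """
    n = L.shape[0]
    y_hat = np.empty(n, dtype=int)
    for i in range(n):
        votes = L[i][L[i] != -1]                     # drop abstains
        if votes.size == 0:
            y_hat[i] = np.random.randint(n_classes)  # uncovered point: random tie-break
        else:
            y_hat[i] = np.bincount(votes, minlength=n_classes).argmax()
    return y_hat
\end{verbatim}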
\section{Wrench Benchmark Platform} \begin{table}[t] \begin{minipage}[b]{0.56\linewidth} \centering \caption{Statistics of all the tasks, domains and datasets included in \textsc{Wrench}\xspace.} \scalebox{0.5}{ \begin{tabular}{ l l l c c c c c } \toprule \multicolumn{5}{c}{} & \multicolumn{1}{c}{\textbf{Train}} & \multicolumn{1}{c}{\textbf{Dev}} & \multicolumn{1}{c}{\textbf{Test}} \\ \cmidrule(lr){6-6} \cmidrule(lr){7-7} \cmidrule(lr){8-8} \textbf{Task ($\downarrow$)} &\textbf{Domain ($\downarrow$)} & \textbf{Dataset ($\downarrow$)} & \textbf{\#Label} & \textbf{\#LF} &\textbf{\#Data} & \textbf{\#Data} & \textbf{\#Data} \\ \midrule \multirow{1}{*}{Income Class.} & Tabular Data & Census~\cite{kohavi1996scaling, Awasthi2020Learning} & 2 & 83 & 10,083 & 5,561 & 16,281 \\\midrule[0.05pt] \midrule[0.05pt] \multirow{2}{*}{Sentiment Class.} & Movie & IMDb~\cite{IMDB,ren2020denoising} & 2 & 5 & 20,000 & 2,500 & 2,500 \\ & Review & Yelp~\cite{AGNews,ren2020denoising} & 2 & 8 & 30,400 & 3,800 & 3,800 \\\midrule[0.05pt] \midrule[0.05pt] \multirow{2}{*}{Spam Class. } & Review & Youtube~\cite{youtube} & 2 & 10 & 1,586 & 120 & 250 \\ & Text Message & SMS~\cite{sms, Awasthi2020Learning} & 2 & 73 & 4,571 & 500 & 500 \\\midrule[0.05pt] \midrule[0.05pt] \multirow{1}{*}{Topic Class.} & News & AGNews~\cite{AGNews,ren2020denoising} & 4 & 9 & 96,000 & 12,000 & 12,000 \\\midrule[0.05pt] \midrule[0.05pt] \multirow{1}{*}{Question Class.} & Web Query & TREC~\cite{trec, Awasthi2020Learning} & 6 & 68 & 4,965 & 500 & 500 \\\midrule[0.05pt] \midrule[0.05pt] \multirow{4}{*}{Relation Class.} & News & Spouse~\cite{spouse, ratner2017snorkel} & 2 & 9 & 22,254 & 2,811 & 2,701 \\ & Biomedical & CDR~\cite{davis2017comparative, ratner2017snorkel} & 2 & 33 & 8,430 & 920 & 4,673 \\ & Web Text & SemEval~\cite{hendrickx2010semeval, zhou2020nero} & 9 & 164 & 1,749 & 200 & 692 \\ & Chemical & ChemProt~\cite{chemprot,yu-etal-2021-fine} & 10 & 26 & 12,861 & 1,607 & 1,607 \\\midrule[0.05pt] \midrule[0.05pt] \multirow{3}{*}{Image Class.} & \multirow{3}{*}{Video} & Commercial~\cite{fu2020fast} & 2 & 4 & 64,130 & 9,479 & 7,496 \\ & & Tennis Rally~\cite{fu2020fast} & 2 & 6 & 6,959 & 746 & 1,098 \\ & & Basketball~\cite{fu2020fast} & 2 & 4 & 17,970 & 1,064 & 1,222 \\\midrule[0.05pt] \midrule[0.05pt] \multirow{8}{*}{Sequence Tagging} &News & CoNLL-03~\cite{conll03,lison2020named} & 4 & 16 & 14,041 & 3250 & 3453 \\\cmidrule(lr){2-8} & \multirow{2}{*}{Web Text} & WikiGold~\cite{wikigold,lison2020named} & 4 & 16 & 1,355 & 169 & 170 \\ & & OntoNotes 5.0~\cite{weischedel2011ontonotes} & 18 & 17 & 115,812 & 5,000 & 22,897\\ \cmidrule(lr){2-8} & \multirow{2}{*}{Biomedical} & BC5CDR~\cite{cdr,li2021bertifying} & 2 & 9 & 500 & 500 & 500 \\ & & NCBI-Disease~\cite{dougan2014ncbi,li2021bertifying} & 1 & 5 & 592 & 99 & 99 \\\cmidrule(lr){2-8} & \multirow{2}{*}{Review} & Laptop-Review~\cite{laptop,li2021bertifying} & 1 & 3 & 2,436 & 609 & 800 \\ & & MIT-Restaurant~\cite{mitr} & 8 & 16 & 7,159 & 500 & 1,521 \\\cmidrule(lr){2-8} & Movie & MIT-Movies~\cite{mitmovie} & 12 & 7 & 9,241 & 500 & 2,441 \\ \bottomrule \end{tabular} } \label{tab:dataset_stats} \end{minipage}\hfill \begin{minipage}[b]{0.4\linewidth} \centering \includegraphics[width=1.0\textwidth]{figures/stats_all_box.png} \captionof{figure}{\textbf{Box plots}: The coverage, overlap, conflict and accuracy of LFs in collected datasets. 
We can see the LFs have diverse properties across datasets.}
\label{fig:dataset_stats_box}
\end{minipage}
\end{table}

We propose the first benchmark platform, \textsc{Wrench}\xspace, for weak supervision (WS). Specifically, \textsc{Wrench}\xspace includes the following components:

\paragraph{A collection of 22 real-world datasets.} We collect 22 publicly available real-world datasets and the corresponding user-provided LFs from the literature. The statistics of the datasets are in Table~\ref{tab:dataset_stats}. The datasets cover a wide range of topics, including both generic domains such as web text, news, and videos, and specialized ones including biomedical and chemical publications. The corresponding LFs take various forms, such as keywords~\cite{autoner}, regular expressions~\cite{Awasthi2020Learning}, knowledge bases~\cite{liang2020bond}, and human-provided rules~\cite{fu2020fast}. Some relevant statistics of the LFs are in Fig.~\ref{fig:dataset_stats_box}; the box plots demonstrate that the LFs have diverse properties across datasets, enabling more thorough comparisons among WS approaches. The description of each dataset and detailed statistics are in App.~\ref{sec:dataset}.

\paragraph{A range of procedural labeling function generators.} In addition to the manually-created LFs coupled with each dataset, \textsc{Wrench}\xspace provides a range of procedural labeling function generators for the first time, giving users fine-grained control over the space of weak supervision sources. This allows researchers to evaluate and diagnose WS methods on (1) synthetic datasets or (2) real-world datasets with procedurally generated LFs. Based on the generators, users can study the relationship between different weak supervision sources and WS method performance. The details of the generators and the provided studies are in Sec.~\ref{sec:generator}. Notably, the unified interface of \textsc{Wrench}\xspace allows users to easily add more generators covering new types of LFs.

\paragraph{Abundant baseline methods and extensive comparisons.} \textsc{Wrench}\xspace provides unified interfaces for a range of publicly available and popular methods. A summary of the models currently included in \textsc{Wrench}\xspace is in Table~\ref{tab:methods}. With careful modularization, users can pick \emph{any} label model and end model to form a two-stage WS method, while also choosing to use soft or hard labels for training the end model, leading to more than 100 method variants. We conduct extensive experiments to offer a systematic comparison over all the models and possible variants on the collected 22 datasets (Sec.~\ref{sec:exp}). Another benefit of this modularity is that other approaches can be easily contributed, and we plan to add more models in the future.

\begin{table*}[t]
\centering
\caption{The initial set of methods included in \textsc{Wrench}\xspace. A brief introduction of each method can be found in App.~\ref{sec:methods}.
We plan to add more methods in the near future.}
\scalebox{0.6}{
\setlength{\tabcolsep}{2em}
\begin{tabular}{ l l l l }
\toprule
\textbf{Task} & \textbf{Module} & \textbf{Method} &\textbf{Abbr.} \\
\midrule
\multirow{14}{*}{Classification} & \multirow{6}{*}{Label Model} & Majority Voting & {MV} \\
& & Weighted Majority Voting & {WMV} \\
& & Dawid-Skene~\cite{DawidSkene} & {DS} \\
& & Data Programming~\cite{Ratner16} & {DP} \\
& & MeTaL~\cite{Ratner19} & {MeTaL} \\
& & FlyingSquid~\cite{fu2020fast} & {FS} \\
\cmidrule(lr){2-4}
& \multirow{5}{*}{End Model} & Logistic Regression & {LR} \\
& & Multi-Layer Perceptron Neural Network & {MLP} \\
& & {BERT~\cite{devlin2019bert}} & B \\
& & {RoBERTa~\cite{liu2019roberta}} & R \\
& & {COSINE-BERT~\cite{yu-etal-2021-fine}} & BC \\
& & {COSINE-RoBERTa~\cite{yu-etal-2021-fine}} & RC \\
\cmidrule(lr){2-4}
& \multirow{1}{*}{Joint Model} & {Denoise~\cite{ren2020denoising}} & {Denoise} \\\midrule[0.05pt] \midrule[0.05pt]
\multirow{6}{*}{Sequence Tagging} & \multirow{2}{*}{Label Model} & Hidden Markov Model~\cite{lison2020named} & {HMM} \\
& & Conditional Hidden Markov Model~\cite{li2021bertifying} & {CHMM} \\
\cmidrule(lr){2-4}
& \multirow{2}{*}{End Model} & LSTM-CNNs-CRF~\cite{ma2016end} & {LSTM-CNNs-CRF} \\
& & BERT~\cite{devlin2019bert} & {BERT} \\
\cmidrule(lr){2-4}
& \multirow{1}{*}{Joint Model} & Consensus Network~\cite{lan2020connet} & {ConNet} \\
\bottomrule
\end{tabular}
}
\label{tab:methods}
\end{table*}

\section{Labeling Function Generators}
\label{sec:generator}

In addition to the user-generated labeling functions collected as part of the 22 benchmark datasets in \textsc{Wrench}\xspace, we provide two types of weak supervision source generators in \textsc{Wrench}\xspace in order to enable fine-grained exploration of WS method efficacy across different types of weak supervision: (1) \textit{synthetic} labeling function generators, which directly generate labels from simple generative label models; (2) \textit{procedural} labeling function generators, which automatically generate different varieties of real labeling functions given an input labeled dataset. In this section, we introduce the generators in detail and provide sample studies that demonstrate how the generators enable fine-grained exploration of the relationship between weak supervision source qualities and WS method performance. For simplicity, in this section we restrict our study to binary classification tasks; details on implementation and parameters can be found in App.~\ref{sec:para_study}.

\subsection{Synthetic Labeling Function Generator}

The synthetic labeling function generators are independent of the input data $X$; instead, they directly generate the labeling function output labels. We provide one initial synthetic generator in \textsc{Wrench}\xspace's first release, which generates labels according to a classic model in which the LF outputs are conditionally independent given the unseen true label $Y$~\cite{DawidSkene, Ratner16}. For this model, \textsc{Wrench}\xspace provides users with control over two important dimensions: accuracy and propensity. In addition, users can control the \emph{variance} of the LF accuracies and propensities via the respective \emph{radius} of the accuracy and propensity parameters. For example, the accuracy of each LF could be chosen to be uniformly sampled from $[a-b, a+b]$, where $a$ is the mean accuracy and $b$ is the radius of accuracy, resulting in a variance of $\frac{(2b)^2}{12}=\frac{b^2}{3}$.
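To make the generative process concrete, below is a minimal sketch of this conditionally independent synthetic generator (our own illustrative code, not the \textsc{Wrench}\xspace implementation), for binary labels with per-LF accuracy and propensity:
\begin{verbatim}
import numpy as np

def synthetic_lfs(y, acc, prop, seed=0):
    """Sample an (n x m) label matrix for binary true labels y in {0, 1}.

    acc[j]  : P(lambda_j = y | lambda_j != -1)  (accuracy of LF j)
    prop[j] : P(lambda_j != -1)                 (propensity of LF j)
    LF outputs are conditionally independent given the true label.
    """
    rng = np.random.default_rng(seed)
    y = np.asarray(y)
    n, m = len(y), len(acc)
    L = np.full((n, m), -1, dtype=int)
    for j in range(m):
        fires = rng.random(n) < prop[j]      # which points LF j labels
        correct = rng.random(n) < acc[j]     # whether its vote is right
        L[fires, j] = np.where(correct, y, 1 - y)[fires]
    return L
\end{verbatim}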
We construct the synthetic label generators to be extensible, for example, to include more controllable parameters and more complex models, e.g. relaxing the conditional independence assumption.

Based on this generator, we study different dimensions of LFs and find that the comparative performance of label models is largely dependent on the variance of the accuracy and propensity of the LFs. First, we fix the other dimensions, vary the radius of the LFs' accuracy, and generate $\bm{Y}$ and LFs for binary classification. As shown in Fig.~\ref{fig:syn}(a), the performance of label models diverges as we increase the variance of the LFs' accuracy by increasing the radius of accuracy. Secondly, we vary the propensity of the LFs. From the curves in Fig.~\ref{fig:syn}(b), we can see that as we increase the propensity of the LFs, the label models' performance keeps increasing and eventually converges, while when the propensity is lower, the label models perform differently. These observations indicate the importance of these dimensions of the LFs, which can lead to distinct comparative performance of label models.

\begin{figure}[t]
\begin{minipage}{.45\linewidth}
\centering
\subfloat[]{\label{main:a}\includegraphics[scale=.2]{figures/acc_var.png}}
\end{minipage}%
\begin{minipage}{.45\linewidth}
\centering
\subfloat[]{\label{main:b}\includegraphics[scale=.2]{figures/propensity.png}}
\end{minipage}
\caption{Label model performance (AUC) on synthetic LFs with varying (a) radius of LF accuracy and (b) propensity. We can see that when the radius of LF accuracy is large or the propensity of the LFs is small, the label model performances are more divergent.}
\label{fig:syn}
\end{figure}

\subsection{Procedural Labeling Function Generator}

The procedural labeling function generator class in \textsc{Wrench}\xspace requires as input a labeled dataset $(X, Y)$, i.e. with data features and ground-truth training labels. The procedural generators create a pool of \emph{candidate LFs} based on a given \textit{feature lexicon}. Each candidate LF $S$ consists of a single or composite feature from the provided lexicon and a label. The final set of generated LFs consists of those candidate LFs whose individual parameters (e.g. accuracies and propensities) and group parameters (e.g. correlation and data-dependency) meet user-provided thresholds. These procedurally-generated LFs mimic the form of user-provided ones, but enable fine-grained control over the LF attributes. In this section, we provide an example study of how label models perform with different types of LFs on two real-world text classification datasets, Yelp and Youtube, to demonstrate the utility of these procedural generators. For simplicity, we adopt ($n$, $m$)-gram features, where $n$ and $m$ are the minimum and maximum gram lengths, respectively, and are input by users. Specifically, a candidate LF $S$ consists of one label value $y$ and an ($n$, $m$)-gram feature $f$; for each data point, if the feature $f$ is present, then $S$ assigns label $y$; otherwise it abstains ($\lambda=-1$). We generate and examine three sets of LFs, namely, the LFs with the highest (1) accuracies, (2) pairwise correlations, and (3) data-dependencies (Fig.~\ref{fig:semi}). For (2), the correlation of a pair of candidate LFs is measured by their \emph{conditional mutual information} given the ground-truth labels $\bm{Y}$.
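For reference, a minimal sketch of this correlation measure for a pair of LF output columns (our own illustrative code, a plug-in estimate over the empirical joint distribution, treating abstains as just another outcome):
\begin{verbatim}
import numpy as np
from collections import Counter

def conditional_mutual_information(li, lj, y):
    """Estimate I(lambda_i; lambda_j | Y) from samples (plug-in estimator)."""
    n = len(y)
    cmi = 0.0
    for c in set(y):
        idx = [k for k in range(n) if y[k] == c]
        p_y = len(idx) / n
        joint = Counter((li[k], lj[k]) for k in idx)   # p(a, b | y)
        pi = Counter(li[k] for k in idx)               # p(a | y)
        pj = Counter(lj[k] for k in idx)               # p(b | y)
        for (a, b), cnt in joint.items():
            p_ab = cnt / len(idx)
            cmi += p_y * p_ab * np.log(p_ab * len(idx) ** 2 / (pi[a] * pj[b]))
    return cmi
\end{verbatim}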
We are interested in (2) because existing works often assume the LFs are conditionally independent given $\bm{Y}$~\cite{Ratner16, Bach2017LearningTS}; however, users can hardly generate perfectly conditionally independent LFs, so it is of great importance to study how label models perform when the LFs are not conditionally independent. The reason for (3) is that previous studies typically assume the LFs are \emph{uniformly} accurate across the dataset~\cite{Ratner16, Bach2017LearningTS, Ratner19, fu2020fast}; in practice, this is another often-violated assumption, e.g. a given LF is often more accurate on some subsets of the data than on others. Thus, we measure the data-dependency of an LF by the variance of its accuracy over clusters of the data, and we pick the LFs with the highest data-dependency.

The results are in Fig.~\ref{fig:semi}. First, in the case of the top-k accurate LFs (Fig.~\ref{fig:semi}(a)\&(d)), the label models perform similarly; however, for the other two types of LFs, there are large gaps between label models, and the superiority of recently proposed methods, \ie, DP, MeTaL, and FS, can be clearly seen. Secondly, even within the same type of LFs, one label model can exhibit varying performance on different datasets; for example, when correlated LFs are generated (Fig.~\ref{fig:semi}(b)\&(e)), the DS model performs much better on Yelp than on Youtube relative to the MV model. These observations further confirm that the LFs have a major effect on the efficacy of different WS approaches, and that it is critical to provide a benchmark suite for WS with varied datasets and varying types of LFs.

\begin{figure*}[t]
\begin{minipage}{.33\linewidth}
\centering
\subfloat[]{\label{main:a}\includegraphics[scale=.18]{figures/Youtube_Top-K_Accurate.png}}
\end{minipage}%
\begin{minipage}{.33\linewidth}
\centering
\subfloat[]{\label{main:b}\includegraphics[scale=.18]{figures/Youtube_Top-K_Correlated.png}}
\end{minipage}%
\begin{minipage}{.33\linewidth}
\centering
\subfloat[]{\label{main:c}\includegraphics[scale=.18]{figures/Youtube_Top-K_Data-Dependent.png}}
\end{minipage}
\begin{minipage}{.33\linewidth}
\centering
\subfloat[]{\label{main:d}\includegraphics[scale=.18]{figures/Yelp_Top-K_Accurate.png}}
\end{minipage}%
\begin{minipage}{.33\linewidth}
\centering
\subfloat[]{\label{main:e}\includegraphics[scale=.18]{figures/Yelp_Top-K_Correlated.png}}
\end{minipage}%
\begin{minipage}{.33\linewidth}
\centering
\subfloat[]{\label{main:f}\includegraphics[scale=.18]{figures/Yelp_Top-K_Data-Dependent.png}}
\end{minipage}
\caption{Label model performance (AUC) on Youtube and Yelp with varying types of procedural LFs, namely, top-k accurate, correlated, or data-dependent LFs. We can see that the dependency properties of the LFs (correlated and data-dependent) have a major effect on the comparative performance of label models.}
\label{fig:semi}
\end{figure*}

\section{Benchmark Experiments}
\label{sec:exp}

To demonstrate the utility of \textsc{Wrench}\xspace in providing fair and rigorous comparisons among WS methods, we conduct extensive experiments on the collected real-world datasets with a unified evaluation protocol. Here, we consider all the possible ways to compose a two-stage method using the initial models that we implement in \textsc{Wrench}\xspace (Table~\ref{tab:methods}), ablating over the choice of soft and hard labels, as well as considering the one-stage methods listed.
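Since the soft/hard ablation recurs throughout the experiments, the following is a minimal sketch of the distinction (our own illustrative code; the model, optimizer, and tensors are assumed): hard-label training takes the argmax of the label model's posterior, while soft-label training keeps the full posterior as the target of a cross-entropy loss.
\begin{verbatim}
import torch
import torch.nn.functional as F

def train_step(model, optimizer, x, posterior, use_soft_labels=True):
    """One end-model update from label-model posteriors (n x n_classes)."""
    optimizer.zero_grad()
    logits = model(x)
    if use_soft_labels:
        # Cross-entropy against the full probabilistic (soft) labels.
        loss = -(posterior * F.log_softmax(logits, dim=1)).sum(dim=1).mean()
    else:
        # One-hot (hard) labels: argmax of the posterior.
        loss = F.cross_entropy(logits, posterior.argmax(dim=1))
    loss.backward()
    optimizer.step()
    return loss.item()
\end{verbatim}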
\subsection{Classification}
\subsubsection{Evaluation Protocol}
We evaluate the performance of (1) the label model directly applied on test data; (2) the end model trained with labels provided by the label model, for two-stage methods; (3) the end model trained within a joint model, for one-stage methods; and (4) the ``gold'' method, namely training an end model with ground-truth labels, with different end models. We include all the possible two-stage methods as well as the variants using soft or hard labels in our comparison, leading to 83 methods in total. For each dataset, we adopt the evaluation metric used in previous work. For LR and MLP applied on textual datasets, we use a pre-trained BERT model to extract the features. Note that for the Spouse dataset, we do not have ground-truth training labels. In addition, due to privacy issues, for the video frame classification datasets (\ie, Commercial, Tennis Rally and Basketball), we only have access to features extracted by a pre-trained image classifier instead of the raw images; thus, we do not include any image classification models, but use LR and MLP as end models instead.
\subsubsection{Evaluation Results}
Due to the space limit, we defer the complete results as well as the standard deviations to App.~\ref{sec:results}, presenting only the top 3 best WS methods and the gold method with the best end model for each dataset in Table~\ref{tab:classification_main}. From the table, we observe a diversity of best WS methods across datasets. In other words, no single method consistently outperforms the others. This observation demonstrates that it remains challenging to design a generic method that works for diverse tasks. In addition, it is unclear whether soft or hard labels lead to the best end-model performance. For textual datasets, it is safe to conclude that fine-tuning a large pre-trained language model is the best choice of end model, and COSINE successfully improves the performance of fine-tuned language models. Moreover, fine-tuning a pre-trained language model is, not surprisingly, much better than directly applying the label model on test data in most cases, because it is well known that large pre-trained language models like BERT can easily adapt to new tasks with good generalization performance. However, one exception is CDR, where the label models MeTaL and DP outperform the RoBERTa-based COSINE trained with labels provided by DP, even beating the best gold method.

\begin{table*}[t]
\centering
\caption{\textbf{Classification.} The performance of the best gold method and top 3 best weak supervision methods for each dataset. EM and LM stand for the end model and label model respectively. \underline{Underline} indicates using the soft label for training the end model. Datasets with * are non-textual data on which BERT/RoBERTa are not applicable. Each metric value is averaged over 5 runs.}
\scalebox{0.7}{
\begin{tabular}{ l l c c c c c c c c c c c }
\toprule
\multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{2}{c}{\textbf{Best Gold}} & \multicolumn{3}{c}{\textbf{Top 1}} & \multicolumn{3}{c}{\textbf{Top 2}} & \multicolumn{3}{c}{\textbf{Top 3}} \\
\cmidrule(lr){3-4} \cmidrule(lr){5-7} \cmidrule(lr){8-10} \cmidrule(lr){11-13}
\textbf{Dataset} & \textbf{Metric} &\textbf{EM} & \textbf{Value} &\textbf{EM} & \textbf{LM} & \textbf{Value} & \textbf{EM} & \textbf{LM} &\textbf{Value} & \textbf{EM} & \textbf{LM} & \textbf{Value} \\
\midrule
IMDb &Acc.
& R & 93.25 &RC& \underline{MeTaL}&88.86 &RC&\underline{FS}&88.48 &RC&\underline{MV}&88.48 \\\midrule
Yelp&Acc. & R & 97.13 &RC&FS&95.45&RC&\underline{FS}&95.33&RC&\underline{DS}&95.01 \\\midrule
Youtube&Acc. & B & 97.52 &BC&MV&98.00 &RC&MV&97.60 &RC&\underline{MV}&97.60 \\\midrule
SMS&F1 & B & 96.96 &RC&WMV&98.02 &RC&MeTaL&97.71 &RC&\underline{WMV}&97.27 \\\midrule
AGNews&Acc. & R & 91.39 &RC&DS&88.20 &RC&MV&88.15 &RC&\underline{WMV}&88.11 \\\midrule
TREC &Acc. & R & 96.68 &RC&DP&82.36&RC&\underline{MeTaL}&79.84&BC&DP&78.72 \\\midrule
Spouse &F1 & -- & -- & BC &FS&56.52 &--&MeTaL&46.62 &RC&\underline{MV}&46.28 \\\midrule
CDR & F1 & R & 65.86 &--&MeTaL&69.61 &--&DP&63.51 &RC&DP&61.40 \\\midrule
SemEval & Acc. & B & 95.43 &BC&\underline{DP}&88.77 &BC&MV&86.80 &RC&\underline{DP}&86.73 \\\midrule
ChemProt & Acc. & B & 89.76 &BC&\underline{DP}&61.56 &RC&MV&59.43 &RC&\underline{MV}&59.32 \\\midrule
Commercial* &F1 & MLP & 91.69 &\multicolumn{2}{c}{Denoise}&91.34 &LR&MV&90.62 &MLP&\underline{MV}&90.55 \\\midrule
Tennis Rally* &F1 & LR & 82.73 &MLP&\underline{FS}&83.77 &MLP&\underline{MeTaL}&83.70 &LR&\underline{FS} &83.68 \\\midrule
Basketball* &F1 & MLP & 64.97 &MLP&\underline{FS}&43.18 &MLP&\underline{WMV}&40.73 &MLP&\underline{DP}&40.70 \\\midrule
Census*&F1 & MLP & 67.13 &LR&\underline{MeTaL}&58.16 &MLP&\underline{MeTaL}&57.84&MLP&MeTaL&57.66 \\
\bottomrule
\end{tabular}
}
\label{tab:classification_main}
\end{table*}

\subsection{Sequence Tagging}
\subsubsection{Evaluation Protocol}
Following the same evaluation scheme as for the classification tasks, we evaluate the performance of (1) the label models; (2) the end models trained with predictions from the label models; (3) the joint models; and (4) the end models trained with gold labels on the training set. Note that following previous works~\cite{lan2020connet,safranchik2020weakly,autoner}, we adopt \emph{hard} labels in order to fit end models which contain CRF layers. To adapt label models designed for classification tasks to sequence tagging tasks, we split each sequence into tokens and reformulate the problem as token-level classification. (We discuss the detailed procedure for adapting label models to sequence tagging tasks in Appendix \ref{sec:adapt}.) However, these models neglect the internal dependency between labels within a sequence. In contrast, HMM and CHMM take the whole sequence as input and predict the labels for all tokens in the sequence. For the end models with LSTM/BERT, we run experiments in two settings: (1) stacking a CRF layer on top of the model, and (2) using a classification head for token-level classification; the best performance is reported. Please see App.~\ref{sec:seq_app} for details. Following standard protocols, we use the \emph{entity-level} F1-score as the metric~\cite{lison2020named,ma2016end} and use the \textbf{BIO} schema~\cite{tjong2002introduction,li2021bertifying,lison2020named}, which labels the beginning token of an entity as \texttt{B-X} and the other tokens inside that entity as \texttt{I-X}, while non-entity tokens are marked as \texttt{O}. For methods that predict token-level labels (\eg MV), we transform token-level predictions into entity-level predictions when calculating the F1 score. Since the BERT tokenizer may separate a word into multiple subwords, for each word we use the result of its first token as its prediction.
\subsubsection{Evaluation Results}
Table~\ref{tab:seq_main2} presents the main results of different methods on sequence tagging tasks.
For label models, we conclude that considering dependency relationships among token-level labels during learning generally leads to better performance, as HMM-based models achieve the best performance on 7 of the 8 datasets. One exception is the MIT-Restaurant dataset, where the weak labels have very small coverage. In this case, the simple majority-voting-based methods achieve superior performance compared with the more complex probabilistic models. For end models, surprisingly, directly training a neural model with weak labels \emph{does not guarantee} a performance gain. Such a phenomenon arises when the quality of the LFs is poor (\eg, MIT-Restaurant, Laptop-Review). Under this circumstance, the weak labels generated through LFs are often noisy and incomplete~\cite{liang2020bond,liu2021noisy}, and the end model can easily overfit to them. As a result, there is still a significant performance gap between the results trained with gold labels and with weak labels, which motivates future research on designing methods robust to the induced noise.

\begin{table*}[t]
\centering
\caption{\textbf{Sequence Tagging.} Comparisons among different methods. The number stands for the F1 score. Each metric value is averaged over 5 runs. {\textcolor{red}{red}} and {\color{blue}{blue}} indicate the best and second best result for each end model respectively, and \colorbox{lightgray!60}{gray} is the best weak supervision method. The detailed results including precision, recall and standard deviations are in App.~\ref{sec:seq_app}.}
\scalebox{0.57}{
\begin{tabular}{ l c c c c c c c c c }
\toprule
\textbf{End Model ($\downarrow$)} &\textbf{Label Model ($\downarrow$)} & \textbf{CoNLL-03} & \textbf{WikiGold} & \textbf{BC5CDR} & \textbf{NCBI-Disease}& \textbf{Laptop-Review} & \textbf{MIT-Restaurant} & \textbf{MIT-Movies} & \textbf{OntoNotes 5.0} \\
\midrule
\multirow{8}{*}{--} & {MV} & 60.36 & 52.24 & 83.49 & \blue{78.44} & \blue{73.27} & \colorbox{lightgray!60}{\red{48.71}} & 59.68& 58.85 \\
& {WMV} & 60.26 & 52.87 & 83.49 & \blue{78.44} & \blue{73.27} & \blue{48.19} & 60.37 & 57.58 \\
& {DS} & 46.76 & 42.17 & 83.49 & \blue{78.44} & \blue{73.27} & 46.81 & 54.06 & 37.70 \\
& {DP} & 62.43 & 54.81 & \blue{83.50} & \blue{78.44} & \blue{73.27} & 47.92 & 59.92 & \blue{61.85} \\
& {MeTaL} & 60.32 & 52.09 & 83.50 & \blue{78.44} & 64.36 & 47.66 & 56.60 & 58.27 \\
&{FS} & \blue{62.49} & \blue{58.29} & 56.71 & 40.67 & 28.74 & 13.86 & 43.04 & 5.31 \\
& {HMM} & 62.18 & 56.36 & 71.57 & 66.80 & \colorbox{lightgray!60}{\red{73.63}} & 42.65 & \blue{60.56} & 55.67 \\
& {CHMM} & \red{63.22} & \red{58.89} & \colorbox{lightgray!60}{\red{83.66}} & \colorbox{lightgray!60}{\red{78.74}} & 73.26 & 47.34 & \red{61.38} & \red{64.06} \\
\midrule[0.05pt] \midrule[0.05pt]
\multirow{10}{*}{LSTM-CNN} & {Gold} & 87.46& 80.45 & 78.59 & 79.39 & 71.25 & 79.18 & 87.07 & 79.52 \\
\cmidrule(lr){2-2}
& {MV} & 66.33 & 58.27 & \blue{74.75} & 72.44 & 63.52 & \blue{41.70} & \blue{62.41} & 61.92 \\
& {WMV} & 64.60 & 55.39 & 74.31 & 72.21 & 63.02 & 41.27 & 61.79 & 59.22 \\
& {DS} & 50.60 & 40.61 & \red{75.37} & \red{72.86} & \red{63.96} & 41.21 & 55.99 & 44.92 \\
& {DP} & \red{67.15} & 57.89 & \blue{74.79} & \blue{72.50} & 62.59 & \blue{41.62} & 62.29 & \red{63.82} \\
& {MeTaL} & 65.05 & 56.31 & 74.66 & 72.42 & \blue{63.87} & 41.48 & 62.10 & 60.43 \\
& {FS} & 66.49 & \blue{60.49} & 54.49 & 44.90 & 28.35 & 13.09 & 45.77 & 43.25 \\
& {HMM} & 66.18 & \red{62.51} & 64.07 & 59.12 & 62.57 & 37.90 & 61.94 & 59.43 \\
& {CHMM} & 66.67 & 61.34 & 74.54 & 72.15 & 62.28 & 41.59 &
\red{62.97} & \blue{63.71} \\ \midrule[0.05pt] \midrule[0.05pt] \multicolumn{2}{c}{LSTM-ConNet} & 66.02 & 58.04 & 72.04 & 63.04 & 50.36 & 39.26 & 60.46 & 60.58 \\\midrule[0.05pt] \midrule[0.05pt] \multirow{10}{*}{BERT} & {Gold} & 89.41 & 87.21 & 82.49 & 84.05 & 81.22 & 78.85 & 87.56 & 84.11 \\ \cmidrule(lr){2-2} & {MV} & 67.08 & 63.17 & \blue{77.93} & 77.93 &71.12 & \red{42.95} & 63.71 & 63.97 \\ & {WMV} & 65.96& 61.28 & 77.76 & 78.53 & \blue{71.60} & {42.62} & 63.44 & 61.63 \\ & {DS} & 54.04 & 49.09 & 77.57 & \blue{78.69} & 71.41 & 42.26 & 58.89 & 48.55 \\ & {DP} & 67.66 & 62.91 & 77.67 & 78.18 & 71.46 & 42.27 & {63.92} & \blue{65.16} \\ & {MeTaL} & 66.34 & 61.74 & 77.80 & \red{79.02} & \red{71.80} & 42.26 & \blue{64.19} & 63.08 \\ & {FS} & 67.54 & \colorbox{lightgray!60}{\red{66.58}} & 62.89 & 46.50 & 38.57 & 13.80 & 49.79 & 49.63 \\ & {HMM} & \colorbox{lightgray!60}{\red{68.48}} & 64.25 & 68.70 & 65.52 & 71.51 & 39.51 & 63.38 & 61.29 \\ & {CHMM} & \blue{68.30} & \blue{65.16} & \red{77.98} & 78.20 & 71.17 & \blue{42.79} & \colorbox{lightgray!60}{\red{64.58}} & \colorbox{lightgray!60}{\red{66.03}} \\ \midrule[0.05pt] \midrule[0.05pt] \multicolumn{2}{c}{BERT-ConNet} & 67.83 & 64.18 & 72.87 & 71.40 & 67.32 & 42.37 & 64.12 & 60.36 \\ \bottomrule \end{tabular} } \label{tab:seq_main2} \end{table*} \section{Conclusion and Future Work} We introduce \textsc{Wrench}\xspace, a comprehensive benchmark for weak supervision. It includes 22 datasets for classification and sequence tagging with a wide range of domains, modalities, and sources of supervision. Through extensive comparisons, we conclude that designing general-purpose weak supervision methods still remains challenging. We believe that \textsc{Wrench}\xspace provides an increasingly needed foundation for addressing this challenge. In addition, \textsc{Wrench}\xspace provides procedural labeling function generators for systematic study of various types of weak supervision sources. Based on the generators, we study a range of aspects of weak supervision, in order to help understand the weak supervision problem and motivate future research directions. For future work, we plan to include more weak supervision methods and novel tasks, covering more aspects of weak supervision. \section*{Checklist} \begin{enumerate} \item For all authors... \begin{enumerate} \item Do the main claims made in the abstract and introduction accurately reflect the paper's contributions and scope? \answerYes{} \item Did you describe the limitations of your work? \answerYes{} \item Did you discuss any potential negative societal impacts of your work? \answerNA{} \item Have you read the ethics review guidelines and ensured that your paper conforms to them? \answerYes{} \end{enumerate} \item If you are including theoretical results... \begin{enumerate} \item Did you state the full set of assumptions of all theoretical results? \answerNA{} \item Did you include complete proofs of all theoretical results? \answerNA{} \end{enumerate} \item If you ran experiments... \begin{enumerate} \item Did you include the code, data, and instructions needed to reproduce the main experimental results (either in the supplemental material or as a URL)? \answerYes{as URLs} \item Did you specify all the training details (e.g., data splits, hyperparameters, how they were chosen)? \answerYes{} \item Did you report error bars (e.g., with respect to the random seed after running experiments multiple times)? 
\answerYes{} \item Did you include the total amount of compute and the type of resources used (e.g., type of GPUs, internal cluster, or cloud provider)? \answerYes{} \end{enumerate} \item If you are using existing assets (e.g., code, data, models) or curating/releasing new assets... \begin{enumerate} \item If your work uses existing assets, did you cite the creators? \answerYes{} \item Did you mention the license of the assets? \answerYes{} \item Did you include any new assets either in the supplemental material or as a URL? \answerYes{as URLs} \item Did you discuss whether and how consent was obtained from people whose data you're using/curating? \answerNA{} \item Did you discuss whether the data you are using/curating contains personally identifiable information or offensive content? \answerNA{ It does not contain personally identifiable information or offensive content.} \end{enumerate} \item If you used crowdsourcing or conducted research with human subjects... \begin{enumerate} \item Did you include the full text of instructions given to participants and screenshots, if applicable? \answerNA{} \item Did you describe any potential participant risks, with links to Institutional Review Board (IRB) approvals, if applicable? \answerNA{} \item Did you include the estimated hourly wage paid to participants and the total amount spent on participant compensation? \answerNA{} \end{enumerate} \end{enumerate} \section{Additional Results} \label{sec:results} \subsection{Classification} The detailed comparisons over the collected classification datasets are in Table~\ref{tab:classification}-\ref{tab:classification_continue}. \begin{table*}[h] \centering \caption{\textbf{Classification}: detailed comparison. Each metric value is averaged over 5 runs. \underline{Underline} indicates using soft label for training end model. {\textcolor{red}{red}} and {\color{blue}{blue}} indicate the best and second best result for each end model respectively, and \colorbox{lightgray!60}{gray} is the best weak supervision method in this table.} \scalebox{0.5}{ \begin{tabular}{ c c c c c c c c c c c c c c c c} \toprule \textbf{End} & \textbf{Label} & \textbf{IMDb} & \textbf{Yelp} & \textbf{Youtube}& \textbf{SMS} & \textbf{AGNews} & \textbf{TREC} & \textbf{Spouse} & \textbf{CDR} & \textbf{SemEval} & \textbf{ChemProt} & \textbf{Commercial} & \textbf{Tennis Rally} & \textbf{Basketball} & \textbf{Census} \\ \textbf{Model ($\downarrow$)} & \textbf{Model ($\downarrow$)} & (Acc.) & (Acc.) & (Acc.) & (F1) & (Acc.) & (Acc.) & (F1) &(F1)& (Acc.) & (Acc.) 
& (F1) & (F1) & (F1) & (F1)\\ \midrule \multirow{12}{*}{--} & \multirow{2}{*}{MV} &\color{red}{71.04} &\color{red}{70.21} &\color{red}{84.00} &\color{red}{23.97} &63.84 &\color{blue}{60.80} &20.81 &60.31 &\color{red}{77.33} &49.04 &85.28 &81.00 &16.33 & 32.80 \\ & &(0.00) & (0.00) & (0.00) & (0.00) & (0.00) & (0.00) &(0.00) & (0.00) & (0.00) & (0.00) & (0.00) & (0.00) & (0.00) & (0.00)\\\cmidrule(lr){2-16} & \multirow{2}{*}{WMV} &\color{red}{71.04} &68.50 &78.00 &\color{red}{23.97} &\color{red}{64.00} &57.20 &20.53 &52.12 &\color{blue}{71.00} &\color{red}{52.08} &83.80 &\color{red}{82.61} &13.13 &9.99 \\ & & (0.00) & (0.00) & (0.00) & (0.00) & (0.00) & (0.00)& (0.00)& (0.00) & (0.00) & (0.00) & (0.00) & (0.00) & (0.00)& (0.00)\\\cmidrule(lr){2-16} & \multirow{2}{*}{DS} & 70.60 & \color{blue}{71.45} & \color{blue}{83.20} & 4.94 & 62.76 & 50.00 & 15.53 & 50.43 & \color{blue}{71.00} & 37.59 & \color{red}{88.24} & 80.65 & 13.79 & \color{red}{47.16} \\ & & (0.00) & (0.00) & (0.00) & (0.00) & (0.00) & (0.00)& (0.00)& (0.00) & (0.00) & (0.00) & (0.00) & (0.00) & (0.00)& (0.00)\\\cmidrule(lr){2-16} & \multirow{2}{*}{DP} &\color{blue}{70.96} &69.37 &82.00 &\color{blue}{23.78} &\color{blue}{63.90} &\color{red}{64.20} &21.12 &\color{blue}{63.51} &\color{blue}{71.00} &47.42 &77.29 &\color{blue}{82.55} &\color{red}{17.39} & 22.66 \\ & & (0.00) & (0.03) & (2.02) & (0.89) & (0.08) &(0.51)& (0.08) & (0.07) & (0.00) & (0.29) & (0.00) & (0.00) & (0.00)& (0.02)\\\cmidrule(lr){2-16} & \multirow{2}{*}{MeTaL} &70.96 &68.30 &\color{red}{84.00} &7.06 &62.27 &57.60 &\colorbox{lightgray!60}{\red{46.62}} &\colorbox{lightgray!60}{\red{69.61}} &\color{blue}{71.00} &\color{blue}{51.96} &\color{blue}{88.20} &82.52 &13.13 &\color{blue}{44.48} \\ & & (0.59) &(0.43) & (0.00) & (0.00) & (0.27) & (0.00) &(0.00) & (0.01) & (0.00) & (0.00) &(0.00) & (0.04) &(0.00)&(2.34) \\\cmidrule(lr){2-16} & \multirow{2}{*}{FS} &70.36 &68.68 &76.80 &0.00 &60.98 &31.40 &\color{blue}{34.30} &20.18 &31.83 &43.31 &77.31 &82.29 &\color{blue}{17.25} & 15.33 \\ & & (0.00) & (0.00) & (0.00) & (0.00) & (0.00) & (0.00) &(0.00) & (0.00) & (0.00) & (0.00) &(0.00) & (0.00) & (0.00)& (0.00) \\\midrule[0.05pt] \midrule[0.05pt] \multirow{26}{*}{LR} & \multirow{2}{*}{Gold} & 81.56 &89.16 &94.24 &93.79 &86.51 &68.56 & -- &63.09 &93.23 &77.96 &91.01 &82.73 &62.82 &67.12 \\ & &(0.20) &(0.27) &(0.41) &(0.61) &(0.28) &(1.15) & -- &(0.36) &(0.31) &(0.25) &(0.12) &(0.65) &(1.57) &(0.52) \\\cmidrule(lr){2-16} & \multirow{2}{*}{MV} & 76.93 &86.21 &90.72 &\color{red}{90.77} &82.69 &57.56 &23.99 &54.44 &82.83 &\color{blue}{55.84} &\color{red}{90.62} &83.59 &26.31 &47.96 \\ & &(0.45) &(0.27) &(1.42) &(1.02) &(0.05) &(4.99) &(0.98) &(0.54) &(1.91) &(0.65) &(0.08) &(0.07) &(4.60) &(4.23) \\\cmidrule(lr){2-16} & \multirow{2}{*}{\underline{MV}} & \color{red}{77.26} &86.33 &\colorbox{lightgray!60}{\red{93.36}} &90.07 &82.69 &62.68 &22.45 &\color{blue}{56.69} &\colorbox{lightgray!60}{\red{85.73}} &\colorbox{lightgray!60}{\red{56.73}} &90.25 &82.15 &\color{blue}{30.67} &51.56 \\ & &(0.14) &(0.19) &(0.93) &(2.62) &(0.14) &(4.56) &(2.79) &(0.65) &(1.08) &(0.33) &(0.28) &(0.23) &(8.52) &(2.59) \\\cmidrule(lr){2-16} & \multirow{2}{*}{WMV} & 76.63 &85.23 &88.80 &90.25 &82.88 &52.88 &20.24 &53.62 &72.70 &54.91 &89.94 &83.57 &23.48 &23.94 \\ & &(0.22) &(0.21) &(0.25) &(0.48) &(0.26) &(4.50) &(2.81) &(0.98) &(4.31) &(0.36) &(0.15) &(0.00) &(16.23) &(14.25) \\\cmidrule(lr){2-16} & \multirow{2}{*}{\underline{WMV}} & 77.03 &86.11 &\color{blue}{92.64} &90.08 &82.84 &\color{blue}{63.84} 
&23.23 &55.58 &83.87 &56.63 &\color{blue}{90.34} &82.38 &26.65 &30.12 \\ & &(0.38) &(0.20) &(0.41) &(1.20) &(0.05) &(7.60) &(2.08) &(1.70) &(2.13) &(0.49) &(0.17) &(0.22) &(8.40) &(12.97) \\\cmidrule(lr){2-16} & \multirow{2}{*}{DS} & 76.54 & 85.43 & 88.32 & 90.32 & 82.95 & 47.16 & 19.01 & 51.84 & 72.80 & 49.25 & 89.77 & 83.57 & 24.37 & 49.70 \\ & & (0.30) & (0.20) & (0.82) & (1.66) & (0.07) & (1.30) & (2.33) & (0.50) & (2.20) & (1.51) & (0.18) & (0.00) & (11.77) & (0.24) \\\cmidrule(lr){2-16} & \multirow{2}{*}{\underline{DS}} & 77.15 & 85.91 & 88.88 & 89.88 & 82.92 & 50.00 & 17.07 & 49.88 & 72.97 & 48.31 & 89.88 & 83.59 & 20.45 & 50.10 \\ & & (0.36) & (0.14) & (0.93) & (0.87) & (0.21) & (2.54) & (2.22) & (1.44) & (1.10) & (2.19) & (0.22) & (0.04) & (11.09) & (0.39) \\\cmidrule(lr){2-16} & \multirow{2}{*}{DP} & 76.90 &85.38 &90.00 &25.42 &82.04 &\color{red}{64.08} &24.75 &56.24 &72.80 &52.94 &87.44 &83.57 &24.69 &15.71 \\ & & (0.61) &(0.33) &(0.80) &(0.65) &(0.14) &(4.41) &(1.11) &(0.57) &(3.23) &(0.65) &(0.17) &(0.00) &(1.70) &(15.36) \\\cmidrule(lr){2-16} & \multirow{2}{*}{\underline{DP}} & 76.86 &84.98 &89.92 &43.33 &\color{blue}{83.21} &52.96 &22.80 &55.90 &\color{blue}{84.00} &55.15 &87.51 &\color{red}{83.68} &24.94 &21.02 \\ & & (0.27) &(0.36) &(0.93) &(7.04) &(0.27) &(3.11) &(3.68) &(0.74) &(2.35) &(0.54) &(0.18) &(0.00) &(2.70) &(13.55) \\\cmidrule(lr){2-16} & \multirow{2}{*}{MeTaL} &76.30 &86.32 &89.84 &89.13 &83.16 &59.52 &21.77 &56.52 &75.90 &54.60 &90.00 &\color{red}{83.68} &4.66 &\color{blue}{57.39} \\ & & (0.28) &(0.22) &(0.78) &(0.88) &(0.05) &(1.82) &(0.76) &(0.57) &(3.99) &(0.41) &(0.06) &(0.00) &(4.96) &(0.78) \\\cmidrule(lr){2-16} & \multirow{2}{*}{\underline{MeTaL}} &\color{blue}{77.18} &86.41 &88.00 &\color{blue}{90.76} &\color{red}{83.36} &54.64 &22.17 &\color{red}{57.80} &79.73 &55.68 &90.15 &83.57 &25.62 &\colorbox{lightgray!60}{\red{58.16}} \\ & & (0.20) &(0.22) &(2.01) &(0.83) &(0.30) &(3.98) &(1.43) &(0.42) &(2.67) &(0.59) &(0.12) &(0.00) &(17.00) &(0.72) \\\cmidrule(lr){2-16} & \multirow{2}{*}{FS} & 76.74 &\colorbox{lightgray!60}{\red{86.63}} &87.68 &66.04 &82.43 &34.24 &\color{blue}{28.69} &48.68 &31.83 &47.26 &87.18 &\color{blue}{83.64} &\color{red}{31.13} &26.53 \\ & & (0.79) &(0.17) &(0.78) &(5.54) &(0.21) &(1.99) &(1.96) &(0.60) &(0.00) &(0.22) &(0.19) &(0.05) &(2.25) &(15.68) \\ \cmidrule(lr){2-16} & \multirow{2}{*}{\underline{FS}} & 76.84 &\color{blue}{86.48} &88.72 &63.75 &82.86 &35.56 &\color{red}{31.69} &55.53 &40.13 &48.21 &89.21 &\color{red}{83.68} &25.41 &21.37 \\ & & (0.34) &(0.29) &(0.53) &(5.16) &(0.19) &(4.93) &(2.14) &(0.77) &(3.48) &(1.35) &(0.34) &(0.00) &(7.07) &(15.09) \\\midrule[0.05pt] \midrule[0.05pt] \multirow{26}{*}{MLP} & \multirow{2}{*}{Gold} &81.79 &89.19 &94.00 &94.45 &87.69 &66.04 & -- &63.02 &93.33 &80.15 &91.69 &81.48 &64.97 &67.13 \\ & &(0.32) &(0.31) &(0.44) &(0.59) &(0.18) &(4.05) & -- &(0.48) &(0.24) &(0.55) &(0.07) &(0.50) &(13.65) & (0.16) \\\cmidrule(lr){2-16} & \multirow{2}{*}{MV} &77.14 &84.24 &89.44 &89.03 &83.37 &61.40 &21.52 &56.42 &83.13 &\color{blue}{56.04} &\color{blue}{90.42} &81.85 &39.40 &54.62 \\ & &(0.13) &(1.19) &(0.74) &(0.82) &(0.27) &(3.10) &(0.99) &(0.86) &(1.50) &(0.59) &(0.27) &(0.16) &(4.82) &(3.78) \\\cmidrule(lr){2-16} & \multirow{2}{*}{\underline{MV}} &77.10 &84.91 &\color{blue}{90.16} &\colorbox{lightgray!60}{\red{91.91}} &83.41 &\color{blue}{63.88} &22.59 &\color{blue}{57.66} &\color{red}{85.53} &55.83 &\color{red}{90.55} &82.23 &39.84 &56.73 \\ & &(0.37) &(1.28) &(0.60) &(0.73) &(0.20) &(4.49) 
&(0.66) &(1.09) &(1.07) &(0.63) &(0.27) &(0.14) &(21.02) &(3.77) \\\cmidrule(lr){2-16} & \multirow{2}{*}{WMV} &76.66 &79.17 &88.16 &90.73 &83.62 &59.76 &18.71 &53.77 &72.37 &54.64 &88.59 &83.56 &38.75 &39.04 \\ & &(0.40) &(5.31) &(0.86) &(1.00) &(0.16) &(2.14) &(2.10) &(1.17) &(0.74) &(0.58) &(0.58) &(0.03) &(17.47) & (4.10) \\\cmidrule(lr){2-16} & \multirow{2}{*}{\underline{WMV}} &76.90 &85.45 &\color{red}{92.48} &\color{blue}{91.20} &83.54 &63.48 &19.70 &57.21 &\color{blue}{83.77} &\color{red}{56.52} &90.07 &82.64 &\color{blue}{40.73} &50.86 \\ & &(0.24) &(1.21) &(0.16) &(1.46) &(0.18) &(5.37) &(0.88) &(0.52) &(2.93) &(0.79) &(0.38) &(0.14) &(11.98) & (8.55) \\\cmidrule(lr){2-16} & \multirow{2}{*}{DS} & 76.64 & \color{blue}{86.00} & 88.00 & 88.63 & 83.45 & 47.28 & 17.13 & 51.96 & 73.60 & 48.10 & 88.73& 83.59& 22.79 & 51.19 \\ & & (0.37) & (0.29) & (0.98) & (0.48) & (0.22) & (1.65) & (0.35) & (0.41) & (1.01) & (0.64) & (0.60)& (0.04)& (12.00)& (1.30) \\\cmidrule(lr){2-16} & \multirow{2}{*}{\underline{DS}}& 77.18 & \color{red}{86.06} & 87.44 & 88.82 & \colorbox{lightgray!60}{\red{83.86}} & 49.92 & 16.42 & 51.14 & 72.93 & 44.64 & 89.86& 83.59 & 34.81& 50.06 \\ & & (0.38) & (0.23) & (0.82) & (0.61) & (0.23) & (1.03) & (0.59) & (0.45) & (2.46) & (0.43) & (0.16)& (0.04)& (19.01)& (0.53) \\\cmidrule(lr){2-16} & \multirow{2}{*}{DP} &76.42 &83.98 &90.00 &26.16 &83.05 &\colorbox{lightgray!60}{\red{68.40}} &21.65 &56.69 &72.83 &52.88 &88.40 &83.57 &37.50 &47.54 \\ & &(0.51) &(1.88) &(0.25) &(3.35) &(0.22) &(1.41) &(0.49) &(1.31) &(2.26) &(1.59) &(0.37) &(0.00) &(3.76) &(6.59) \\\cmidrule(lr){2-16} & \multirow{2}{*}{\underline{DP}} &76.77 &80.91 &90.08 &27.15 &83.71 &55.52 &23.77 &45.32 &79.23 &55.52 &88.68 &83.66 &40.70 &54.57 \\ & &(0.38) &(1.56) &(1.11) &(2.98) &(0.17) &(2.83) &(0.94) &(22.67) &(0.31) &(0.77) &(0.24) &(0.04) &(7.20) &(4.21) \\\cmidrule(lr){2-16} & \multirow{2}{*}{MeTaL} &76.35 & 85.61 &88.88 &88.07 &\color{blue}{83.78} &56.32 &20.84 &56.58 &73.00 &55.02 &89.73 &\color{blue}{83.70} &36.74 &\color{blue}{57.66} \\ & &(0.37) &(0.54) &(1.30) &(0.29) &(0.19) &(4.41) &(0.64) &(0.46) &(1.04) &(0.75) &(0.11) &(0.04) &(18.93) &(0.32) \\\cmidrule(lr){2-16} & \multirow{2}{*}{\underline{MeTaL}} &\colorbox{lightgray!60}{\red{77.61}} &85.19 &87.44 &91.10 &83.77 &63.80 &21.17 &\color{red}{58.17} &74.27 &55.52 &89.86 &83.56 &36.35 &\color{red}{57.84} \\ & &(0.36) &(0.16) &(0.90) &(0.97) &(0.21) &(1.25) &(0.49) &(0.21) &(2.87) &(0.82) &(0.08) &(0.03) &(14.00) &(0.83) \\\cmidrule(lr){2-16} & \multirow{2}{*}{FS} &76.78 &84.50 &86.32 &71.81 &83.43 &28.48 &\color{red}{30.55} &49.20 &31.83 &46.46 &88.20 &83.57 &38.53 & 21.93 \\ & &(15.99) &(1.33) &(1.35) &(4.99) &(0.22) &(1.00) &(2.06) &(0.40) &(0.00) &(0.46) &(0.37) &(0.12) &(9.83) &(0.21)\\ \cmidrule(lr){2-16} & \multirow{2}{*}{\underline{FS}} &\color{blue}{77.35} &83.95 &85.20 &37.54 &82.65 &25.60 &\color{blue}{30.37} &49.33 &32.50 &48.23 &89.59 &\colorbox{lightgray!60}{\red{83.77}} &\colorbox{lightgray!60}{\red{43.18}} & 39.03 \\ & &(0.42) &(0.81) &(0.91) &(16.98) &(0.22) &(3.72) &(2.72) &(1.30) &(1.17) &(1.22) &(0.09) &(0.09) &(7.79) & (1.76) \\\midrule[0.05pt]\midrule[0.05pt] \multicolumn{2}{c}{\multirow{2}{*}{Denoise}} & 76.22 &71.56 &76.56 &91.69 &83.45 &56.20 &22.47 &56.54 &80.83 &53.96 &\colorbox{lightgray!60}{\red{91.34}} &82.34 &33.73 &43.71 \\ & & (0.37) &(15.80) &(19.24) &(1.42) &(0.11) &(6.73) &(7.50) &(0.37) &(1.31) &(0.38) &(0.16) &(2.46) &(3.43) &(3.51) \\ \bottomrule \end{tabular} } \label{tab:classification} \end{table*} 
\begin{table*}[h] \centering \caption{Comparisons on textual datasets among pre-trained language model-based methods.} \scalebox{0.46}{ \begin{tabular}{ c c c c c c c c c c c c} \toprule \textbf{End Model ($\downarrow$)} & \textbf{Label Model ($\downarrow$)} & \textbf{IMDb} & \textbf{Yelp} & \textbf{Youtube}& \textbf{SMS} & \textbf{AGNews} & \textbf{TREC} & \textbf{Spouse} & \textbf{CDR} & \textbf{SemEval} & \textbf{ChemProt} \\ & & (Acc.) & (Acc.) & (Acc.) & (F1) & (Acc.) & (Acc.) & (F1) &(F1)& (Acc.) & (Acc.) \\ \midrule \multirow{26}{*}{B} & \multirow{2}{*}{Gold} &91.58 &95.48 &97.52 &96.96 &90.78 &96.24 & -- &65.39 &95.43 &89.76 \\ & &(0.31) &(0.53) &(0.64) &(0.66) &(0.49) &(0.61) & -- &(1.18) &(0.65) &(0.88) \\\cmidrule(lr){2-12} & \multirow{2}{*}{MV} &79.73 &82.26 &\color{red}{95.36} &94.56 &86.27 &66.56 &19.56 &57.16 &83.93 &56.09 \\ & &(2.60) &(3.50) &(1.71) &(1.88) &(0.53) &(2.31) &(1.22) &(0.83) &(1.74) &(1.08) \\\cmidrule(lr){2-12} & \multirow{2}{*}{\underline{MV}} &79.91 &85.64 &93.68 &\color{blue}{94.85} &86.62 &66.56 &19.43 &\color{red}{58.89} &\color{red}{85.03} &\color{red}{57.32} \\ & &(2.23) &(2.52) &(0.47) &(1.16) &(0.28) &(1.20) &(0.95) &(0.50) &(0.83) &(0.98) \\\cmidrule(lr){2-12} & \multirow{2}{*}{WMV} & \color{blue}{81.32} &81.40 &89.92 &91.79 &85.49 &54.64 &19.74 &53.60 &70.97 &55.40 \\ & &(1.35) &(5.06) &(1.51) &(2.67) &(0.63) &(4.85) &(5.48) &(3.25) &(0.24) &(1.02) \\\cmidrule(lr){2-12} & \multirow{2}{*}{\underline{WMV}} &80.70 &81.19 &93.76 &\color{red}{95.02} &86.66 &66.00 &19.34 &57.53 &82.47 &55.66 \\ & &(1.39) &(3.74) &(2.10) &(1.26) &(0.44) &(2.33) &(2.87) &(0.46) &(1.29) &(1.36) \\\cmidrule(lr){2-12} & \multirow{2}{*}{DS} & 80.25 & \color{blue}{88.59} & 92.88 & 91.98 & 86.69 & 46.36 & 16.42 & 50.01 & 71.67 & 44.37 \\ & & (2.23) & (1.25) & (0.78) & (1.00) & (0.35) & (3.39) & (0.60) & (0.30) & (0.66) & (0.53) \\\cmidrule(lr){2-12} & \multirow{2}{*}{\underline{DS}} & 78.79 & 88.57 & 89.36 & 93.06 & 86.59 & 48.40 & 16.23 & 50.49 & 71.70 & 45.71 \\ & & (1.59) & (2.01) & (2.56) & (1.30) & (0.38) & (0.95) & (0.04) & (0.48) & (0.81) & (1.46) \\\cmidrule(lr){2-12} & \multirow{2}{*}{DP} &80.35 &81.17 &\color{blue}{93.84} &29.97 &85.36 &\color{red}{68.64} &18.66 &\color{blue}{58.48} &71.07 &54.00 \\ & &(2.16) &(4.36) &(1.61) &(2.33) &(0.92) &(3.57) &(1.55) &(0.73) &(0.33) &(1.41) \\\cmidrule(lr){2-12} & \multirow{2}{*}{\underline{DP}} &80.82 &82.90 &93.60 &31.96 &86.55 &\color{blue}{68.40} &\color{blue}{28.74} &57.94 &\color{blue}{83.93} &\color{blue}{57.00} \\ & &(1.29) &(3.69) &(0.98) &(2.87) &(0.08) &(2.41) &(7.63) &(0.29) &(0.83) &(1.20) \\\cmidrule(lr){2-12} & \multirow{2}{*}{MeTaL} &80.02 &86.92 &92.32 &92.28 &\color{blue}{86.77} &58.28 &17.26 &58.48 &71.47 &55.48 \\ & &(2.46) &(3.52) &(1.44) &(2.01) &(0.29) &(1.95) &(0.73) &(0.90) &(0.57) &(1.33) \\\cmidrule(lr){2-12} & \multirow{2}{*}{\underline{MeTaL}} &81.23 &88.29 &92.48 &90.43 &\color{red}{86.82} &62.44 &17.18 &56.72 &70.80 &56.17 \\ & &(1.23) &(1.57) &(0.99) &(2.64) &(0.23) &(2.96) &(0.23) &(3.26) &(0.87) &(0.66) \\\cmidrule(lr){2-12} & \multirow{2}{*}{FS} &\color{red}{82.26} &87.76 &91.84 &11.62 &86.29 &27.60 &\color{red}{33.63} &4.29 &31.83 &45.66 \\ & &(1.41) &(1.30) &(2.10) &(11.39) &(0.49) &(0.00) &(18.57) &(8.59) &(0.00) &(0.45) \\ \cmidrule(lr){2-12} & \multirow{2}{*}{\underline{FS}} &81.20 &\color{red}{88.86} &91.60 &7.32 &85.51 &30.96 &9.14 &35.25 &31.83 &49.53 \\ & &(1.01) &(0.92) &(2.18) &(5.35) &(0.62) &(4.04) &(18.29) &(5.75) &(0.00) &(1.14) \\\midrule[0.05pt] \midrule[0.05pt] 
\multirow{24}{*}{BC} & \multirow{2}{*}{MV} &82.98 &89.22 &\colorbox{lightgray!60}{\red{98.00}} &\color{red}{97.01} &87.03 &\color{blue}{76.56} &32.39 &58.99 &\color{blue}{86.80} &\color{blue}{58.47} \\ & & (0.05) &(0.05) &(0.00) &(0.00) &(0.00) &(0.08) &(3.41) &(0.09) &(0.46) &(0.08)\\\cmidrule(lr){2-12} & \multirow{2}{*}{\underline{MV}} & 83.14 &89.64 &95.44 &\color{blue}{96.85} &87.14 &68.56 &42.71 &59.26 &86.13 &58.01 \\ & & (0.42) &(0.03) &(0.20) &(0.31) &(0.00) &(1.13) &(5.47) &(0.17) &(0.19) &(0.02)\\\cmidrule(lr){2-12} & \multirow{2}{*}{WMV} &83.69 &90.40 &93.44 &95.95 &86.25 &60.48 &36.27 &58.29 &82.90 &56.10 \\ & & (0.04) &(0.65) &(0.20) &(0.35) &(0.01) &(0.10) &(3.01) &(0.18) &(0.08) &(0.42)\\\cmidrule(lr){2-12} & \multirow{2}{*}{\underline{WMV}} &83.28 &87.87 &\color{blue}{97.20} &96.34 &87.22 &70.88 &32.49 &\color{blue}{59.55} &86.70 &57.93 \\ & & (0.12) &(0.00) &(0.00) &(0.31) &(0.00) &(1.14) &(1.54) &(0.09) &(0.22) &(0.00)\\\cmidrule(lr){2-12} & \multirow{2}{*}{DS} & \color{red}{91.54} & 90.84 & 94.16 & 93.90 & 87.19 & 53.36 & 23.33 & 52.09 & 72.50 & 49.65 \\ & & (0.54) & (0.30) & (0.20) & (0.05) & (0.0) & (0.29) & (0.70) & (0.03) & (0.00) & (0.68) \\\cmidrule(lr){2-12} & \multirow{2}{*}{\underline{DS}} & 80.48 & \color{blue}{91.12} & 93.04 & 95.37 & 87.06 & 51.72 & 24.76 & 51.73 & 72.83 & 49.43 \\ & & (0.0) & (0.11) & (0.20) & (0.08) & (0.01) & (1.17) & (0.57) & (0.04) & (0.00) & (1.15) \\\cmidrule(lr){2-12} & \multirow{2}{*}{DP} &\color{blue}{84.58} &88.44 &96.32 &33.70 &86.98 &\color{red}{78.72} &30.71 &\color{red}{60.46} &75.77 &57.51 \\ & & (0.08) &(0.03) &(0.16) &(0.00) &(0.39) &(0.43) &(9.78) &(0.11) &(1.33) &(0.02)\\\cmidrule(lr){2-12} & \multirow{2}{*}{\underline{DP}} &82.73 &91.02 &94.80 &36.44 &86.67 &72.40 &33.83 &58.47 &\colorbox{lightgray!60}{\red{88.77}} &\colorbox{lightgray!60}{\red{61.56}} \\ & &(0.03) &(0.13) &(0.00) &(0.00) &(0.00) &(0.00) &(0.00) &(0.16) &(0.13) &(0.06) \\\cmidrule(lr){2-12} & \multirow{2}{*}{MeTaL} &83.47 &89.76 &94.88 &95.62 &\color{blue}{87.26} &61.80 &35.84 &59.33 &79.20 &55.46 \\ & & (0.12) &(0.00) &(0.53) &(0.31) &(0.02) &(0.00) &(6.73) &(0.04) &(2.33) &(0.12)\\\cmidrule(lr){2-12} & \multirow{2}{*}{\underline{MeTaL}} &83.83 &90.68 &94.72 &93.75 &\color{red}{87.41} &71.20 &27.23 &59.14 &81.20 &57.85 \\ & &(0.14) &(0.05) &(0.16) &(0.00) &(0.01) &(0.36) &(2.80) &(0.04) &(0.64) &(0.26)\\\cmidrule(lr){2-12} & \multirow{2}{*}{FS} &84.40 &89.05 &94.80 &62.27 &87.16 &27.60 &\colorbox{lightgray!60}{\red{56.52}} &48.89 &31.83 &48.10 \\ & & (0.00) &(0.07) &(0.00) &(0.17) &(0.16) &(0.00) &(0.32) &(0.08) &(0.00) &(0.60)\\\cmidrule(lr){2-12} & \multirow{2}{*}{\underline{FS}} &82.64 &\color{red}{91.18} &96.16 &63.54 &86.57 &36.20 &\color{blue}{53.46} &55.69 &31.83 &49.35 \\ & &(0.19) &(0.03) &(0.20) &(4.71) &(0.00) &(0.00) &(0.13) &(0.03) &(0.00) &(0.00)\\\midrule[0.05pt] \midrule[0.05pt] \multirow{26}{*}{R} & \multirow{2}{*}{Gold} &93.25 &97.13 &95.68 &96.31 &91.39 &96.68 & -- &65.86 &93.23 &86.98 \\ & &(0.30) &(0.26) &(1.42) &(0.58) &(0.38) &(0.82) & -- &(0.60) &(1.83) &(1.49) \\\cmidrule(lr){2-12} & \multirow{2}{*}{MV} &85.76 &89.91 &\color{red}{96.56} &\color{blue}{94.17} &86.88 &66.28 &17.99 &55.07 &\color{blue}{84.00} &\color{blue}{56.85} \\ & &(0.70) &(1.76) &(0.86) &(2.88) &(0.98) &(1.21) &(1.99) &(3.47) &(0.84) &(1.91) \\\cmidrule(lr){2-12} & \multirow{2}{*}{\underline{MV}} &86.17 &87.87 &\color{blue}{95.60} &\color{red}{95.06} &87.14 &66.16 &\color{red}{21.68} &54.96 &\color{red}{84.13} &\color{red}{57.31} \\ & &(1.31) &(1.18) &(0.80) 
&(1.66) &(0.18) &(1.25) &(8.32) &(5.42) &(0.59) &(1.07) \\\cmidrule(lr){2-12} & \multirow{2}{*}{WMV} &86.06 &82.27 &92.96 &92.96 &86.70 &58.88 &16.14 &42.37 &67.47 &46.56 \\ & &(0.88) &(4.11) &(1.73) &(1.71) &(0.51) &(0.92) &(1.40) &(21.19) &(6.93) &(11.71) \\\cmidrule(lr){2-12} & \multirow{2}{*}{\underline{WMV}} &86.03 &86.06 &95.52 &93.96 &86.99 &63.64 &17.43 &54.88 &82.87 &55.57 \\ & &(1.03) &(3.97) &(0.99) &(1.11) &(0.37) &(1.94) &(1.21) &(3.82) &(2.49) &(0.78) \\\cmidrule(lr){2-12} & \multirow{2}{*}{DS} & 84.74 & \color{blue}{92.30} & 93.52 & 94.10 & 87.16 & 48.32 & 16.57 & 50.77 & 69.67 & 45.69 \\ & & (1.41) & (1.75) & (1.39) & (1.72) & (0.58) & (1.50) & (0.25) & (0.12) & (1.18) & (0.86) \\\cmidrule(lr){2-12} & \multirow{2}{*}{\underline{DS}} & 86.85 & 92.06 & 92.96 & 93.17 & 86.82 & 50.12 & 16.93 & 50.85 & 70.80 & 46.96 \\ & & (0.72) & (1.20) & (1.53) & (0.89) & (0.29) & (1.99) & (0.52) & (0.37) & (0.61) & (0.38) \\\cmidrule(lr){2-12} & \multirow{2}{*}{DP} &86.26 &89.59 &\color{blue}{95.60} &28.25 &86.81 &\color{red}{72.12} &17.62 &54.42 &70.57 &39.91 \\ & &(1.02) &(2.87) &(0.80) &(2.83) &(0.42) &(4.58) &(4.24) &(5.32) &(0.83) &(9.33) \\\cmidrule(lr){2-12} & \multirow{2}{*}{\underline{DP}} &84.86 &85.73 &94.48 &46.66 &\color{red}{87.65} &\color{blue}{66.80} &17.71 &\color{blue}{57.78} &72.60 &56.18 \\ & &(0.58) &(3.49) &(1.17) &(11.89) &(0.37) &(0.85) &(2.27) &(0.79) &(20.40) &(1.12) \\\cmidrule(lr){2-12} & \multirow{2}{*}{MeTaL} &84.98 &89.08 &94.56 &93.28 &\color{blue}{87.18} &60.04 &16.42 &53.68 &70.73 &54.59 \\ & &(1.07) &(3.71) &(0.65) &(1.57) &(0.45) &(1.18) &(2.79) &(4.00) &(0.68) &(0.77) \\\cmidrule(lr){2-12} & \multirow{2}{*}{\underline{MeTaL}} &\color{red}{87.23} &92.22 &94.08 &93.00 &86.87 &65.60 &\color{blue}{20.80} &\color{red}{59.19} &70.27 &42.02 \\ & &(0.97) &(1.14) &(1.70) &(1.42) &(0.37) &(1.67) &(7.13) &(0.35) &(0.88) &(11.91) \\\cmidrule(lr){2-12} & \multirow{2}{*}{FS} &86.95 &92.08 &93.84 &10.72 &86.69 &30.44 &0.00 &0.00 &31.83 &39.95 \\ & &(0.58) &(2.63) &(1.57) &(10.15) &(0.29) &(3.48) &(0.00) &(0.00) &(0.00) &(6.50) \\ \cmidrule(lr){2-12} & \multirow{2}{*}{\underline{FS}} &\color{blue}{87.10} &\color{red}{94.34} &93.20 &18.20 &86.17 &28.84 &0.00 &0.00 &31.83 &39.43 \\ & &(1.06) &(0.89) &(3.19) &(3.93) &(0.78) &(2.48) &(0.00) &(0.00) &(0.00) &(8.74) \\\midrule[0.05pt] \midrule[0.05pt] \multirow{24}{*}{RC} & \multirow{2}{*}{MV} &88.22 &94.23 &\color{red}{97.60} &96.67 &\color{blue}{88.15} &77.96 &\color{blue}{40.50} &60.38 &\color{blue}{86.20} &\color{red}{59.43} \\ & &(0.22) &(0.20) &(0.00) &(0.37) &(0.30) &(0.34) &(1.23) &(0.05) &(0.07) &(0.00) \\\cmidrule(lr){2-12} & \multirow{2}{*}{\underline{MV}} &\color{blue}{88.48} &91.06 &\color{red}{97.60} &96.82 &88.04 &74.28 &\color{red}{46.28} &\color{blue}{61.13} &83.93 &\color{blue}{59.32} \\ & &(0.00) &(0.39) &(0.00) &(0.29) &(0.06) &(0.75) &(1.59) &(0.12) &(0.20) &(0.06) \\\cmidrule(lr){2-12} & \multirow{2}{*}{WMV} &87.46 &92.53 &95.60 &\colorbox{lightgray!60}{\red{98.02}} &87.83 &70.28 &20.76 &56.27 &72.77 &55.58 \\ & &(0.05) &(0.06) &(0.00) &(0.38) &(0.13) &(1.09) &(1.44) &(0.10) &(0.48) &(0.34) \\\cmidrule(lr){2-12} & \multirow{2}{*}{\underline{WMV}} &88.00 &93.16 &\color{blue}{97.20} &97.27 &88.11 &72.08 &30.07 &58.66 &84.67 &58.31 \\ & &(0.00) &(0.03) &(0.00) &(0.36) &(0.05) &(1.01) &(2.35) &(0.46) &(0.00) &(0.09) \\\cmidrule(lr){2-12} & \multirow{2}{*}{DS} & 88.01 & 94.19 & 96.24 & 96.79 & \colorbox{lightgray!60}{\red{88.20}} & 59.40 & 21.34 & 51.37 & 71.70 & 46.75 \\ & & (0.56) & (0.18) & (0.41) & (0.27) 
& (0.11) & (0.42) & (1.19) & (0.60) & (0.07) & (0.27) \\\cmidrule(lr){2-12} & \multirow{2}{*}{\underline{DS}} & 87.77 & 95.01 & 95.52 & 97.10 & 87.21 & 57.96 & 28.75 & 52.25 & 77.03 & 49.23 \\ & & (0.05) & (0.25) & (0.30) & (0.31) & (0.0) & (0.15) & (1.03) & (0.25) & (0.52) & (0.11) \\\cmidrule(lr){2-12} & \multirow{2}{*}{DP} & 87.91 &94.09 &96.80 &31.71 &87.53 &\colorbox{lightgray!60}{\red{82.36}} &28.86 &\colorbox{lightgray!60}{\red{61.40}} &75.17 &52.86 \\ & &(0.15) &(0.06) &(0.00) &(0.29) &(0.03) &(0.08) &(10.02) &(0.07) &(0.95) &(0.06) \\\cmidrule(lr){2-12} & \multirow{2}{*}{\underline{DP}} &87.30 &94.40 &95.60 &64.22 &88.04 &74.00 &21.74 &59.86 &\color{red}{86.73} &55.96 \\ & &(0.66) &(0.37) &(0.00) &(0.00) &(0.00) &(0.77) &(0.00) &(0.17) &(0.08) &(0.06) \\\cmidrule(lr){2-12} & \multirow{2}{*}{MeTaL} &86.46 &93.11 &97.04 &\color{blue}{97.71} &87.85 &71.64 &23.99 &58.29 &70.90 &53.32 \\ & &(0.11) &(0.01) &(0.20) &(0.00) &(0.02) &(0.59) &(8.47) &(0.39) &(0.08) &(0.19) \\\cmidrule(lr){2-12} & \multirow{2}{*}{\underline{MeTaL}} &\colorbox{lightgray!60}{\red{88.86}} &93.95 &96.00 &96.18 &87.43 &\color{blue}{79.84} &21.89 &60.16 &84.20 &56.89 \\ & &(0.14) &(0.00) &(0.00) &(0.00) &(0.01) &(0.23) &(4.72) &(0.16) &(0.16) &(0.11) \\\cmidrule(lr){2-12} & \multirow{2}{*}{FS} &87.65 &\colorbox{lightgray!60}{\red{95.45}} &95.20 &82.24 &87.73 &38.80 &16.06 &38.14 &31.83 &48.60 \\ & &(0.06) &(0.10) &(0.00) &(0.93) &(0.12) &(0.33) &(0.15) &(6.62) &(0.00) &(0.11) \\\cmidrule(lr){2-12} & \multirow{2}{*}{\underline{FS}} &\color{blue}{88.48} &\color{blue}{95.33} &96.80 &65.65 &87.23 &33.80 &0.00 &0.00 &31.83 &39.89 \\ & &(0.00) &(0.06) &(0.00) &(0.00) &(0.00) &(0.00) &(0.00) &(0.00) &(0.00) &(0.00) \\ \bottomrule \end{tabular} } \label{tab:classification_continue} \end{table*} \subsection{Sequence Tagging} \label{sec:seq_app} The detailed comparisons over the collected sequence tagging datasets are in Table~\ref{tab:seq_full}. \begin{table*}[t] \centering \caption{\textbf{Sequence Tagging.} The detailed results for different methods. The number stands for the F1 score (Precision, Recall) with standard deviation in the bracket under each value. Each metric value is averaged over 5 runs. 
{\textcolor{red}{red}} and {\color{blue}{blue}} indicate the best and second best result for each end model respectively, and \colorbox{lightgray!60}{gray} is the best weak supervision method.} \scalebox{0.43}{ \begin{tabular}{ l c c c c c c c c c } \toprule \textbf{End Model ($\downarrow$)} &\textbf{Label Model ($\downarrow$)} & \textbf{CoNLL-03} & \textbf{WikiGold} & \textbf{BC5CDR} & \textbf{NCBI-Disease}& \textbf{Laptop-Review} & \textbf{MIT-Restaurant} & \textbf{MIT-Movies} & \textbf{Ontonotes 5.0} \\ \midrule \multirow{16}{*}{--} & \multirow{2}{*}{MV} & 60.36(59.06/61.72) & 52.24(48.95/56.00) & 83.49(91.69/76.64) & \blue{78.44(93.04/67.79) } & \blue{73.27(88.86/62.33) } & \colorbox{lightgray!60}{\red{48.71(74.25/36.24) }} & 59.68(69.92/52.05) & 58.85(54.17/64.40) \\ & & (0.00) & (0.00) & (0.00) & (0.00) & (0.00) & (0.00) & (0.00) & (0.00) \\ \cmidrule(lr){2-10} & \multirow{2}{*}{WMV} & 60.26(59.03/61.54) & 52.87(50.74/55.20) & 83.49(91.66/76.66) & \blue{78.44(93.04/67.79) } & \blue{73.27(88.86/62.33) } & \blue{48.19(73.73/35.80) } & 60.37(70.98/52.52) & 57.58(53.15/62.81) \\ & & (0.00) & (0.00) & (0.00) & (0.00) & (0.00) & (0.00) & (0.00) & (0.00) \\ \cmidrule(lr){2-10} & \multirow{2}{*}{DS} & 46.76(45.29/48.32) & 42.17(40.05/44.53) & 83.49(91.66/76.66) & \blue{78.44(93.04/67.79) } & \blue{73.27(88.86/62.33) } & 46.81(71.71/34.75) & 54.06(63.64/46.99) & 37.70(34.33/41.82) \\ & & (0.00) & (0.00) & (0.00) & (0.00) & (0.00) & (0.00) & (0.00) & (0.00) \\ \cmidrule(lr){2-10} & \multirow{2}{*}{DP} & 62.43(61.62/63.26) & 54.81(53.10/56.64) & \blue{83.50(91.69/76.65) } & \blue{78.44(93.04/67.79) } & \blue{73.27(88.86/62.33) } & 47.92(73.24/35.61) & 59.92(70.65/52.01) & \blue{61.85(57.44/66.99) } \\ & & (0.22) & (0.13) & (0.00) & (0.00) & (0.00) & (0.00) & (0.43) & (0.19) \\ \cmidrule(lr){2-10} & \multirow{2}{*}{MeTaL} & 60.32(59.07/61.63) & 52.09(50.31/54.03) & 83.50(91.66/76.67) & \blue{78.44(93.04/67.79) } & 64.36(83.21/53.63) & 47.66(73.40/35.29) & 56.60(72.28/47.70) & 58.27(54.10/63.14) \\ & & (0.08) & (0.23) & (0.00) & (0.00) & (17.81) & (0.00) & (7.71) & (0.48) \\ \cmidrule(lr){2-10} & \multirow{2}{*}{FS} & \blue{62.49(63.25/61.76) } & \blue{58.29(62.77/54.40) } & 56.71(88.03/41.83) & 40.67(72.24/28.30) & 28.74(60.59/18.84) & 13.86(84.10/7.55) & 43.04(77.73/29.75) & 5.31(2.87/35.74) \\ & & (0.00) & (0.00) & (0.00) & (0.00) & (0.00) & (0.00) & (0.00) & (0.00) \\ \cmidrule(lr){2-10} & \multirow{2}{*}{HMM} & 62.18(66.42/58.45) & 56.36(61.51/52.00) & 71.57(93.48/57.98) & 66.80(96.79/51.00) & \colorbox{lightgray!60}{\red{73.63(89.30/62.63) }} & 42.65(71.44/30.40) & \blue{60.56(75.04/50.76) } & 55.67(57.95/53.57) \\ & & (0.00) & (0.00) & (0.00) & (0.00) & (0.00) & (0.00) & (0.00) & (0.00)\\ \cmidrule(lr){2-10} & \multirow{2}{*}{CHMM} & \red{63.22(61.93/64.56) } & \red{58.89(55.71/62.45) } & \colorbox{lightgray!60}{\red{83.66(91.76/76.87) }} & \colorbox{lightgray!60}{\red{78.74(93.21/68.15) }} & 73.26(88.79/62.36) & 47.34(73.05/35.02) & \red{61.38(73.00/52.96) } & \red{64.06(59.70/69.09) } \\ & & (0.26) & (0.97) & (0.04) & (0.10) & (0.13) & (0.57) & (0.10) & (0.07) \\ \midrule[0.05pt] \midrule[0.05pt] \multirow{20}{*}{LSTM-CNN-MLP} & \multirow{2}{*}{Gold} & 87.46(87.72/87.19) & 80.45(80.80/80.11) & 78.02(79.80/76.34) & 79.41(80.94/77.97) & 69.83(74.51/65.73) & 77.80(78.42/77.19) & 86.18(87.05/85.33) & 79.52(79.91/79.14) \\ & & (0.35) & (1.46) & (0.20) & (0.54) & (0.51) & (0.28) & (0.40) & (0.40) \\ \cmidrule(lr){2-10} & \multirow{2}{*}{MV} & 66.33(67.54/65.19) & 58.27(56.65/60.00) & 
\red{74.56(79.70/70.05) } & 70.54(80.95/62.51) & 62.32(74.03/53.81) & \red{41.30(63.57/30.59) } & 61.50(72.78/53.25) & 60.04(56.89/63.57) \\ & & (0.52) & (0.67) & (0.21) & (0.79) & (1.24) & (0.37) & (0.22) & (0.53) \\ \cmidrule(lr){2-10} & \multirow{2}{*}{WMV} & 64.60(65.31/63.91) & 53.86(54.01/53.76) & 74.29(79.95/69.39) & 70.94(81.81/62.64) & 61.73(72.92/53.54) & 40.60(62.48/30.07) & 61.31(72.62/53.05) & 58.47(55.46/61.83) \\ & & (0.73) & (0.48) & (0.16) & (0.81) & (0.66) & (0.26) & (0.25) & (0.09) \\ \cmidrule(lr){2-10} & \multirow{2}{*}{DS} & 50.60(49.35/51.90) & 40.01(38.00/42.24) & 74.21(80.15/69.11) & 70.71(81.38/62.53) & \red{63.12(74.62/54.70) } & 39.86(61.98/29.37) & 54.59(67.94/45.63) & 41.13(40.55/41.75) \\ & & (0.72) & (3.13) & (0.34) & (0.67) & (0.59) & (0.23) & (0.45) & (0.98) \\ \cmidrule(lr){2-10} & \multirow{2}{*}{DP} & \red{66.98(68.95/65.13) } & 57.87(58.34/57.44) & 74.26(79.09/70.00) & 70.75(80.69/63.06) & 61.02(72.90/52.47) & \blue{41.06(63.10/30.43) } & 61.05(72.66/52.64) & \blue{62.68(59.51/66.21) } \\ & & (0.50) & (1.27) & (0.33) & (1.35) & (0.76) & (0.17) & (0.45) & (0.06) \\ \cmidrule(lr){2-10} & \multirow{2}{*}{MeTaL} & 65.05(66.20/63.94) & 56.31(55.94/56.69) & \blue{74.37(79.86/69.62) } & \blue{71.23(80.59/63.82) } & 62.43(74.18/53.91) & 40.97(62.58/30.46) & 61.02(72.44/52.71) & 59.15(55.99/62.68) \\ & & (0.73) & (1.64) & (0.20) & (0.78) & (0.47) & (0.60) & (0.32) & (0.45)\\ \cmidrule(lr){2-10} & \multirow{2}{*}{FS} & 66.49(69.13/64.05) & 59.80(65.71/54.88) & 54.37(77.11/42.00) & 42.70(71.11/30.52) & 27.08(54.14/18.13) & 13.09(74.74/7.17) & 45.77(75.50/32.84) & 41.54(44.13/39.48) \\ & & (0.25) & (1.65) & (0.42) & (1.03) & (3.20) & (0.30) & (0.61) & (2.06) \\ \cmidrule(lr){2-10} & \multirow{2}{*}{HMM} & 64.85(69.48/60.81) & \blue{60.87(67.45/55.47) } & 64.07(79.54/53.67) & 57.80(81.75/44.71) & \blue{62.53(74.62/53.81) } & 37.47(60.36/27.17) & \blue{61.55(76.16/51.65) } & 58.17(61.13/55.49) \\ & & (1.41) & (0.95) & (0.38) & (0.60) & (0.48) & (0.50) & (0.24) & (0.23) \\ \cmidrule(lr){2-10} & \multirow{2}{*}{CHMM} & \blue{66.67(67.75/65.64) } & \red{61.34(60.49/62.24) } & 74.29(79.55/69.70) & \red{71.45(80.90/64.01) } & 62.18(73.86/53.72) & 40.97(63.11/30.33) & \red{62.05(73.57/53.65) } & \red{63.70(60.58/67.17) } \\ & & (0.25) & (1.79) & (0.11) & (0.72) & (0.84) & (0.17) & (0.23) & (0.45) \\ \midrule[0.05pt] \midrule[0.05pt] \multirow{22}{*}{LSTM-CNN-CRF} & \multirow{2}{*}{Gold} & 86.80(86.80/86.80) & 79.79(79.79/79.79) & 78.59(80.90/76.41) & 79.39(80.34/78.48) & 71.25(76.37/66.80) & 79.18(79.68/78.69) & 87.07(87.49/86.64) & 79.42(80.14/78.72) \\ & & (0.74) & (0.49) & (0.70) & (0.59) & (1.80) & (0.20) & (0.19) & (2.76) \\ \cmidrule(lr){2-10} & \multirow{2}{*}{MV} & 65.97(67.14/64.85) & 57.04(54.91/59.36) & 74.75(79.90/70.22) & 72.44(82.56/64.60) & 63.52(75.14/55.01) & \red{41.70(63.92/30.95) } & \blue{62.41(74.45/53.72) } & 61.92(59.47/64.59) \\ & & (0.81) & (1.33) & (0.60) & (1.44) & (0.96) & (0.21) & (0.28) & (0.55) \\ \cmidrule(lr){2-10} & \multirow{2}{*}{WMV} & 63.76(63.76/63.76) & 55.39(54.30/56.53) & 74.31(79.59/69.69) & 72.21(83.05/63.89) & 63.02(75.36/54.15) & 41.27(63.20/30.64) & 61.79(73.72/53.19) & 59.22(56.76/61.91) \\ & & (1.06) & (0.95) & (0.63) & (1.33) & (0.98) & (0.31) & (0.42) & (0.31) \\ \cmidrule(lr){2-10} & \multirow{2}{*}{DS} & 49.74(48.66/50.91) & 40.61(38.31/43.20) & \red{75.37(80.88/70.56) } & \red{72.86(82.69/65.15) } & \red{63.96(75.27/55.62) } & 41.21(63.02/30.61) & 55.99(68.63/47.29) & 44.92(42.91/47.16) \\ & & (1.41) & (1.89) & (0.28) 
& (1.01) & (0.79) & (0.34) & (0.50) & (1.96) \\ \cmidrule(lr){2-10} & \multirow{2}{*}{DP} & \red{67.15(67.47/66.83) } & 57.89(57.56/58.24) & \blue{74.79(80.48/69.91) } & \blue{72.50(82.83/64.48) } & 62.59(74.16/54.15) & \blue{41.62(63.43/30.97) } & 62.29(74.31/53.63) & \red{63.82(61.11/66.80) } \\ & & (0.69) & (1.99) & (0.68) & (0.86) & (0.83) & (0.47) & (0.22) & (0.29) \\ \cmidrule(lr){2-10} & \multirow{2}{*}{MeTaL} & 64.48(65.77/63.29) & 55.37(54.26/56.53) & 74.66(79.95/70.03) & 72.42(83.41/64.01) & \blue{63.87(75.34/55.44) } & 41.48(63.09/30.90) & 62.10(73.97/53.52) & 60.43(57.99/63.08) \\ & & (0.85) & (1.69) & (0.88) & (1.44) & (1.53) & (0.45) & (0.48) & (0.31) \\ \cmidrule(lr){2-10} & \multirow{2}{*}{FS} & 66.21(68.71/63.90) & \blue{60.49(65.46/56.27) } & 54.49(77.53/42.02) & 44.90(74.39/32.19) & 28.35(54.68/19.20) & 12.74(71.00/7.00) & 45.62(77.19/32.38) & 43.25(46.46/40.56) \\ & & (0.79) & (3.30) & (0.47) & (1.15) & (1.61) & (0.13) & (0.44) & (0.53) \\ \cmidrule(lr){2-10} & \multirow{2}{*}{HMM} & 66.18(70.66/62.24) & \red{62.51(71.09/55.79) } & 63.68(77.69/53.98) & 59.12(84.26/45.55) & 62.57(74.21/54.09) & 37.90(62.48/27.20) & 61.94(76.85/51.89) & 59.43(61.53/57.47) \\ & & (1.27) & (1.21) & (0.71) & (1.15) & (0.30) & (0.56) & (0.44) & (0.62) \\ \cmidrule(lr){2-10} & \multirow{2}{*}{CHMM} & \blue{66.58(67.05/66.17) } & 59.90(57.38/62.67) & 74.54(80.49/69.44) & 72.15(82.51/64.12) & 62.28(74.15/53.69) & 41.59(63.67/30.89) & \red{62.97(74.82/54.36) } & \blue{63.71(61.40/66.20) } \\ & & (0.75) & (2.86) & (0.50) & (1.42) & (0.69) & (0.40) & (0.30) & (0.36) \\ \midrule[0.05pt] \midrule[0.05pt] \multicolumn{2}{c}{\multirow{2}{*}{LSTM-ConNet}} & 66.02(67.98/64.19) & 58.04(61.10/55.36) & 72.04(77.71/67.18) & 63.04(74.55/55.16) & 50.36(63.04/42.73) & 39.26(61.74/28.78) & 60.46(75.61/50.38) & 60.58(59.43/61.83) \\ & & (0.95) & (1.60) & (0.54) & (12.69) & (7.74) & (0.46) & (0.90) & (0.46) \\ \midrule[0.05pt] \midrule[0.05pt] \multirow{20}{*}{BERT-MLP} & \multirow{2}{*}{Gold} & 89.41(90.06/88.76) & 87.21(87.88/86.56) & 82.49(86.67/78.70) & 84.05(84.08/84.03) & 81.22(83.67/78.96) & 78.85(78.74/78.96) & 87.56(87.57/87.54) & 84.11(83.11/85.14) \\ & & (0.21) & (0.65) & (0.28) & (0.29) & (1.20) & (0.33) & (0.21) & (0.55)\\ \cmidrule(lr){2-10} & \multirow{2}{*}{MV} & 67.08(68.35/65.86) & 63.17(64.15/62.29) & \blue{77.93(84.26/72.50) } & 77.93(85.84/71.38) & 69.88(75.99/64.84) & 41.89(62.80/31.43) & 63.20(73.74/55.35) & 63.86(61.26/66.71) \\ & & (0.71) & (2.15) & (0.43) & (0.73) & (1.17) & (0.59) & (0.70) & (0.60) \\ \cmidrule(lr){2-10} & \multirow{2}{*}{WMV} & 65.96(66.88/65.07) & 61.28(63.88/59.25) & 77.76(84.63/71.94) & 78.53(86.44/71.97) & \blue{71.60(76.96/66.95) } & \red{42.40(62.88/31.98) } & 62.90(72.32/55.66) & 61.63(59.19/64.29) \\ & & (0.52) & (2.18) & (0.47) & (0.85) & (0.68) & (0.57) & (0.47) & (0.48) \\ \cmidrule(lr){2-10} & \multirow{2}{*}{DS} & 54.04(54.24/53.91) & 49.09(46.69/51.79) & 77.57(84.62/71.62) & \blue{78.69(86.37/72.27) } & 71.41(76.00/67.38) & 41.14(61.70/30.86) & 58.62(68.60/51.19) & 45.32(42.74/48.24) \\ & & (0.90) & (1.66) & (0.20) & (0.55) & (0.88) & (0.72) & (0.56) & (2.14)\\ \cmidrule(lr){2-10} & \multirow{2}{*}{DP} & 67.66(68.82/66.55) & 62.91(64.44/61.49) & 77.67(83.87/72.33) & 78.18(85.91/71.74) & 70.86(77.70/65.15) & 42.06(62.49/31.70) & \blue{63.28(73.36/55.64) } & \blue{65.16(63.15/67.33) } \\ & & (0.73) & (1.23) & (0.40) & (0.71) & (1.10) & (0.31) & (0.35) & (0.74) \\ \cmidrule(lr){2-10} & \multirow{2}{*}{MeTaL} & 66.34(67.46/65.28) & 61.74(61.55/61.97) & 
77.80(83.77/72.63) & \red{79.02(85.98/73.12) } & \red{71.80(76.17/67.93) } & 42.07(62.69/31.66) & 63.00(73.02/55.40) & 63.08(60.94/65.36) \\ & & (1.29) & (1.84) & (0.21) & (0.59) & (0.81) & (0.74) & (0.28) & (0.46) \\ \cmidrule(lr){2-10} & \multirow{2}{*}{FS} & 67.54(69.81/65.43) & \colorbox{lightgray!60}{\red{66.58(72.23/61.76) }} & 62.89(79.81/52.02) & 46.50(72.13/34.34) & 36.87(63.98/26.09) & 13.52(71.73/7.47) & 49.37(75.67/36.64) & 49.63(57.13/43.91) \\ & & (1.32) & (1.40) & (1.39) & (1.32) & (1.96) & (0.74) & (0.34) & (2.49) \\ \cmidrule(lr){2-10} & \multirow{2}{*}{HMM} & \colorbox{lightgray!60}{\red{68.48(71.04/66.17) }} & 64.25(68.96/60.16) & 68.70(81.86/59.20) & 65.52(87.25/52.52) & 71.51(75.86/67.66) & 38.10(61.40/27.62) & 63.07(76.72/53.55) & 61.13(63.55/58.93) \\ & & (0.16) & (1.65) & (0.82) & (1.44) & (0.58) & (0.57) & (0.53) & (0.47) \\ \cmidrule(lr){2-10} & \multirow{2}{*}{CHMM} & \blue{68.30(69.10/67.54) } & \blue{65.16(63.45/66.99) } & \red{77.98(83.74/72.98) } & 78.20(85.04/72.40) & 70.58(77.41/64.87) & \blue{42.10(62.88/31.64) } & \red{63.68(73.49/56.18) } & \colorbox{lightgray!60}{\red{66.03(63.42/68.87) }} \\ & & (0.44) & (0.67) & (0.13) & (0.71) & (0.48) & (0.27) & (0.29) & (0.29)\\ \midrule[0.05pt] \midrule[0.05pt] \multirow{22}{*}{BERT-CRF} & \multirow{2}{*}{Gold} & 87.38(87.70/87.06) & 86.78(87.27/86.29) & 79.65(79.48/79.83) & 80.64(81.50/79.83) & 79.15(81.26/77.18) & 78.83(79.14/78.53) & 87.03(87.12/86.94) & 83.86(82.32/85.45) \\ & & (0.34) & (0.84) & (0.29) & (0.27) & (0.77) & (0.44) & (0.91) & (0.18) \\ \cmidrule(lr){2-10} & \multirow{2}{*}{MV} & 66.63(67.68/65.62) & 62.09(61.89/62.29) & 74.93(75.84/74.04) & 72.87(83.57/64.63) & 71.12(76.74/66.34) & \red{42.95(63.18/32.54) } & 63.71(73.46/56.25) & 63.97(61.15/67.08) \\ & & (0.85) & (1.06) & (0.32) & (0.62) & (1.83) & (0.43) & (0.23) & (0.58) \\ \cmidrule(lr){2-10} & \multirow{2}{*}{WMV} & 64.38(66.55/62.35) & 59.96(60.33/59.73) & 75.32(77.59/73.18) & \blue{73.23(83.77/65.07) } & 71.09(76.16/66.68) & 42.62(63.56/32.06) & 63.44(72.85/56.19) & 61.29(58.70/64.14) \\ & & (1.09) & (1.08) & (0.39) & (0.71) & (0.73) & (0.23) & (0.29) & (0.32) \\ \cmidrule(lr){2-10} & \multirow{2}{*}{DS} & 53.89(54.10/53.68) & 48.89(46.80/51.20) & \red{75.42(76.91/74.00) } & 72.91(82.60/65.30) & 70.19(76.49/64.87) & 42.26(62.65/31.89) & 58.89(69.67/51.01) & 48.55(46.97/50.26) \\ & & (1.42) & (1.59) & (0.32) & (0.52) & (1.46) & (0.78) & (0.34) & (1.23) \\ \cmidrule(lr){2-10} & \multirow{2}{*}{DP} & 65.48(66.76/64.28) & 61.09(61.07/61.12) & 75.08(76.78/73.47) & 72.86(81.54/65.85) & \red{71.46(76.42/67.14) } & 42.27(62.81/31.86) & 63.92(73.05/56.84) & \blue{65.09(63.27/67.04) } \\ & & (0.37) & (1.53) & (0.55) & (0.92) & (0.86) & (0.53) & (0.36) & (0.31)\\ \cmidrule(lr){2-10} & \multirow{2}{*}{MeTaL} & 65.11(66.87/63.45) & 58.94(61.53/56.75) & \blue{75.32(76.71/73.99) } & \red{74.16(82.66/67.31) } & \blue{71.24(76.66/66.62) } & 42.26(62.82/31.84) & \blue{64.19(73.30/57.10) } & 62.13(60.26/64.13) \\ & & (0.69) & (3.22) & (0.20) & (1.02) & (1.14) & (0.49) & (0.52) & (0.40) \\ \cmidrule(lr){2-10} & \multirow{2}{*}{FS} & \blue{67.34(70.05/64.83) } & \red{66.44(72.86/61.17) } & 59.38(71.35/50.94) & 44.12(73.05/31.62) & 38.57(61.91/28.09) & 13.80(72.63/7.62) & 49.79(75.45/37.17) & 42.45(44.03/41.51) \\ & & (0.75) & (1.40) & (1.30) & (1.15) & (2.55) & (0.23) & (1.03) & (5.05) \\ \cmidrule(lr){2-10} & \multirow{2}{*}{HMM} & \red{67.49(71.26/64.14) } & \blue{63.31(70.95/57.33) } & 67.37(75.17/61.12) & 61.43(84.29/48.36) & 70.28(76.41/65.08) & 
39.51(62.49/28.90) & 63.38(76.46/54.15) & 61.29(63.86/58.93) \\ & & (0.89) & (1.02) & (0.70) & (1.60) & (0.71) & (0.72) & (0.81) & (0.78) \\ \cmidrule(lr){2-10} & \multirow{2}{*}{CHMM} & 66.72(67.17/66.27) & 63.06(62.12/64.11) & 75.21(76.61/73.88) & 72.96(81.31/66.25) & 71.17(76.66/66.43) & \blue{42.79(63.19/32.35) } & \colorbox{lightgray!60}{\red{64.58(74.77/56.84) }} & \red{65.26(61.99/68.91) } \\ & & (0.41) & (1.91) & (0.41) & (0.93) & (0.95) & (0.22) & (0.74) & (0.20) \\ \midrule[0.05pt] \midrule[0.05pt] \multicolumn{2}{c}{\multirow{2}{*}{BERT-ConNet}} & 67.83(69.37/66.40) & 64.18(72.17/57.92) & 72.87(73.25/72.60) & 71.40(80.30/64.56) & 67.32(73.60/62.14) & 42.37(62.88/31.95) & 64.12(74.03/56.56) & 60.36(57.81/63.21) \\ & & (0.62) & (1.71) & (0.91) & (1.81) & (1.24) & (0.72) & (0.51) & (0.61)\\ \bottomrule \end{tabular} } \label{tab:seq_full} \end{table*}
\section{Key Information}
\subsection{Dataset Documentation}
Each dataset is provided in \emph{json} format; there are three json files corresponding to the train, validation, and test splits. Each data point contains the following fields:
\begin{itemize}
\item \texttt{id}: unique identifier for the example;
\item \texttt{label}: the label of the example;
\item \texttt{weak\_labels}: the output of the labeling functions;
\item \texttt{data}: a dictionary containing the raw data.
\end{itemize}
Details of each dataset can be found in App.~\ref{sec:dataset}.
\subsection{Intended Uses}
\textsc{Wrench}\xspace is intended for researchers in machine learning and related fields to develop novel methods for the weak supervision problem, and for data scientists to apply machine learning algorithms that require manual annotations.
\subsection{Hosting and Maintenance Plan}
The \textsc{Wrench}\xspace codebase is hosted and version-tracked via GitHub. It will be permanently available at \url{https://github.com/JieyuZ2/wrench}. Download links for all the datasets can be found in the GitHub repository. \textsc{Wrench}\xspace is a community-driven and open-source initiative. We are committed to, and have the resources for, maintaining and actively developing \textsc{Wrench}\xspace for at least the next five years. We plan to grow \textsc{Wrench}\xspace by including new learning tasks and datasets. We welcome external contributors. Specifically, we plan to incorporate the following aspects:

\noindent \textbf{(1) Learning the dependency structure of supervision sources.} The dependency structure among supervision sources is frequently ignored in applications of weak supervision. However, as reported in \cite{MisspecificationInDP}, this unawareness and the consequent misspecification of the dependency structure can result in a serious performance drop. To address this, several approaches have been proposed~\cite{Bach2017LearningTS,Varma2017InferringGM,Varma2019LearningDS}, but a benchmark for this purpose is missing. To fill this gap, we plan to add more datasets with varying dependency structures for benchmarking dependency structure learning in weak supervision.

\noindent \textbf{(2) Active generation and repurposing of supervision sources.} To further reduce human annotation effort, researchers have very recently turned to active generation~\cite{varma2018snuba,TALLOR,glara,boecking2021interactive,darwin} and repurposing~\cite{DBLP:conf/naacl/GoelORVR21} of supervision sources. In the future, we plan to incorporate these new tasks and methods into \textsc{Wrench}\xspace to extend its scope.
\noindent \textbf{(3) More applications of weak supervision.} \textsc{Wrench}\xspace currently focuses on two applications of weak supervision: classification and sequence tagging. To unleash the potential of weak supervision and push the community forward, we plan to add more applications to \textsc{Wrench}\xspace in the future.
\subsection{Licensing}
We license our work under Apache 2.0\footnote{\url{https://www.apache.org/licenses/LICENSE-2.0}}. All the datasets are publicly released by previous work.
\subsection{Author Statement}
We, the authors, will bear all responsibility in case of violation of rights.
\subsection{Limitations}
Weak supervision is a growing field, and there are important tasks and datasets yet to be included in \textsc{Wrench}\xspace. However, \textsc{Wrench}\xspace is an ongoing effort, and we plan to continuously include more datasets and tasks in the future.
\subsection{Potential Negative Societal Impacts}
\textsc{Wrench}\xspace does not involve human subjects research and does not contain any personally identifiable information. Possible misuse may lead to negative outcomes, such as direct use of the model predictions to detect spam messages without prior rigorous validation of model performance.
\section{Real-world Datasets}
\label{sec:dataset}
\subsection{Detailed Statistics and Visualization}
We provide the detailed statistics of the real-world datasets in Tables~\ref{tab:dataset_stats_class}--\ref{tab:dataset_stats_seq}. We also visualize the dataset statistics in Fig.~\ref{fig:dataset_stats_bubble}, where each value is normalized to the [0, 1] range across datasets.
\begin{table*}[h] \centering \caption{Detailed statistics of classification datasets included in \textsc{Wrench}\xspace.} \scalebox{0.5}{ \begin{tabular}{ l l l c c c c c c c c c c} \toprule & \multicolumn{5}{c}{} & \multicolumn{4}{c}{\textbf{Avr. over LFs}} & \multicolumn{1}{c}{\textbf{Train}} & \multicolumn{1}{c}{\textbf{Dev}} & \multicolumn{1}{c}{\textbf{Test}} \\ \cmidrule(lr){7-10} \cmidrule(lr){11-11} \cmidrule(lr){12-12} \cmidrule(lr){13-13} \textbf{Task ($\downarrow$)} &\textbf{Domain ($\downarrow$)} & \textbf{Dataset ($\downarrow$)} & \textbf{\#Label} & \textbf{\#LF} & \textbf{Ovr.
\%Coverage}& \textbf{\%Coverage} & \textbf{\%Overlap} & \textbf{\%Conflict} & \textbf{\%Accuracy} & \textbf{\#Data} & \textbf{\#Data} & \textbf{\#Data} \\ \midrule \multirow{1}{*}{Income Classification} & Tabular Data & Census~\cite{kohavi1996scaling, Awasthi2020Learning} & 2 & 83 & 99.13 & 5.41 & 5.34 & 1.50 & 78.74 & 10,083 & 5,561 & 16,281 \\\midrule[0.05pt] \midrule[0.05pt] \multirow{2}{*}{Sentiment Classification} & Movie & IMDb~\cite{IMDB,ren2020denoising} & 2 & 5 & 87.58 & 23.60 & 11.60 & 4.50 & 69.88 & 20,000 & 2,500 & 2,500 \\ & Review & Yelp~\cite{AGNews,ren2020denoising} & 2 & 8 & 82.78 & 18.34 & 13.58 & 4.94 & 73.05 & 30,400 & 3,800 & 3,800 \\\midrule[0.05pt] \midrule[0.05pt] \multirow{2}{*}{Spam Classification } & Review & Youtube~\cite{youtube} & 2 & 10 & 87.70 & 16.34 & 12.49 & 7.14 & 83.16 & 1,586 & 120 & 250 \\ & Text Message & SMS~\cite{sms, Awasthi2020Learning} & 2 & 73 & 40.52 & 0.72 & 0.29 & 0.01 & 97.26 & 4,571 & 500 & 2719 \\\midrule[0.05pt] \midrule[0.05pt] \multirow{1}{*}{Topic Classification} & News & AGNews~\cite{AGNews,ren2020denoising} & 4 & 9 & 69.08 & 10.34 & 5.05 & 2.43 & 81.66 & 96,000 & 12,000 & 12,000 \\\midrule[0.05pt] \midrule[0.05pt] \multirow{1}{*}{Question Classification} & Web Query & TREC~\cite{trec, Awasthi2020Learning} & 6 & 68 & 95.13 & 2.55 & 1.82 & 0.84 & 75.92 & 4,965 & 500 & 500 \\\midrule[0.05pt] \midrule[0.05pt] \multirow{4}{*}{Relation Classification} & News & Spouse~\cite{spouse, ratner2017snorkel} & 2 & 9 & 25.77 & 3.75 & 1.66 & 0.65 & -- & 22,254 & 2,811 & 2,701 \\ & Biomedical & CDR~\cite{davis2017comparative, ratner2017snorkel} & 2 & 33 & 90.72 & 6.27 & 5.36 & 3.21 & 75.27 & 8,430 & 920 & 4,673 \\ & Web Text & SemEval~\cite{hendrickx2010semeval, zhou2020nero} & 9 & 164 & 100.00 & 0.77 & 0.32 & 0.14 & 97.69 & 1,749 & 200 & 692 \\ & Chemical & ChemProt~\cite{chemprot,yu-etal-2021-fine} & 10 & 26 & 85.62 & 5.93 & 4.40 & 3.95 & 46.65 & 12,861 & 1,607 & 1,607 \\\midrule[0.05pt] \midrule[0.05pt] \multirow{3}{*}{Image Classification} & \multirow{3}{*}{Video} & Commercial~\cite{fu2020fast} & 2 & 4 & 100.00 & 54.51 & 53.09 & 12.51 & 91.33 & 64,130 & 9,479 & 7,496 \\ & & Tennis Rally~\cite{fu2020fast} & 2 & 6 & 100.00 & 66.86 & 66.86 & 16.76 & 81.70 & 6,959 & 746 & 1,098 \\ & & Basketball~\cite{fu2020fast} & 2 & 4 & 100.00 & 49.24 & 48.46 & 18.50 & 62.04 & 17,970 & 1,064 & 1,222 \\ \bottomrule \end{tabular} } \label{tab:dataset_stats_class} \end{table*} \begin{table*}[h] \centering \caption{Detailed statistics of sequence tagging datasets included in \textsc{Wrench}\xspace.} \scalebox{0.5}{ \begin{tabular}{ l l c c c c c c c c c c} \toprule \multicolumn{5}{c}{} & \multicolumn{4}{c}{\textbf{Avr. over LFs}} & \multicolumn{1}{c}{\textbf{Train}} & \multicolumn{1}{c}{\textbf{Dev}} & \multicolumn{1}{c}{\textbf{Test}} \\ \cmidrule(lr){6-9} \cmidrule(lr){10-10} \cmidrule(lr){11-11} \cmidrule(lr){12-12} \textbf{Domain ($\downarrow$)} & \textbf{Dataset ($\downarrow$)} & \textbf{\#Label} & \textbf{\#LF} & \textbf{Ovr. 
\%Coverage}& \textbf{\%Coverage} & \textbf{\%Overlap} & \textbf{\%Conflict} & \textbf{\%Precision} & \textbf{\#Data} & \textbf{\#Data} & \textbf{\#Data} \\ \midrule News & CoNLL-03~\cite{conll03,lison2021skweak} & 4 & 16 & 79.51 & 23.71 & 4.30 & 1.44 & 72.19 & 14,041 & 3250 & 3453 \\\midrule[0.05pt] \midrule[0.05pt] \multirow{2}{*}{Web Text} & WikiGold~\cite{wikigold,lison2021skweak} & 4 & 16 & 69.68 & 20.30 & 3.65 & 1.61 & 65.87 & 1,355 & 169 & 170 \\ & OntoNotes 5.0~\cite{weischedel2011ontonotes} & 18 & 17 & 66.79 & 12.45 & 1.55 & 0.54 & 54.84 & 115,812 & 5,000 & 22,897\\ \midrule[0.05pt] \midrule[0.05pt] \multirow{2}{*}{Biomedical} & BC5CDR~\cite{cdr,li2021bertifying} & 2 & 9 & 86.62 & 16.75 & 1.77 & 0.17 & 88.23 & 500 & 500 & 500 \\ & NCBI-Disease~\cite{dougan2014ncbi,li2021bertifying} & 1 & 5 & 77.15 & 21.16 & 1.40 & 0.18 & 74.88 & 592 & 99 & 99 \\\midrule[0.05pt] \midrule[0.05pt] \multirow{2}{*}{Review} & Laptop-Review~\cite{laptop,li2021bertifying} & 1 & 3 & 70.62 & 29.37 & 1.65 & 0.25 & 70.30 & 2,436 & 609 & 800 \\ & MIT-Restaurant~\cite{mitr,Awasthi2020Learning} & 8 & 16 & 47.84 & 2.87 & 0.37 & 0.06 & 76.65 & 7,159 & 500 & 1,521 \\\midrule[0.05pt] \midrule[0.05pt] Movie Query & MIT-Movies~\cite{mitmovie} & 12 & 7 & 64.14 & 16.60 & 5.29 & 0.97 & 75.10 & 9,241 & 500 & 2,441 \\ \bottomrule \end{tabular} } \label{tab:dataset_stats_seq} \end{table*} \begin{figure*}[h] \centering \includegraphics[width=1.0\textwidth]{figures/stats_all_bubble.png} \caption{Visualization of all statistics of datasets.} \label{fig:dataset_stats_bubble} \end{figure*} \subsection{Classification Datasets} \textbf{Census~\cite{kohavi1996scaling}}. This UCI dataset is extracted from the 1994 U.S. census. It lists a total of 13 features of an individual such as age, education level, marital status, country of origin etc. The primary task on it is binary classification - whether a person earns more than 50K or not. The train data consists of 32,563 records. The labeling functions are generated synthetically by \cite{Awasthi2020Learning} as follows: We hold out disjoint 16k random points from the training dataset as a proxy for human knowledge and extract a PART decision list~\cite{frank1998generating} from it as labeling functions. \textbf{SMS~\cite{sms}}. This dataset contains 4,571 text messages labeled as spam/not-spam, out of which 500 were held out for validation and 2719 for testing. The labeling functions are generated manually by \cite{Awasthi2020Learning}, including 16 keyword-based and 57 regular expression-based rules. \textbf{AGNews~\cite{AGNews}}. This dataset is a collection of more than one million news articles. It is constructed by \cite{ren2020denoising} choosing the 4 largest topic classes from the original corpus. The total number of training samples is 96K and both validation and testing are 12K. The labeling functions are also generated by \cite{ren2020denoising}, including 9 keyword-based rules. \textbf{Yelp~\cite{AGNews}}. This dataset is a subset of Yelp's businesses, reviews, and user data for binary sentiment classification. It is constructed by \cite{ren2020denoising}, including 30.4K training samples, 3.8K validation samples and 3.8K testing samples. The labeling functions are also generated by \cite{ren2020denoising}, including 7 heuristic rules on keywords and 1 third-party model on polarity of sentiment. \textbf{Youtube~\cite{youtube}}. This dataset is a public set of comments collected for spam detection. 
It consists of five sets of comments extracted from five videos; the training set contains 1,586 real messages, while the validation and test sets contain 120 and 250 samples, respectively. The labeling functions are generated manually by \cite{ratner2017snorkel}, including 5 keyword-based rules, 1 regular-expression-based rule, 1 heuristic rule, 1 complex-preprocessor rule, and 2 third-party-model rules.

\textbf{IMDb~\cite{IMDB}}. This is a dataset for binary sentiment classification containing 20,000 highly polar movie reviews for training, 2,500 for validation, and 2,500 for testing. It is constructed by \cite{ren2020denoising}. The labeling functions are also generated by \cite{ren2020denoising}, including 4 heuristic rules on keywords and 1 heuristic rule on expressions.

\textbf{TREC~\cite{trec}}. This dataset contains 4,965 labeled questions in the training set, 500 in the validation set, and another 500 in the test set. It has 6 classes. The labeling functions are generated by \cite{Awasthi2020Learning}, including 68 keyword-based rules.

\textbf{Spouse~\cite{spouse}}. This dataset is constructed by \cite{ratner2017snorkel} to identify mentions of spouse relationships in a set of news articles from Signal Media~\cite{spouse}. It contains 22,254 training samples, 2,811 validation samples, and 2,701 testing samples. The labeling functions are generated by Snorkel\footnote{\url{https://github.com/snorkel-team/snorkel-tutorials/tree/master/spouse}}. Note that the gold labels for the training set are not available. Therefore, we are unable to calculate the accuracy of the labeling functions on the training set.

\textbf{CDR~\cite{cdr}}. This dataset is constructed by \cite{ratner2017snorkel}, where the task is to identify mentions of causal links between chemicals and diseases in PubMed abstracts. It has 8,430 training samples, 920 validation samples, and 4,673 testing samples. The labeling functions can be found in the Snorkel tutorial\footnote{\url{https://github.com/snorkel-team/snorkel-extraction/tree/master/tutorials/cdr}}.

\textbf{SemEval~\cite{hendrickx2010semeval}}. This relation classification dataset is constructed by \cite{zhou2020nero} with 9 relation types. The sizes of the training, validation, and test sets are 1,749, 200, and 692, respectively. The labeling functions are generated by \cite{zhou2020nero}, including 164 heuristic rules.

\textbf{ChemProt~\cite{chemprot}}. This is a 10-way relation classification dataset constructed by \cite{yu-etal-2021-fine}, containing 12,861 training samples, 1,607 validation samples, and 1,607 testing samples. The labeling functions are generated by \cite{yu-etal-2021-fine}, including 26 keyword-based rules.

\textbf{Basketball, Commercial, Tennis Rally~\cite{fu2020fast}}. These are video frame classification datasets collected by~\cite{fu2020fast}. All the labeling functions are the same as in previous work~\cite{fu2020fast}. Due to privacy issues, we only have access to features extracted by a ResNet-101 model pre-trained on ImageNet.

\subsection{Sequence Tagging Datasets}
\textbf{CoNLL-03~\cite{conll03}}. This is a well-known open-domain NER dataset from the CoNLL 2003 Shared Task. It consists of 1,393 English news articles and is annotated with 4 entity types: \emph{person}, \emph{location}, \emph{organization}, and \emph{miscellaneous}\footnote{In the original dataset, \texttt{-DOCSTART-} lines separate documents, but these lines are removed here.}.
Note that different papers~\cite{peng2019distantly,li2021bertifying,lison2020named} use different weak supervision sources of varying quality. In our study, we use the labeling functions generated by~\cite{lison2020named} for a fair comparison. (We use \texttt{BTC}, \texttt{core\_web\_md}, \texttt{crunchbase\_cased}, \texttt{crunchbase\_uncased}, \texttt{full\_name\_detector}, \texttt{geo\_cased}, \texttt{geo\_uncased}, \texttt{misc\_detector}, \texttt{wiki\_cased}, \texttt{wiki\_uncased}, \texttt{multitoken\_crunchbase\_cased}, \texttt{multitoken\_crunchbase\_uncased}, \texttt{multitoken\_geo\_cased}, \texttt{multitoken\_geo\_uncased}, \texttt{multitoken\_wiki\_cased}, \texttt{multitoken\_wiki\_uncased} as weak supervision sources.)

\textbf{BC5CDR~\cite{cdr}}. This dataset accompanies the BioCreative V CDR challenge and consists of 1,500 PubMed articles annotated with \emph{chemical} and \emph{disease} mentions. The labeling functions are selected from~\cite{li2021bertifying}. (We use \texttt{DictCore-Chemical}, \texttt{DictCore-Chemical-Exact}, \texttt{DictCore-Disease}, \texttt{DictCore-Disease-Exact}, \texttt{Element, Ion, or Isotope}, \texttt{Organic Chemical}, \texttt{Antibiotic}, \texttt{Disease or Syndrome}, \texttt{PostHyphen}, \texttt{ExtractedPhrase} as weak supervision sources.)

\textbf{NCBI-Disease~\cite{dougan2014ncbi}}. This dataset includes 793 PubMed abstracts annotated with \emph{disease} mentions only. The labeling functions are the same as in~\cite{li2021bertifying}.

\textbf{Laptop-Review~\cite{laptop}}. This dataset is from the SemEval 2014 Challenge, Task 4 Subtask 1, and consists of 3,845 sentences with \emph{laptop}-related entity mentions. The labeling functions are selected from~\cite{li2021bertifying}. (We use \texttt{CoreDictionary}, \texttt{ExtractedPhrase}, \texttt{ConsecutiveCapitals} as weak supervision sources.)

\textbf{Wikigold~\cite{wikigold}}. This dataset contains a set of Wikipedia articles (40k tokens) randomly selected from a 2008 English dump and manually annotated with the four CoNLL-03 entity types. Since Wikigold shares its label set with CoNLL-03, we use the same labeling functions provided in~\cite{lison2020named}. (We use \texttt{BTC}, \texttt{core\_web\_md}, \texttt{crunchbase\_cased}, \texttt{crunchbase\_uncased}, \texttt{full\_name\_detector}, \texttt{geo\_cased}, \texttt{geo\_uncased}, \texttt{misc\_detector}, \texttt{wiki\_cased}, \texttt{wiki\_uncased}, \texttt{multitoken\_crunchbase\_cased}, \texttt{multitoken\_crunchbase\_uncased}, \texttt{multitoken\_geo\_cased}, \texttt{multitoken\_geo\_uncased}, \texttt{multitoken\_wiki\_cased}, \texttt{multitoken\_wiki\_uncased} as weak supervision sources.)

\textbf{MIT-Restaurant~\cite{mitr}}. This is a slot-filling dataset including sentences about restaurant search queries. It contains 8 entity types and 9,180 examples. We follow the data split of~\cite{Awasthi2020Learning} and use the regular expressions from their paper as weak supervision. In addition, we extract \texttt{restaurant names} and \texttt{cuisines} from the Yelp database\footnote{\url{https://www.yelp.com/dataset}} to augment the labeling functions; a sketch of this kind of dictionary-based rule follows below.\footnote{In~\cite{Awasthi2020Learning,karamanolakis2021self,yu-etal-2021-fine}, this dataset is treated as a token-level classification problem and evaluated with \emph{token-level} F1 score, which conflicts with the original evaluation protocol of the dataset~\cite{mitr}.}
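As an illustration of these dictionary-based weak supervision sources, here is a minimal, hypothetical sketch of a gazetteer labeling function that tags tokens by dictionary lookup under the IO scheme; the dictionary contents are assumed for illustration, not the actual Yelp-derived lists.

\begin{verbatim}
# Hypothetical gazetteer labeling function for sequence tagging:
# tag tokens by dictionary lookup (IO scheme); dictionary is illustrative.
CUISINES = {"italian", "thai", "sushi"}  # e.g., mined from the Yelp database

def lf_cuisine(tokens):
    # Emit "I-Cuisine" for tokens found in the dictionary, "O" otherwise.
    return ["I-Cuisine" if tok.lower() in CUISINES else "O" for tok in tokens]

print(lf_cuisine("any good thai places nearby".split()))
# ['O', 'O', 'I-Cuisine', 'O', 'O']
\end{verbatim}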
\textbf{MIT-Movies~\cite{mitmovie}}. This dataset includes sentences about movie search queries with 12 entity types. For this dataset, we curate the weak supervision via several class-related keywords, semantic patterns based on regular expressions (listed in Table~\ref{tab:mitmovies}), and knowledge-base matching. Specifically, we collect movie-related information from JsonMC\footnote{\url{https://github.com/jsonmc/jsonmc}}, Movies-dataset\footnote{\url{https://github.com/randfun/movies-dataset}}, IMDB\footnote{\url{https://www.imdb.com/}}, and Wikidata\footnote{\url{https://www.wikidata.org/}}. There are 7 weak supervision sources in total.

\textbf{OntoNotes 5.0~\cite{weischedel2011ontonotes}}. This is a fine-grained NER dataset with text documents from multiple domains, including broadcast conversation, P2.5 data, and Web data. It consists of about 113 thousand training samples and is annotated with 18 entity types. We adopt a set of the weak supervision sources presented in Skweak\footnote{\url{https://github.com/NorskRegnesentral/skweak}}~\cite{lison2021skweak}, including \texttt{money\_detector}, \texttt{date\_detector}, \texttt{number\_detector}, \texttt{company\_type\_detector}, \texttt{full\_name\_detector}, \texttt{crunchbase\_cased}, \texttt{crunchbase\_uncased}, \texttt{geo\_cased}, \texttt{geo\_uncased}, \texttt{misc\_detector}, \texttt{wiki\_cased}, \texttt{wiki\_uncased} (12 in total). Since some of the weak supervision sources in Skweak only provide coarse-level annotation (\eg, they can only label the entity types listed in CoNLL-03: \emph{person}, \emph{location}, \emph{organization}, and \emph{miscellaneous}), we follow the method used in~\cite{liang2020bond} and use SPARQL to query the categories of an entity in the Wikipedia knowledge ontology. In addition, we extract multi-token phrases from the Wikidata knowledge base and match them against the corpus. Finally, we include several class-related keywords and regular expressions (listed in Table~\ref{tab:ontonotes}) as labeling functions. As a result, there are 17 weak supervision sources. To control the size of the validation set, we use only the first 5,000 sentences for validation and put the rest into the test set.

\section{Compared Methods}
\label{sec:methods}

\subsection{Classification}

\subsubsection{Label Model}

\textbf{MV / WMV:} We adopt the classic majority voting (MV) algorithm as one label model, as well as its extension, weighted majority voting (WMV), in which the final votes are reweighted by the label prior. Notably, an abstaining LF, \ie, $\lambda_j = -1$, does not contribute to the final votes.

\textbf{DS~\cite{DawidSkene}:} The Dawid-Skene (DS) model estimates the accuracy of each LF with the expectation-maximization (EM) algorithm by assuming a naive Bayes distribution over the LFs' votes and the latent ground truth.

\textbf{DP~\cite{Ratner16}:} Data programming (DP) models the distribution $p(L, Y)$ as a factor graph. It can describe the distribution in terms of pre-defined factor functions, which reflect the dependencies among any subset of the random variables. The log-likelihood is optimized by SGD, where the gradient is estimated by Gibbs sampling, similar to contrastive divergence~\cite{HintonCD}.

\textbf{MeTaL~\cite{Ratner19}:} MeTaL models the distribution via a Markov network and recovers the parameters via a matrix-completion-style approach. Notably, it requires the label prior as input.

\textbf{FS~\cite{fu2020fast}:} FlyingSquid (FS) models the distribution as a binary Ising model, where each LF is represented by two random variables. A triplet method is used to recover the parameters, so no learning is needed, which makes FS much faster than DP and MeTaL. Notably, FlyingSquid is designed for binary classification, and the authors suggest repeatedly applying a one-versus-all reduction to extend the core algorithm to multiple classes. The label prior is also required.
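To make the simplest of these label models concrete, the following is a minimal sketch of (weighted) majority voting over a weak-label matrix. The matrix encoding (one row per example, one column per LF, $-1$ for abstentions) mirrors the convention above, while the function names are our own.

\begin{verbatim}
# Minimal sketch of majority voting (MV) and its weighted variant (WMV).
# L[i, j] is LF j's vote on example i; -1 means the LF abstained.
import numpy as np

def majority_vote(L, n_classes, prior=None):
    prior = np.ones(n_classes) if prior is None else np.asarray(prior)
    y = np.empty(len(L), dtype=int)
    for i, row in enumerate(L):
        votes = np.zeros(n_classes)
        for v in row:
            if v != -1:          # abstentions do not contribute to the votes
                votes[v] += 1
        y[i] = np.argmax(votes * prior)  # WMV: reweight votes by label prior
    return y

L = np.array([[1, -1, 0], [1, 1, -1]])
print(majority_vote(L, n_classes=2))          # [0 1]; ties break to class 0
print(majority_vote(L, 2, prior=[0.3, 0.7]))  # [1 1]; prior breaks the tie
\end{verbatim}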
\subsubsection{End Model}

\textbf{LR:} We choose logistic regression (LR) as an example of a linear model.

\textbf{MLP:} For a non-linear model, we take the multi-layer perceptron (MLP) as an example.

\textbf{BERT~\cite{devlin2019bert} / RoBERTa~\cite{liu2019roberta}:} It is also of interest how recent large-scale pretrained language models perform as end models on textual datasets, so we include both BERT~\cite{devlin2019bert} and RoBERTa~\cite{liu2019roberta}, abbreviated as \textbf{B} and \textbf{R}, respectively. Notably, these pretrained language models only work for textual datasets; for the text relation classification task, we adopt the R-BERT~\cite{wu2019enriching} architecture.

\textbf{COSINE~\cite{yu-etal-2021-fine}}: COSINE uses self-training and contrastive learning to bootstrap over unlabeled data to improve a pretrained-language-model-based end model. We denote the BERT-based and RoBERTa-based variants of COSINE by \textbf{BC} and \textbf{RC}, respectively.

\subsubsection{Joint Model}

\textbf{Denoise~\cite{ren2020denoising}:} Denoise adopts an attention network to aggregate over weak labels and a neural classifier to leverage the data features. The two components are trained jointly in an end-to-end manner.

\subsection{Sequence Tagging}

\subsubsection{Label Models}

\textbf{HMM~\cite{lison2020named}}: The hidden Markov model (HMM)~\cite{lison2020named} represents the true labels as latent variables and infers them from the independently observed noisy labels through unsupervised learning with the expectation-maximization algorithm~\cite{welch2003hidden}.

\textbf{CHMM~\cite{li2021bertifying}}: The conditional hidden Markov model (CHMM)~\cite{li2021bertifying} substitutes the constant transition and emission matrices with token-wise counterparts predicted from the BERT embeddings of the input tokens. These token-wise probabilities better capture how the true labels should evolve according to the input tokens.

\subsubsection{End Model}

\textbf{LSTM-CNNs-CRF~\cite{ma2016end}}: LSTM-CNNs-CRF encodes character-level features with convolutional neural networks (CNNs) and models word-level features with a bi-directional long short-term memory (LSTM) network~\cite{hochreiter1997long}. A conditional random field (CRF) layer~\cite{lafferty2001conditional} is stacked on top of the LSTM to impose constraints over adjacent output labels.

\textbf{BERT}: This end model leverages the contextual knowledge stored in pre-trained BERT~\cite{devlin2019bert}. In our experiments, we use \texttt{BERT-base-cased} as the encoder and stack a linear layer on top to predict token labels. We also run experiments with a CRF layer stacked on top of the model and report the best performance.

\subsubsection{Joint Model}

\textbf{ConNet~\cite{lan2020connet}}: Consensus Network (ConNet) adopts a two-stage training approach for learning with multiple supervision signals. In the decoupling phase, it trains a BiLSTM-CNN-CRF~\cite{ma2016end} model with multiple parallel CRF layers, one per labeling source. The aggregation phase then aggregates the CRF transitions with attention scores and outputs a unified label sequence.
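Most of the methods above, for both tasks, share a common two-stage pattern: a label model first aggregates the LF votes into (soft) training labels, and an end model is then fit on those labels together with the data features (joint models fuse the two stages). As a minimal illustration for the classification case, the following sketch wires the two stages together; the helper names are ours and do not correspond to \textsc{Wrench}\xspace's actual API.

\begin{verbatim}
# Illustrative two-stage pipeline: label model -> soft labels -> end model.
import numpy as np
from sklearn.linear_model import LogisticRegression

def soft_majority_vote(L, n_classes):
    # Stage 1: aggregate the weak-label matrix (-1 = abstain) into soft labels.
    probs = np.full((len(L), n_classes), 1.0 / n_classes)
    for i, row in enumerate(L):
        counts = np.bincount(row[row != -1], minlength=n_classes)
        if counts.sum() > 0:
            probs[i] = counts / counts.sum()
    return probs

X = np.random.randn(4, 8)   # data features, e.g., BERT [CLS] embeddings
L = np.array([[1, -1, 1], [0, 0, -1], [1, 1, 1], [-1, 0, 0]])
y_soft = soft_majority_vote(L, n_classes=2)
end_model = LogisticRegression().fit(X, y_soft.argmax(axis=1))  # Stage 2
\end{verbatim}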
\section{Adapting Label Models for the Sequence Tagging Problem}
\label{sec:adapt}

\subsection{Label Correction Technique}

One of the main differences between sequence tagging and classification is that sequence tagging has a special type `\texttt{O}' indicating that a token does not belong to any pre-defined type. Consequently, a token matched by no labeling function is automatically labeled as type `\texttt{O}'. In our study, we use a label correction technique to differentiate the type `\texttt{O}' from \texttt{Abstain}. Let $L_{i,c}$ denote the raw weak label assigned to token $i$ by the $c$-th of the $n$ labeling functions; the corrected label $l_{i,c}$ is
$$
l_{i,c}=
\begin{cases}
L_{i,c} & \text{if } L_{i,c}\neq \texttt{O};\\
\texttt{Abstain}~(-1) & \text{if } L_{i,c}=\texttt{O} \text{ and } \exists\, c' \in [1, n] \text{ s.t. } L_{i,c'}\neq \texttt{O};\\
\texttt{O} & \text{otherwise.}
\end{cases}
$$
In other words, an `\texttt{O}' vote is reinterpreted as an abstention whenever at least one other labeling function matched the token. A short code sketch of this rule is given below, before Table~\ref{tab:seq_adapt}. We also tried the alternative of simply regarding the weak label of tokens matched by no labeling function as `\texttt{O}'. The comparison is shown in Table~\ref{tab:seq_adapt}. From the table, it is clear that regarding unmatched tokens as type \texttt{O} leads to a drastic decrease in final performance. Specifically, the recall of the model is much lower, since without label correction most tokens are recognized as \texttt{O}. One exception is the DS method, which achieves better performance on 4 out of 8 datasets without label correction. However, when omitting label correction is better, the gain is only 0.56\%--5.66\% in F1 score; in contrast, when label correction is better, the gain is much larger, \ie, between 4.76\% and 42.34\%. Therefore, label correction is generally the better way to adapt classification label models to sequence tagging problems, and we use this technique in our experiments by default.

\subsection{Comparison of IO and BIO Tagging Schemes}

We also compare the performance of the label models under the IO and BIO tagging schemes, as both have been adopted in previous studies~\cite{safranchik2020weakly,lison2020named,li2021bertifying}. The comparison is shown in Table~\ref{tab:seq_adapt}. We find that for datasets with a small number of entity types (\eg, BC5CDR, NCBI-Disease), the IO scheme leads to a higher F1 score. For the other datasets, there is no clear winner: the IO scheme excels on OntoNotes and MIT-Restaurant, while BIO performs better on the rest. To conclude, the optimal tagging scheme is highly data-dependent, and we use the IO tagging scheme in our experiments.
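To make the correction rule explicit, here is a minimal sketch; the per-token list representation of the LF votes is an assumption made for illustration.

\begin{verbatim}
# Minimal sketch of the label correction rule. token_votes holds the raw
# tags the n labeling functions assigned to one token ("O" when unmatched).
ABSTAIN = -1

def correct_labels(token_votes):
    if all(tag == "O" for tag in token_votes):
        return list(token_votes)          # no LF matched: the token is truly O
    # at least one LF matched: an "O" vote now means that LF abstained
    return [ABSTAIN if tag == "O" else tag for tag in token_votes]

print(correct_labels(["O", "I-PER", "O"]))  # [-1, 'I-PER', -1]
print(correct_labels(["O", "O", "O"]))      # ['O', 'O', 'O']
\end{verbatim}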
\begin{table*}[t]
\centering
\caption{\textbf{Sequence Tagging.} Comparison of label models with and without label correction, under the IO and BIO tagging schemes. Each entry is the F1 score (Precision/Recall); the number in brackets below each value is the standard deviation. Each metric value is averaged over 5 runs.}
\scalebox{0.41}{ \begin{tabular}{ l c c c c c c c c c } \toprule \textbf{Setting ($\downarrow$)} &\textbf{Label Model ($\downarrow$)} & \textbf{CoNLL-03} & \textbf{WikiGold} & \textbf{BC5CDR} & \textbf{NCBI-Disease}& \textbf{Laptop-Review} & \textbf{MIT-Restaurant} & \textbf{MIT-Movies} & \textbf{OntoNotes 5.0} \\ \midrule \multirow{12}{*}{IO Scheme, with Label Correction} & \multirow{2}{*}{MV} & 60.36(59.06/61.72) & 52.24(48.95/56.00) & 83.49(91.69/76.64) & 78.44(93.04/67.79) & 73.27(88.86/62.33) & 48.71(74.25/36.24) & 59.68(69.92/52.05) & 58.85(54.17/64.40) \\ & & (0.00) & (0.00) & (0.00) & (0.00) & (0.00) & (0.00) & (0.00) & (0.00) \\ \cmidrule(lr){2-10} & \multirow{2}{*}{WMV} & 60.26(59.03/61.54) & 52.87(50.74/55.20) & 83.49(91.66/76.66) & 78.44(93.04/67.79) & 73.27(88.86/62.33) & 48.19(73.73/35.80) & 60.37(70.98/52.52) & 57.58(53.15/62.81) \\ & & (0.00) & (0.00) & (0.00) & (0.00) & (0.00) & (0.00) & (0.00) & (0.00) \\ \cmidrule(lr){2-10} & \multirow{2}{*}{DS} & 46.76(45.29/48.32) & 42.17(40.05/44.53) & 83.49(91.66/76.66) & 78.44(93.04/67.79) & 73.27(88.86/62.33) & 46.81(71.71/34.75) & 54.06(63.64/46.99) & 37.70(34.33/41.82) \\ & & (0.00) & (0.00) & (0.00) & (0.00) & (0.00) & (0.00) & (0.00) & (0.00) \\ \cmidrule(lr){2-10} & \multirow{2}{*}{DP} & 62.43(61.62/63.26) & 54.81(53.10/56.64) & 83.50(91.69/76.65) & 78.44(93.04/67.79) & 73.27(88.86/62.33) & 47.92(73.24/35.61) & 59.92(70.65/52.01) & 61.85(57.44/66.99) \\ & & (0.22) & (0.13) & (0.00) & (0.00) & (0.00) & (0.00) & (0.43) & (0.19) \\ \cmidrule(lr){2-10} & \multirow{2}{*}{MeTaL} & 60.32(59.07/61.63) & 52.09(50.31/54.03) & 83.50(91.66/76.67) & 78.44(93.04/67.79) & 64.36(83.21/53.63) & 47.66(73.40/35.29) & 56.60(72.28/47.70) & 58.27(54.10/63.14) \\ & & (0.08) & (0.23) & (0.00) & (0.00) & (17.81) & (0.00) & (7.71) & (0.48) \\ \cmidrule(lr){2-10} & \multirow{2}{*}{FS} & 62.49(63.25/61.76) & 58.29(62.77/54.40) & 56.71(88.03/41.83) & 40.67(72.24/28.30) & 28.74(60.59/18.84) & 13.86(84.10/7.55) & 43.04(77.73/29.75) & 5.31(2.87/35.74) \\ & & (0.00) & (0.00) & (0.00) & (0.00) & (0.00) & (0.00) & (0.00) & (0.00) \\ \midrule \multirow{12}{*}{BIO Scheme, with Label Correction} & \multirow{2}{*}{MV} & 61.73 (59.70/63.89) & 55.30 (51.02/59.73) & 81.71 (88.22/76.09) & 72.37 (82.74/64.30) & 67.43 (79.01/58.80) & 47.55 (72.91/35.29) & 59.78 (70.38/51.95) & 57.89 (52.68/64.24)\\ & & (0.00) & (0.00) & (0.00) & (0.00) & (0.00) & (0.00) & (0.00) & (0.00) \\ \cmidrule(lr){2-10} & \multirow{2}{*}{WMV} & 60.93 (58.46/63.62) & 54.54 (50.57/59.20) & 80.45 (86.04/75.54) & 76.50 (89.61/66.84) & 62.09 (70.57/55.43) & 47.07 (72.33/34.91) & 60.22 (70.88/52.35) & 56.81 (51.85/62.84)\\ & & (0.00) & (0.00) & (0.00) & (0.00) & (0.00) & (0.00) & (0.00) & (0.00) \\ \cmidrule(lr){2-10} & \multirow{2}{*}{DS} & 47.29 (45.52/49.20) & 40.82 (37.50/44.80) & 81.54 (87.32/76.48) & 77.54 (91.15/67.47) & 72.25 (86.53/62.02) & 35.60 (52.71/26.88) & 55.86 (65.76/48.54) & 39.25 (36.55/42.39) \\ & & (0.00) & (0.00) & (0.00) & (0.00) & (0.00) & (0.00) & (0.00) & (0.00) \\ \cmidrule(lr){2-10} & \multirow{2}{*}{DP} & 63.62 (61.83/65.52) & 55.40 (51.86/59.46) & 83.08 (90.16/77.03) & 76.69 (89.56/67.05) & 62.15 (70.70/55.44) & 46.93 (72.15/34.78) & 60.14 (70.60/52.39) & 61.66 (56.55/67.85)\\ & & (0.10) & (0.07) & (0.09) & (0.03) & (0.00) & (0.02) & (0.05) & (0.04) \\ \cmidrule(lr){2-10} & \multirow{2}{*}{MeTaL} & 61.69 (59.57/63.95) & 54.63 (51.07/58.73) & 80.64 (86.44/75.65)& 76.37
(89.26/66.73) & 64.79 (77.84/55.43) & 46.69 (72.22/34.50) & 60.24 (70.90/52.37) & 58.75 (53.65/62.68) \\ & & (0.08) & (0.04) & (0.12) & (0.03) & (0.01) & (0.03) & (0.11) & (0.38) \\ \cmidrule(lr){2-10} & \multirow{2}{*}{FS} & 61.97 (62.34/61.59) & 57.21 (61.73/53.33) & 56.77 (91.29/41.19) & 42.66 (88.67/28.09) & 60.89 (73.01/52.20) & 13.29 (86.31/7.20) & 42.27 (77.22/29.09) & 8.01 (4.60/31.06) \\ & & (0.00) & (0.00) & (0.00) & (0.00) & (0.00) & (0.00) & (0.00) & (0.00) \\ \midrule \multirow{12}{*}{IO Scheme, without Label Correction} & \multirow{2}{*}{MV} & 8.10 (85.41/4.25) & 8.14 (88.89/4.27) & 0.04 (100.00/0.02) & 6.68 (80.49/3.48) & 29.71 (70.29/18.84) & 0.00 (0.00/0.00) & 8.93 (81.29/4.72) & 0.00 (0.00/0.00) \\ & & (0.00) & (0.00) & (0.00) & (0.00) & (0.00) & (0.00) & (0.00) & (0.00) \\ \cmidrule(lr){2-10} & \multirow{2}{*}{WMV} & 0.00 (0.00/0.00) & 0.00 (0.00/0.00) & 0.00 (0.00/0.00) & 0.00 (0.00/0.00) & 0.46 (30.00/0.46) & 0.00 (0.00/0.00) & 0.00 (0.00/0.00) & 0.00 (0.00/0.00) \\ & & (0.00) & (0.00) & (0.00) & (0.00) & (0.00) & (0.00) & (0.00) & (0.00) \\ \cmidrule(lr){2-10} & \multirow{2}{*}{DS} & 49.24 (49.84/48.65) & 41.38 (42.86/40.00) & 71.84 (89.25/60.12) & 56.69 (83.23/42.98) & 29.91 (77.56/18.53) & 41.26 (77.45/28.12) & 51.10 (73.27/39.24) & 40.83 (42.68/39.14)\\ & & (0.00) & (0.00) & (0.00) & (0.00) & (0.00) & (0.00) & (0.00) & (0.00) \\ \cmidrule(lr){2-10} & \multirow{2}{*}{DP} & 7.74 (84.50/4.05) & 8.65 (94.44/4.53) & 0.04 (100.00/0.02) & 6.73 (100.00/3.48) & 30.45 (79.35/18.84) & 0.00 (0.00/0.00) & 8.73 (83.10/4.61) & 0.00 (0.00/0.00)\\ & & (0.00) & (0.00) & (0.00) & (0.00) & (0.00) & (0.00) & (0.00) & (0.00) \\ \cmidrule(lr){2-10} & \multirow{2}{*}{MeTaL} & 6.59 (88.72/3.42) & 7.57 (92.50/3.95) & 0.01 (20.00/0.00) & 6.73 (100.00/3.48) & 30.20 (78.71/18.68) & 0.00 (0.00/0.00) & 8.17 (82.22/4.30) & 0.00 (0.00/0.00)\\ & & (0.08) & (0.04) & (0.00) & (0.00) & (0.00) & (0.00) & (0.00) & (0.00) \\ \cmidrule(lr){2-10} & \multirow{2}{*}{FS} & 50.77 (81.11/36.95) & 45.44 (83.57/31.20) & 19.27 (88.91/10.80) & 42.87 (93.93/27.77) & 29.81 (78.95/18.38) & 0.00 (0.00/0.00) & 26.36 (83.18/15.66) & 27.28 (70.04/16.94) \\ & & (0.00) & (0.00) & (0.00) & (0.00) & (0.00) & (0.00) & (0.00) & (0.00) \\ \bottomrule \end{tabular} } \label{tab:seq_adapt} \end{table*}

\section{Implementation Details}

\subsection{Hardware and Implementation}

Our models are implemented in Python with PyTorch. For gradient-based optimization, we adopt the AdamW optimizer and a linear learning rate scheduler, and we early-stop training based on the evaluation metric on the validation set. For all the compared methods, we either re-implement them based on the officially released code or create an interface for calling their official implementations. For fine-tuning pre-trained language models, we use the checkpoints provided by HuggingFace\footnote{\url{https://huggingface.co/models}}. We use a pre-trained BERT model\footnote{\url{https://huggingface.co/bert-base-cased}} to extract features for textual classification datasets. For text classification datasets, we use the output embedding of the \texttt{[CLS]} token as the data feature; for relation classification, we follow R-BERT~\cite{wu2019enriching} and use the concatenation of the embeddings of \texttt{[CLS]} and the two entity tokens as the data feature. Other features, \eg, TF-IDF features or embeddings from other pre-trained language models, are also supported in \textsc{Wrench}\xspace. All experiments are run on CPUs or 64 Nvidia V100 GPUs (32GB VRAM) on Microsoft Azure.
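As an illustration of the feature extraction step described above, the following sketch extracts \texttt{[CLS]} embeddings with the HuggingFace \texttt{transformers} library; it shows the general recipe rather than \textsc{Wrench}\xspace's exact code.

\begin{verbatim}
# Sketch: extract [CLS] features for a text classification dataset with a
# pre-trained BERT model (general recipe, not the exact implementation).
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
model = AutoModel.from_pretrained("bert-base-cased").eval()

texts = ["wow, very useful comment", "CHECK OUT MY CHANNEL!!!"]
batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
with torch.no_grad():
    hidden = model(**batch).last_hidden_state  # (batch, seq_len, 768)
features = hidden[:, 0]  # the [CLS] embedding of each example
\end{verbatim}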
\subsection{Hyper-parameter Search Space}

For each model, we use grid search to find the best hyper-parameters on the validation set. For each trial, we repeat 3 runs with different initializations; for the final evaluation, we repeat 5 runs with different initializations. The search space follows the suggestions in the original papers and can be found in Table~\ref{tab:search}.

\begin{table*}[h]
\centering
\caption{The hyper-parameters and search space. Note that ConNet shares the search space of its other parameters with its backbone, \ie, LSTM-CRF/BERT-CRF.}
\ra{1.2}
\scalebox{0.67}{ \begin{tabular}{ l l l l } \toprule \textbf{Model} & \textbf{Hyper-parameter} &\textbf{Description} & \textbf{Range} \\ \midrule \multirow{3}{*}{MeTaL} & \texttt{lr} & learning rate & 1e-5,1e-4,1e-3,1e-2,1e-1\\ & \texttt{weight\_decay} & weight decay & 1e-5,1e-4,1e-3,1e-2,1e-1\\ & \texttt{num\_epoch} & the number of training epochs & 5,10,50,100,200\\ \midrule[0.05pt] \midrule[0.05pt] \multirow{3}{*}{DP} & \texttt{lr} & learning rate & 1e-5,5e-5,1e-4\\ & \texttt{weight\_decay} & weight decay & 1e-5,1e-4,1e-3,1e-2,1e-1\\ & \texttt{num\_epoch} & the number of training epochs & 5,10,50,100,200\\ \midrule[0.05pt] \midrule[0.05pt] \multirow{3}{*}{LogReg} & \texttt{batch\_size} & the input batch size & 32,128,512\\ & \texttt{lr} & learning rate & 1e-5,1e-4,1e-3,1e-2,1e-1\\ & \texttt{weight\_decay} & weight decay & 1e-5,1e-4,1e-3,1e-2,1e-1\\ \midrule[0.05pt] \midrule[0.05pt] \multirow{5}{*}{MLP} & \texttt{batch\_size} & the input batch size & 32,128,512\\ & \texttt{lr} & learning rate & 1e-5,1e-4,1e-3,1e-2,1e-1\\ & \texttt{weight\_decay} & weight decay & 1e-5,1e-4,1e-3,1e-2,1e-1\\ &\texttt{ffn\_num\_layer} & the number of MLP layers & 2 \\ &\texttt{ffn\_hidden\_size} & the hidden size of MLP layers & 100\\ \midrule[0.05pt] \midrule[0.05pt] \multirow{2}{*}{BERT} & \texttt{batch\_size} & the input batch size & 16,32\\ & \texttt{lr} & learning rate & 2e-5,3e-5,5e-5\\ \midrule[0.05pt] \midrule[0.05pt] \multirow{8}{*}{COSINE} & \texttt{batch\_size} & the input batch size & 32\\ & \texttt{lr} & learning rate & 1e-6,1e-5\\ & \texttt{weight\_decay} & weight decay & 1e-4\\ & $T$ & the period of updating the model & 50,100,200\\ & $\xi$ & the confidence threshold & 0.2,0.4,0.6,0.8\\ & $\lambda$ & the weight for confidence regularization & 0.01,0.05,0.1\\ & $\mu$ & the weight for contrastive regularization & 1\\ & $\gamma$ & the margin for contrastive regularization & 1\\ \midrule[0.05pt] \midrule[0.05pt] \multirow{9}{*}{Denoise} & \texttt{batch\_size} & the input batch size & 32,128,512\\ & \texttt{lr} & learning rate & 1e-4,1e-3,1e-2\\ & \texttt{weight\_decay} & weight decay & 0.0\\ & \texttt{alpha} & momentum term for temporal ensembling & 0.6\\ & \texttt{c1} & coefficient of the denoiser loss & 0.1,0.3,0.5,0.7,0.9\\ & \texttt{c2} & coefficient of the classifier loss & 0.1,0.3,0.5,0.7,0.9\\ & \texttt{c3} & coefficient of the unsupervised self-training loss & 1-\texttt{c2}-\texttt{c1}\\ &\texttt{ffn\_num\_layer} & the number of MLP layers & 2 \\ &\texttt{ffn\_hidden\_size} & the hidden size of MLP layers & 100\\ \midrule[0.05pt] \midrule[0.05pt] \multirow{11}{*}{LSTM-CRF} & \texttt{batch\_size} & the input batch size & 16,32,64\\ & \texttt{lr} & learning rate & 1e-2,5e-3,1e-3\\ & \texttt{weight\_decay} & weight decay & 1e-8\\ & \texttt{dropout} & dropout ratio & 0.0,0.5\\ &\texttt{word\_feature\_extractor} & the word feature extractor layers & LSTM,GRU\\ &\texttt{word\_embed\_dimension} & the embedding dimension of words & 100\\
&\texttt{LSTM/GRU\_hidden\_size} & the hidden size of LSTM/GRU layers & 200\\ &\texttt{num\_hidden\_layer} & the number of LSTM/GRU layers & 1\\ &\texttt{char\_feature\_extractor} & the character feature extractor layers & CNN\\ &\texttt{char\_embed\_dimension} & the embedding dimension of characters & 30\\ \midrule[0.05pt] \midrule[0.05pt] \multirow{5}{*}{BERT-CRF} & \texttt{batch\_size} & the input batch size & 8,16,32\\ & \texttt{lr} & learning rate & 2e-5,3e-5,5e-5\\ & \texttt{lr\_crf} & learning rate for the CRF layer & 1e-3,5e-3,1e-2\\ & \texttt{weight\_decay} & weight decay & 1e-6\\ & \texttt{weight\_decay\_crf} & weight decay for the CRF layer & 1e-8\\ \midrule[0.05pt] \midrule[0.05pt] \multirow{2}{*}{HMM} & $\gamma$ & redundancy factor & 0,0.1,0.3,0.5,0.7,0.9\\ & \texttt{num\_epoch} & the number of training epochs & 50\\ \midrule[0.05pt] \midrule[0.05pt] \multirow{5}{*}{CHMM} & \texttt{batch\_size} & the input batch size & 16,64,128\\ & \texttt{nn\_lr} & learning rate of the NN & 1e-3,5e-4,1e-4\\ & \texttt{hmm\_lr} & learning rate of the HMM & 1e-2,5e-3,1e-3\\ & \texttt{num\_pretrain\_epoch} & the number of pre-training epochs & 2,5\\ & \texttt{num\_epoch} & the number of training epochs & 50\\ \midrule[0.05pt] \midrule[0.05pt] \multirow{1}{*}{ConNet} & \texttt{n\_steps\_phase1} & the number of training steps in phase 1 & 200,500,1000\\ \bottomrule \end{tabular} } \label{tab:search} \end{table*}

\subsection{Parameters for Studies in Sec.~\ref{sec:generator}}
\label{sec:para_study}

\textbf{Fig.~\ref{fig:syn} (a)} We generate 10 labeling functions: 5 for the positive label and 5 for the negative label. The mean accuracy, mean propensity, and radius of propensity are set to 0.75, 0.1, and 0.0, respectively.

\textbf{Fig.~\ref{fig:syn} (b)} We generate 10 labeling functions: 5 for the positive label and 5 for the negative label. The mean accuracy, radius of accuracy, and radius of propensity are set to 0.75, 0.1, and 0.0, respectively.

\textbf{Fig.~\ref{fig:semi}} The minimum propensity of candidate LFs is 0.1. The minimum accuracy is set to the label prior plus 0.1, \eg, for LFs labeling the positive label, the minimum accuracy is $P(y=1)+0.1$. For ($n$, $m$)-gram features, $n$ is set to 1 and $m$ to 2.
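For concreteness, the following is a hypothetical sketch of how a single synthetic labeling function with a target mean accuracy and propensity could be generated for a binary task; the function and parameter names are assumptions for illustration, not the actual generator implementation.

\begin{verbatim}
# Hypothetical generator for a synthetic LF with target accuracy/propensity
# on a binary task; -1 denotes abstention. Names are illustrative only.
import numpy as np

rng = np.random.default_rng(0)

def make_synthetic_lf(y, label, accuracy=0.75, propensity=0.1):
    n = len(y)
    k = int(propensity * n)         # LF fires on ~propensity of the data
    k_correct = int(accuracy * k)   # ... and is right on ~accuracy of those
    right = rng.choice(np.flatnonzero(y == label), k_correct, replace=False)
    wrong = rng.choice(np.flatnonzero(y != label), k - k_correct,
                       replace=False)
    out = np.full(n, -1)
    out[np.concatenate([right, wrong])] = label
    return out

y = rng.integers(0, 2, size=1000)
# 10 LFs: 5 voting for the positive label, 5 for the negative label
L = np.stack([make_synthetic_lf(y, lbl) for lbl in [1]*5 + [0]*5], axis=1)
\end{verbatim}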
{ "redpajama_set_name": "RedPajamaArXiv" }
7,630
HOLY HOLE IN THE GROUND, BATMAN! IT'S A VERBOSE UPDATE OF MY HONOURS PROJECT - POSSIBLY THE MOST IMPORTANT WORK ON VIDEOGAME MUSIC OF OUR TIMES! (Sorry Caps). In my project I propose to examine the role that music plays in the genre of videogames known as First Person Shooters (FPS). Specifically I will examine in detail the score of the Xbox game Halo 2, published by Microsoft Game Studios, and the complex relationships with other elements of the game. The Halo series of games are the Videogame equivalent of the Hollywood Blockbuster – they have some of the highest production values of the current generation of games and spend more time under development than their filmic counterparts. Videogame analysis and critique, being a burgeoning field, has been developing a vocabulary and methodology with which to approach Videogames and their methods of creating meaning and story. Most analyses and critiques can be placed in one of two camps; those that study a Videogames efforts at inscribing a narrative in a game, identified as 'Narratology', often drawing on methods of analysis for other media, such as films, literature and especially hypertext and new media. The alternative, calling their study of games as systems unrelated to story 'Ludology', is exactly that, to treat Videogames as systems and representations, examining the rules and laws that govern the simulation. My analysis of Halo and the music of the series involves a number of key elements. I will apply traditional musical analysis techniques to the music of Halo 2, looking at the function played by musical devices such as 'Leitmotif', as well as examining the effect of style and genre connotations. I will also apply a method of analysis that quantifiably examines the music in terms of 'what plays when' and try to find some motivation as to 'why' it does so. I wish to also apply an examination of the evolution of the music across the series, including key musical motifs and pieces, and view it as not only a development of the composers 'voice' but also as a representation of the progression of the themes of the games. In a non traditional vein, I wish to also attempt to identify important salient features of the relationship between the music and other non-musical aspects of the game. For example, I wish to analyse specific levels of the game by using concepts such as 'level flow', 'progression' and 'optimal paths or strategies', in an attempt to uncover meaningful relationships to the music. One tool I will use to do this is a world famous video recording of a 'Speed Run' in which the player, an American film student and Halo player, completes the game on the hardest difficulty setting (called 'Legendary difficulty') and never once dies, providing an insight into the importance of elements of the structure and layout of the levels, as well as combat techniques. The most reduced and implicit form of feedback that the player receives from the game is information about the player's state; either the player is still alive and can continue on 'progressing' through the game to its conclusion, or the player is dead and must try again. This information reveals inarticulate or tacit aspects of the game that the designers intentionally and unintentionally included in the game, and which I believe are a vital aspect of the 'meaning' created by the game. 
The crux of the rationale for this approach is a belief that Halo 2 locates much of its created meaning (in a largely non-narrative sense) outside of traditional narrative structures and devices such as dialogue, narration and cinematic direction. In this way, I believe that I will see a parallel to meanings and ideas created by and revealed in the music. I will also apply semi-filmic analyses, adapting concepts such as Mimesis and Diegesis, along with analyses of such things as the effects of geography, art direction and the Videogame equivalent of camera angles to specific levels and sections of the game. Throughout I will apply Ian Bogost' theory of 'unit operations' as an approach to Videogame criticism, as well as his idea of 'simulation fever', especially in regards to the ideas of Mimesis and Diegesis in Videogames – whether videogames 'tell' the story to the player (Diegesis) or 'show' the story to the player by mimicking actions (Mimesis). To explain the relevance of this distinction, I quote Wikipedia's entry on Diegesis: When we come to a modern consideration of the cinema, it may appear that the medium is a straight-forward example of mimetic storytelling--but it is not. In terms of classical poetics, the cinema is an epic form that utilizes dramatic elements; this is determined by the technologies of the camera and editing. Even in a spatially and temporally continuous scene (mimicking the theatrical situation, as it were), the camera chooses where to look for us. In a similar way, editing causes us to jump from one place (and time sometimes) to another, whether it be somewhere else in the room, or across town. This jump is a form of narration; it is as if a narrator whispers to us: "meanwhile, on the other side of the forest". By this definition it would seem at first glance that First Person Shooter games (and Halo in particular) are generally 'Diegetic' with only a short amount of 'cut scenes' at the beginning and end of levels determining what the player has to look at. However, if we consider Bogost's notion of simulation fever, which says that, while we generally view simulations, and Halo could be considered as such, as 'objective' representations of what we are simulating, they are in actual fact necessarily 'subjective' by virtue of their nature as reductive. Even when a simulation renders every physical aspect of an environment or situation in detail, it still does not include such things as concepts of value, such as the value of human life, which Bogost says by referencing a military simulation of a sarin nerve gas cloud modelling simulation. While it arguable simulates accurately the progression of the gas on a University campus and how to evacuate quickest, it does not represent such things as who to prioritise for evacuation; the Nobel prize winning Professors or the students with years of possible contribution in their lives still left unrealised. In this way, by choosing what Halo includes and excludes in its 'simulation', Halo 2 could certainly be considered also Mimetic, and this seemingly contradictory state will be explored. To sum up my project, I am looking at the music of Halo 2 and placing an emphasis on identifying other aspects of meaning creation in the non-musical elements of the game, applying current theories and concepts of videogame criticism as well as a number of adapted methods from other mediums. 
I wish to outline the relationships between the music and these other elements and hopefully persuade the reader that a Videogame about a super-human soldier in the future whose task it is to save the world is a work worthy of investigation and which locates its meaning and value in places non typical to media such as films and novels. Word! Did you read it all? If so, give yourself a pat on the back for getting through over 1,000 of the finest words on Teh Interwebs without exploding. Feel free to comment / flame / ignore this post. Oh, and the thing about Heiddeger I mentioned last week? That's so last week. It's MIMESIS now. - Ben. Posted by Ben Abraham at 11:34 pm 7 comments: Labels: broken promises, Honours, Music, Project, Proposal, Videogames 2 months of catchup! Wow, no posts since December! How to cover such a vast backlog of things to note and write down? --== BULLETIN POINTS! ==-- WoW. Lots of WoW. Christmas: New PC, Dual Core for all my musical applications. Turned 21 in Vietnam, couldn't ask for a more interesting day! The rest of Vietnam was good, very mind expanding and eye opening. Assassins Creed - that was fun. For about 3 days... pity. More WoW. Uni begins - honours is mind expanding. Well, that's about it (ha!) but I've got a really cool idea for a post/mini essay that I'm going to write up in the next few days (hopefully) about practice led research, inspired by an article by Prof. Barbara Bolt, who was in turn inspired by Heideggers notions of Handlability. Tying into that is the idea of 'Semiotic Domains', a topic upon which I have been enlightened by a chapter in the 'The Game Design Reader: A Rules of Play Anthology', edited by Salem and Zimmerman, though the chapter is not theirs. I think it has practical applications for the study of videogames. I'm sure there's more and i've totally forgotten a lot of important details, but it'll all come good in the end. Posted by Ben Abraham at 8:46 pm No comments:
{ "redpajama_set_name": "RedPajamaCommonCrawl" }
8,297
{"url":"https:\/\/www.gradesaver.com\/textbooks\/math\/algebra\/algebra-1-common-core-15th-edition\/chapter-1-foundations-for-algebra-1-5-adding-and-subtracting-real-numbers-practice-and-problem-solving-exercises-page-36\/82","text":"## Algebra 1: Common Core (15th Edition)\n\n$\\sqrt 21$ is an unending decimal, so it must only be an irrational number.","date":"2018-08-16 01:24:13","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 1, \"mathjax_display_tex\": 0, \"mathjax_asciimath\": 0, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.7708120942115784, \"perplexity\": 1811.2974805684305}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2018-34\/segments\/1534221210387.7\/warc\/CC-MAIN-20180815235729-20180816015729-00438.warc.gz\"}"}
null
null
Die Liste der Biografien führt alle Personen auf, die in der deutschsprachigen Wikipedia einen Artikel haben. Dieses ist eine Teilliste mit 2 Einträgen von Personen, deren Namen mit den Buchstaben "Prl" beginnt. Prl Prla Prlainović, Andrija (* 1987), serbischer Wasserballer Prli Prlić, Jadranko (* 1959), bosnisch-kroatischer Politiker und Kriegsverbrecher
{ "redpajama_set_name": "RedPajamaWikipedia" }
6,369