\section{Introduction} CU Virginis (HD124224, hereafter CU Vir) is unusual in that it is a stable source of coherent, polarised radio emission. Observations in the 13, 20 and 50\,cm radio bands have revealed one or more short duty-cycle emission peaks per rotation period (0.5207 days) that re-appear at the same rotation phases \cite{tll+00,tlu+08,gs10}. These peaks are distinguished from the quiescent emission also observed from the source by their 100$\%$ circular polarisation, short durations ($<$1 hour) and flux densities of up to a factor of six above the quiescent levels. Together, we refer to the peaks as the CU Vir pulse. As a magnetic chemically peculiar A0 (sometimes B9) star, CU Vir is thought to possess an offset dipole magnetic field \cite{d+52,h+97}, with a pole strength of $\sim$3\,kG \cite{tll+00}, misaligned from the rotation axis \cite{bl+80}. Various stellar parameters are summarised in Table~\ref{table1}, with errors in the last decimal places given in parentheses. The pulses are thought to be emitted in the vicinity of one magnetic pole \cite{tll+00}. The pulse emission geometry is strikingly similar to the canonical model for radio pulsars (e.g. Manchester \& Taylor 1977)\nocite{mt+77}. The beaming of the CU Vir pulsed emission, the 100$\%$ circular polarisation of the pulse, the high brightness temperature of $>$10$^{12}$\,K and the flat spectrum between 13 and 20\,cm are all indicative of coherent emission from an electron-cyclotron maser (ECM) mechanism \cite{tll+00,kgb+07,tlu+08}. ECM emission is characterised by narrow bandwidths corresponding to either the fundamental or any of the first few harmonics of the local cyclotron frequency, \begin{equation} \nu_{B}=\frac{q_{e}B}{2\pi m_{e}}, \end{equation} where $q_{e}$ and $m_{e}$ are the elementary charge and electron mass respectively, and $B$ is the magnetic field strength \cite{md+82}. \nocite{s+98} \begin{table} \begin{center} \caption{Stellar parameters of CU Vir from Trigilio et al. 
(2000) and SIMBAD.} \label{table1} \begin{tabular}{ll} \hline Property & Value \\ \hline Position (J2000) & 14:12:15.80, +02:24:34.0 \\ Spectral type & A0p ($\alpha^{2}$ CVn) \\ Distance & 80(6) pc \\ Rotation period & 0.5207 days \\ Stellar radius & 2.2(2) solar radii\\ Magnetic field strength & 3.0(2)$\times10^{3}$ G \\ Mass (1) & $\sim$3 solar masses \\ \hline \end{tabular} \end{center} \medskip (1) This mass estimate is from St{\c e}pie{\'n} (1998). \end{table} By comparing 20\,cm observations of two pulses in 1999 with a pulse recorded in 1998, Trigilio et al. (2008) measured a radio period that was 1.2\,s slower than the latest optical rotation period \cite{prm+98}. The Pyper et al. (1998) rotation ephemeris for CU Vir was derived using data extending up to 1997, and no new optical rotation ephemeris has since been published. Pyper et al. (1998) also found that the optical period itself had decreased by $\sim$2\,s in 1984. These periodicity `glitches' are not well understood. Here we present new observations of CU Vir pulse profiles in the 13 and 20\,cm bands. In \S2 we detail the observations and data reduction. We describe our results, for both our data and for archival data, in \S3, and confirm the radio period discrepancy reported by Trigilio et al. (2008). In \S4 we discuss implications for the CU Vir magnetic field, and, in \S5, we outline various hypotheses for the period discrepancy. We present our conclusions and discuss future work in \S6. \section{Observations and data analysis} We observed CU Vir on 2008 October 31 with the Australia Telescope Compact Array (ATCA). The array of six 22-metre dishes, in its fully-extended 6A configuration\footnote{http://www.narrabri.atnf.csiro.au/observing/}, recorded data in 32 channels across 128\,MHz bandwidths centred at 1.384\,GHz (20\,cm) and 2.368\,GHz (13\,cm). Complex cross-correlations (visibilities), integrated over 10\,s intervals, were recorded for all baselines in all Stokes parameters. 
The absolute flux density scale and the frequency response over the receiver bandpasses were characterised using the radio galaxy PKS B1934$-$638. The gains of the individual antennas, atmospheric and instrumental path-length variations for each antenna and signal path, and the cross-talk between the orthogonal linear feeds in each antenna were calibrated using five-minute observations of the radio galaxy PKS B1416$-$067 at 20-minute intervals during the observations. The total observing time on CU Vir was 7.75 hours. We pointed the antennas 10\arcsec~South of CU Vir to avoid source contamination by `non-closing' correlator offsets. We reduced our data using the MIRIAD set of software routines \cite{stw95}. In order to optimise our source-subtraction technique, the four shortest baselines in each configuration were removed from the data, along with radio-frequency interference. Multi-frequency synthesis images with extent four primary beam full-width half-maxima were made of the source field in the 13 and 20\,cm bands. Attenuation caused by the spatial response of the primary beam was corrected for. All sources detected at greater than five standard deviations of the noise in the images were subtracted. Our subtraction technique involved producing CLEAN models of each source and subtracting them from the visibility data using the MIRIAD task UVMODEL. We then shifted the phase centre of the visibilities to the CU Vir position, and inspected the time-series of the real components of the visibilities, averaged over two-minute intervals, for Stokes V pulses. Having identified the pulse durations, we imaged the off-pulse emission, produced CLEAN models in the 13 and 20\,cm bands and subtracted them from the visibility data. Lightcurves of the pulsed emission were then produced in Stokes I and V from the source-subtracted visibility data by averaging the real visibility components over two-minute intervals. 
Stokes I lightcurves of the off-pulse emission were also produced by averaging the real visibility components of the off-pulse datasets over 50 minute intervals and the errors were scaled appropriately. We repeated our data reduction procedure for the CU Vir datasets (obtained from the Australia Telescope Online Archive\footnote{http://atoa.atnf.csiro.au}) described in Trigilio et al. (2008). In summary, we analysed observations of CU Vir from three epochs: Epoch A (1999 May 29), Epoch B (1999 August 29) and Epoch C (2008 October 31). Data from Epochs A and B were collected by Trigilio et al. (2008), and the Epoch C observations were ours. Details of all observations are summarised in Table~\ref{table2}. \begin{table} \begin{center} \caption{Basic parameters and detections from the three datasets analysed. \textit{Q}: quiescent detections. \textit{P}: Number of peaks detected. The Stokes I quiescent ($Q_{20,13}$) and the Stokes V peak ($P_{20,13}$) flux densities (in mJy) for all peaks detected are also given. The errors in the peak flux densities are approximately 1\,mJy. } \label{table2} \begin{tabular}{lllll} \hline & Epoch A & Epoch B & Epoch C \\ \hline Date (UT) & 29/05/99 & 29/08/99 & 31/10/08 \\ Timespan (hrs) & 9.3 & 9 & 9 \\ ATCA config. & 6A & 6D & 6A \\ 20 cm $Q$ & Yes & Yes & Yes \\ 13 cm $Q$ & Yes & Yes & Yes \\ 20 cm $P$ & 2 & 2 & 2 \\ 13 cm $P$ & 1 & 1 & 2 \\ \hline $Q_{20}$ & $3.4(3)$ & $2.3(2)$ & $2.5(3)$ \\ $P_{20}$ & 5, 13 & 3, 6 & 11, 12 \\ $Q_{13}$ & $3.4(2)$ & $3.0(2)$ & $2.9(2)$\\ $P_{13}$ & 0, 14 & 0, 6 & 12, 8 \\ \hline \end{tabular} \end{center} \end{table} \section{Results} \begin{figure} \centering \includegraphics[height=8cm,angle=-90]{fig1.ps} \caption{Epoch C pulses and quiescence from CU Vir. From top to bottom: 13\,cm Stokes I (dotted) and V pulses, 20\,cm Stokes I (dotted) and V pulses, 13\,cm Stokes I off-pulse flux densities and 20\,cm Stokes I off-pulse flux densities. 
The pulse lightcurves are averaged in two-minute bins, and the off-pulse lightcurves in 50-minute bins. The off-pulse measurements have errors of 0.4\,mJy. } \end{figure} We observed two peaks of 100$\%$ right-circularly polarised emission from CU Vir during Epoch C at both 13 and 20\,cm, as well as time-variable quiescent emission. The pulse and quiescent lightcurves are shown in Figure 1. No significant linear polarisation was detected in the pulses, and the quiescent emission was found not to be significantly polarised. Our analysis method reproduces the results of Trigilio et al. (2008) for Epochs A and B. In Figure 2 we plot the Stokes V lightcurves for all three epochs aligned according to the rotation ephemeris of Pyper et al. (1998). The average quiescent flux densities are presented in Table~\ref{table2} along with the peak flux densities for all epochs. Our Epoch C observations clearly show two peaks at both 13 and 20\,cm, in contrast to the lone 13\,cm peaks in the Epoch A and B observations (see Figure 2). The Epoch C peak separations are smaller in the 13\,cm band (separation of $\sim$4\,hr) than in the 20\,cm band (separation of $\sim 5$\,hr), but the midpoints between the peaks in both bands occur at the same time. This implies that, if pulse arrival times at different frequencies need to be compared, the ``arrival-time'' of the pulse should be taken as the midpoint between the peaks. Figure~2 clearly shows that the rotational ephemeris of Pyper et al. (1998) is not able to align the observed pulses. In order to obtain an accurate ephemeris for CU Vir we obtained pulse times-of-arrival (TOAs), as described above, and used standard pulsar timing software \cite{hem+06} to fit simple periodicity models to these TOAs using uniform weighting. The resulting best-fit period was $44989.967(8)$\,s $= 0.52071721(9)$\,days. The post-fit rms timing residual was 46\,s. No significant period derivative could be determined. 
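The scale of the misalignment can be checked with simple arithmetic (a back-of-the-envelope sketch, not the pulsar-timing fit actually performed): over the baseline between the earliest and latest TOAs, a per-rotation period excess of 1.221\,s accumulates into a drift of a few hours relative to the optical ephemeris.

```python
# Sketch only: accumulated drift of the radio pulses relative to the
# Pyper et al. (1998) optical ephemeris, using numbers quoted in the text.
SEC_PER_DAY = 86400.0
P_RADIO = 44989.967                          # best-fit radio pulse period [s]
DELTA_P = 1.221                              # radio period minus optical period [s]
MJD_FIRST, MJD_LAST = 50966.159, 54770.007   # earliest and latest TOAs [MJD]

# Rotations elapsed over the ~10.4 yr TOA baseline (~7305 rotations).
n_rot = (MJD_LAST - MJD_FIRST) * SEC_PER_DAY / P_RADIO

# Drift accumulated at DELTA_P seconds per rotation, in hours.
drift_hr = n_rot * DELTA_P / 3600.0
```

With these numbers the 2008 pulses arrive roughly 2.5 hours away from where the optical ephemeris places them, of the same order as the phase offsets visible when the lightcurves are folded on the Pyper et al. (1998) period.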
Our measured period is 1.221(8) seconds slower than that of Pyper et al. (1998), which is consistent with that determined by Trigilio et al. (2008) using pulses separated by $\sim$1\,yr. \begin{figure} \centering \includegraphics[height=7cm,angle=-90]{fig2.ps} \caption{Illustration of the varying pulse phases with respect to the ephemeris of Pyper et al. (1998). The pulse profiles from Epochs A, B and C are shown in the bottom, middle and top panels respectively. The 13\,cm Stokes V pulses (dashed lines) are placed at +15\,mJy to differentiate them from the 20\,cm Stokes V pulses (solid lines).} \end{figure} \begin{table} \begin{center} \caption{TOAs derived from all lightcurves of CU Vir, with references.} \label{table5} \begin{tabular}{llll} \hline TOA (MJD) & $\nu_{obs}$ & Epoch & Reference \\ & (GHz) & & \\ \hline 50966.159(8) & 1.425 & June 98 & Trigilio et al. (2000) \\ 51327.535(3) & 1.380 & A & Trigilio et al. (2008) \\ 51419.189(3) & 1.380 & B & Trigilio et al. (2008) \\ 54770.007(3) & 1.380 & C & this paper \\ 54770.007(3) & 2.368 & C & this paper \\ \hline \end{tabular} \end{center} \end{table} \section{The magnetic field of CU Vir} Spectrally peculiar A and B stars (Ap/Bp stars), such as CU Vir, have long been known to be highly magnetic \cite{b47}, commonly with large dipolar components (e.g. Borra $\&$ Landstreet 1980). A leading hypothesis for the field origins is that of `frozen-in' fossil fields from early evolutionary stages \cite{dl09}. The frozen-in field paradigm, numerically simulated by Braithwaite $\&$ Nordlund (2006)\nocite{bn06}, involves an unstable protostellar magnetic field, concentrated in the core, that quickly stabilises over a few Alfv{\'e}n timescales, given by $\tau_{A}=\frac{R_{*}}{v_{A}}$, where $R_{*}$ is the stellar radius and $v_{A}$ is the Alfv{\'e}n speed. For Ap stars such as CU Vir, $\tau_{A}\sim10$\,yr. 
For most of the main-sequence life of the star, the field then slowly expands outwards through ohmic diffusion, creating a slowly-increasing atmospheric poloidal field component. Simultaneous observations of the CU Vir pulse at many radio frequencies could provide interesting insights into the magnetic field structure. ECM emission frequencies directly map to magnetic field strengths (see Equation 1), and the emission is tightly beamed at fixed angles to field lines determined by the kinematics of the radiating electrons \cite{md+82}. Our dual-frequency observations of the CU Vir pulse show a time difference of $\sim$30 minutes between corresponding peaks. This translates to a difference in the angle of emission with respect to the magnetic axis of 14$^{\circ}$. Assuming ECM emission at the fundamental cyclotron frequency, the 13\,cm and 20\,cm bands correspond to emission from magnetic field regions of 850\,G and 500\,G respectively. If the pulse is emitted from field lines occupying a narrow range of magnetic co-latitudes \cite{ltb+06,tlu+08}, the tangents to the field lines at field strengths of 850\,G and 500\,G must also differ in angle to the magnetic axis by 14$^{\circ}$. Assuming a dipole field for CU Vir, this occurs for field lines that intersect the magnetic equator at radii of $\sim1.5R_{*}$. This is much closer than the 12-17$R_{*}$ predicted in the magnetospheric model of Leto et al. (2006) for field lines along which the radiating electrons propagate. Hence, we suggest that either the magnetic field is more curved than for a pure dipole in the polar regions, or that the Leto et al. (2006) model for the CU Vir magnetosphere does not adequately account for the CU Vir pulses. We are conducting wide-band radio studies of the CU Vir pulse to further probe the magnetic field structure in the pulse emission regions. 
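The band-to-field mapping above follows directly from Equation 1. A minimal numerical check, assuming emission at the fundamental and taking the ATCA band centres of 1.384 and 2.368\,GHz (the frequencies listed in the TOA table):

```python
import math

Q_E = 1.602176634e-19    # elementary charge [C]
M_E = 9.1093837015e-31   # electron mass [kg]

def field_from_fundamental(nu_hz):
    """Invert Equation 1, nu_B = q_e B / (2 pi m_e); return B in gauss."""
    b_tesla = 2.0 * math.pi * M_E * nu_hz / Q_E
    return b_tesla * 1.0e4   # 1 T = 10^4 G

b_20cm = field_from_fundamental(1.384e9)  # 20 cm band centre
b_13cm = field_from_fundamental(2.368e9)  # 13 cm band centre
```

This reproduces the field strengths of $\sim$500\,G and $\sim$850\,G quoted for the 20\,cm and 13\,cm emission regions respectively.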
\section{The anomalous radio periodicity} The offset between the radio pulse- and optical variability-periods of CU Vir can be interpreted in a variety of ways. Pyper et al. (1998) reported a reduction in the optical period of CU Vir of $\sim2$\,s that occurred in 1984. This was derived by fitting different periods to time-resolved spectrophotometric data gathered before and after 1984. A possible interpretation of the period discrepancy between the radio pulse data taken after 1998 and the latest ephemeris published by Pyper et al. (1998) is that the CU Vir rotation period has again decreased, sometime between the epoch of the last data analysed by Pyper et al. (1998) (May 1997) and the Epoch A radio observations. Trigilio et al. (2008) suggested that the loss of all the confined magnetospheric mass, as modelled by Havnes $\&$ Goertz (1984)\nocite{hg84}, at the epochs of the period changes could explain the apparent 0.2\,Myr spin-down timescale. Besides CU Vir, some Ap/Bp stars have been observed to have steadily reducing rotation periods \cite{toc+10}, but with spin-down timescales of $\sim$1\,Myr. Such spin-down timescales are predicted by simulations of steady angular momentum loss from Ap/Bp stars to the magnetic field and magnetically-confined wind, with episodes of confinement-breaking during which the magnetosphere is emptied \cite{uot09}. In contrast to the suggestion of Trigilio et al. (2008), the emptying episodes are associated with \textit{less} angular momentum loss, as the star can no longer lose angular momentum to its surroundings. These simulations cannot reproduce the apparent spin-down of CU Vir. We therefore explore other possibilities for the CU Vir period discrepancy. 
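For orientation, the $\sim$0.2\,Myr spin-down timescale quoted above can be reconstructed in order of magnitude (a rough sketch under our own assumptions: the $\sim$2\,s change around 1984 and the 1.221\,s post-1997 change are treated as a cumulative slow-down over an assumed $\sim$14\,yr baseline):

```python
P = 44989.967                 # rotation period [s]
DELTA_P_TOTAL = 2.0 + 1.221   # assumed cumulative period increase [s]
BASELINE_YR = 14.0            # assumed span between the two period changes [yr]

p_dot = DELTA_P_TOTAL / BASELINE_YR   # mean slow-down rate [s/yr]
tau_myr = P / p_dot / 1.0e6           # characteristic timescale P / Pdot [Myr]
```

With these assumptions the characteristic timescale is $\sim$0.2\,Myr, consistent with the value attributed to Trigilio et al. (2008); the point of the estimate is only that it is far shorter than the $\sim$1\,Myr timescales seen in other Ap/Bp stars.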
\subsection{A drifting pulse emission region} If we assume that the CU Vir rotation period has \emph{not} changed, then two possibilities exist: either the pulse periodicity we measure is real and represents a stable, persistent effect, or the periodicity is coincidental and future radio pulse measurements will not follow our fitted pulse period. We first consider the former case, and the latter case in \S\ref{sec:unstable}. Our measured period discrepancy could imply a non-fixed emission region that steadily drifts in azimuth about the rotation axis. The rate of drift corresponds to the period difference, $\sim$1.2\,s, per optical period. Thus, the drift period is approximately 53 years. This drift could be caused by a mechanism that is analogous to the differential rotation in the solar magnetosphere \cite{s89}. The emission region might also be coupled to the orbit of an object with a 53 year orbital period. This would cause the emission region to drift about the orbital axis. Whatever the relative orientation of the orbital angular momentum axis and CU Vir rotation axis may be, a systematic change with time in the pulse profile, both in shape and frequency characteristics, is expected because the emission region would be drifting relative to the magnetic field. Keplerian dynamics place such an object at a radius of approximately 20\,AU. Such a system could be directly analogous to the Jupiter-Io interaction, where the orbital phase of Io around Jupiter is strongly correlated with the occurrence of Jovian decametric emission \cite{b64}. \subsection{An unstable emission region}\label{sec:unstable} The discrepant radio pulse periodicity could be ascribed to coincidence, and could be indicative of an unstable emission region. In this interpretation, future measurements of the radio lightcurve will not fit the currently measured pulse period. Pulse shape variability is also prevalent among the only other stable emitters of coherent radio pulses: radio pulsars. 
Indeed, single pulses from pulsars vary greatly in both structure and power from pulse to pulse. It has however been shown that pulsars have extremely stable characteristic pulse profiles, formed by averaging large numbers of individual pulses together \cite{mt+77}. A similar average pulse profile for CU Vir, attempted by Kellett et al. (2007)\nocite{kgb+07}, will be useful in ascertaining the emission region structure and stability, as well as aiding in pulse timing. \section{Conclusions and future work} For more than a decade, CU Vir has been known to be unique among main-sequence stars in producing strong, periodic peaks of coherent radio emission. Our observations reveal, for the first time, twin peaks in the CU Vir pulse profile at both 13 and 20\,cm. While the 13\,cm peak separation is one hour less than the 20\,cm peak separation, the midpoints of the peaks occur simultaneously in both bands. We show that the arrival-time difference between the 13\,cm and 20\,cm peaks could indicate that the field structure is more complex than a pure dipole. We demonstrate that a characteristic pulse arrival time can be determined from the midpoint of the peaks in the profile. Using four arrival times derived with this technique from archival data as well as our own observations, we find that the radio pulses fit a periodicity that is 1.221\,s slower than the most recently published optical rotation period. This confirms, over a 10 year period, the initial trend evinced by Trigilio et al. (2008) using pulses separated by one year. We suggest that, in contrast to the explanations of Trigilio et al. (2008) for the period discrepancy, the anomalous radio periodicity could be caused by a drifting or unstable emission region. Targeted observations can reveal the cause of the period discrepancy. We plan a simultaneous measurement of a radio pulse and the optical lightcurve of CU Vir to determine whether a period change has indeed occurred. 
If a mass-loss event is the cause of the period change, a detached mass shell could be directly detected and its shape measured using infrared or optical interferometry. Thermal emission from electron-hydrogen atom collisions could be expected from a cooling mass shell \cite{tt+06}. Hydrogen recombination lines could also be a significant emission component, particularly at optical and near-infrared wavelengths. Furthermore, if a mass-loss event was associated with the 1984 optical period reduction, a much bigger shell might also be visible. If the pulse emission region is coupled to the orbit of a companion, sensitive radio very long baseline interferometry could detect radio emission from plasma flows between the companion and CU Vir, as well as possible reflex motion of CU Vir. Conventional planet detection techniques \cite{j+09}, such as time-resolved optical photometry and radial velocity measurements, are not applicable to CU Vir given its large degree of intrinsic variability. However, long-period binarity could be indicated by variations in the timing of the extrema of the optical lightcurve caused by gravitational effects in the binary. Despite the many remarkable properties of CU Vir, including its close proximity to the Earth and its fast rotation rate, it is unlikely that it will remain unique as a source of radio pulses. Future large-area continuum surveys by next-generation radio telescopes, such as the Australian Square Kilometre Array Pathfinder, the Karoo Array Telescope and eventually the Square Kilometre Array will potentially find many similar objects, leading to further insights into this fascinating class of radio transient. \section*{Acknowledgements} We thank the referee, Ian Stevens, for many valuable suggestions. We are also grateful for the advice and insight of C. Trigilio on this letter. 
The Australia Telescope Compact Array is part of the Australia Telescope which is funded by the Commonwealth of Australia for operation as a National Facility managed by CSIRO. This research has made use of the SIMBAD database, operated at CDS, Strasbourg, France. GH is supported by an Australian Research Council QEII Fellowship (project \#DP0878388).
// Boost.Geometry Index
//
// Abs of difference
//
// Copyright (c) 2011-2013 Adam Wulkiewicz, Lodz, Poland.
//
// Use, modification and distribution is subject to the Boost Software License,
// Version 1.0. (See accompanying file LICENSE_1_0.txt or copy at
// http://www.boost.org/LICENSE_1_0.txt)

#ifndef BOOST_GEOMETRY_INDEX_DETAIL_ALGORITHMS_DIFF_ABS_HPP
#define BOOST_GEOMETRY_INDEX_DETAIL_ALGORITHMS_DIFF_ABS_HPP

#include <cmath>

#include <boost/mpl/bool.hpp>
#include <boost/type_traits/is_integral.hpp>

namespace boost { namespace geometry { namespace index { namespace detail {

// Integral types: branch on the ordering to avoid underflow in v1 - v2
// for unsigned types.
template <typename T>
inline T diff_abs_dispatch(T const& v1, T const& v2,
                           boost::mpl::bool_<true> const& /*is_integral*/)
{
    return v1 < v2 ? v2 - v1 : v1 - v2;
}

// Floating-point types: plain fabs of the difference.
template <typename T>
inline T diff_abs_dispatch(T const& v1, T const& v2,
                           boost::mpl::bool_<false> const& /*is_integral*/)
{
    return ::fabs(v1 - v2);
}

template <typename T>
inline T diff_abs(T const& v1, T const& v2)
{
    typedef boost::mpl::bool_<
        boost::is_integral<T>::value
    > is_integral;
    return diff_abs_dispatch(v1, v2, is_integral());
}

}}}} // namespace boost::geometry::index::detail

#endif // BOOST_GEOMETRY_INDEX_DETAIL_ALGORITHMS_DIFF_ABS_HPP
Wacky Worlds Creativity Studio is an educational video game for the Sega Mega Drive, produced and developed by Sonic Team (a subdivision of the Japanese company Sega), as a follow-up to the well-known and popular game Art Alive!.

References
Power Sonic

Categories: Sega Mega Drive games; Sega video games; Educational video games; 1994 video games
import { NextFunction, Request, Response } from 'express'

export function asyncMiddleware (fn) {
  return (req: Request, res: Response, next: NextFunction) => {
    Promise.resolve(fn(req, res, next)).catch(next)
  }
}
Q: Web Service doesn't work when I try to update the android widget if the app is killed

We are developing an Android widget for a Xamarin.Forms application. The widget updates and gets data from the web service when the app is in the background, but stops working when the app is killed/closed. I have followed this article for developing this widget: Xamarin: Android Widget with timer, stops when app killed.

I want to update the widget when the user clicks on the Refresh button. If I add hardcoded data for the text views and click Refresh, it updates the time, but it doesn't work if I assign the web service result data to the text views. I have added the internet permission in AndroidManifest.xml.

Is there a way I can get the data from the web service even when the app is closed? Or am I perhaps missing some permission?

AppWidget.cs:

    public static class WidgetConsts
    {
        public const string DebugTag = "com.myapp.WIDGET";
        public const string ActionWakeup = "com.myapp.WIDGET_WAKEUP";
        public const string ActionWidgetUpdate = "android.appwidget.action.APPWIDGET_UPDATE";
        public const string ActionWidgetDisabled = "android.appwidget.action.APPWIDGET_DISABLED";
    }

    [BroadcastReceiver]
    [IntentFilter(new string[] { WidgetConsts.ActionWakeup })]
    public class AlarmReceiver : BroadcastReceiver
    {
        public override void OnReceive(Context context, Intent intent)
        {
            if (intent.Action.Equals(WidgetConsts.ActionWakeup))
            {
                Log.Debug(WidgetConsts.DebugTag, "Wakeup alarm called");
                if (AppWidget.widgetTimer == null)
                {
                    Log.Debug(WidgetConsts.DebugTag, "Widget updating does not run, enforcing update...");
                    AppWidget.UpdateAppWidget(context);
                }
                else
                {
                    Log.Debug(WidgetConsts.DebugTag, "Widget updating runs, no action needed");
                }
            }
        }
    }

    [BroadcastReceiver]
    [IntentFilter(new string[] { WidgetConsts.ActionWidgetUpdate })]
    [MetaData("android.appwidget.provider", Resource = "@xml/appwidget_provider")]
    public class AppWidget : AppWidgetProvider
    {
        public static System.Timers.Timer widgetTimer = null;

        public override void OnUpdate(Context context, AppWidgetManager appWidgetManager, int[] appWidgetIds)
        {
            RemoteViews views = BuildRemoteViews(context, appWidgetIds);
            (AppWidgetManager.GetInstance(Android.App.Application.Context)).UpdateAppWidget(
                new ComponentName(Android.App.Application.Context, Java.Lang.Class.FromType(typeof(AppWidget))), views);
            // appWidgetManager.UpdateAppWidget(appWidgetIds[0], views);

            // set timer for updating the widget views each 5 sec
            if (widgetTimer == null)
            {
                widgetTimer = new System.Timers.Timer();
                widgetTimer.Interval = 5000;
                widgetTimer.Elapsed += OnTimedEvent;
            }
            widgetTimer.Enabled = true;

            // set alarm to wake up the app when killed, each 60 sec
            // needs a fresh BroadcastReceiver because AppWidgetProvider.OnReceive is
            // not virtual and an overridden method in this class would not be called
            AlarmManager am = (AlarmManager)context.GetSystemService(Context.AlarmService);
            Intent ai = new Intent(context, typeof(AlarmReceiver));
            ai.SetAction(WidgetConsts.ActionWakeup);
            PendingIntent pi = PendingIntent.GetBroadcast(context, 0, ai, PendingIntentFlags.UpdateCurrent);
            am.SetRepeating(AlarmType.ElapsedRealtime, 100, 1000 * 60, pi);
        }

        public override void OnEnabled(Context context)
        {
            AlarmManager am = (AlarmManager)context.GetSystemService(Context.AlarmService);
            Intent ai = new Intent(context, typeof(AlarmReceiver));
            ai.SetAction(WidgetConsts.ActionWakeup);
            PendingIntent pi = PendingIntent.GetBroadcast(context, 0, ai, PendingIntentFlags.UpdateCurrent);
            am.SetRepeating(AlarmType.ElapsedRealtime, 100, 1000 * 60, pi);
            base.OnEnabled(context);
        }

        public override void OnDisabled(Context context)
        {
            Log.Debug(WidgetConsts.DebugTag, "Disabling the widget");
            if (widgetTimer != null)
            {
                Log.Debug(WidgetConsts.DebugTag, "Stopping timer");
                widgetTimer.Enabled = false;
            }
            else
                Log.Debug(WidgetConsts.DebugTag, "Timer is null");
            base.OnDisabled(context);
        }

        private void OnTimedEvent(object sender, ElapsedEventArgs e)
        {
            Log.Debug(WidgetConsts.DebugTag, "Updating status...");
            new Handler(Looper.MainLooper).Post(() =>
            {
                // Run my code to periodically update the widget
                RemoteViews views = new RemoteViews(Android.App.Application.Context.PackageName, Resource.Layout.SnapVertWidget);
                AppWidgetManager manager = AppWidgetManager.GetInstance(Android.App.Application.Context);
                ComponentName thisWidget = new ComponentName(Android.App.Application.Context, Java.Lang.Class.FromType(typeof(AppWidget)));
                int[] appWidgetIds = manager.GetAppWidgetIds(thisWidget);
                (AppWidgetManager.GetInstance(Android.App.Application.Context)).UpdateAppWidget(
                    new ComponentName(Android.App.Application.Context, Java.Lang.Class.FromType(typeof(AppWidget))), views);
                // manager.UpdateAppWidget(appWidgetIds[0], views);
            });
        }

        static public void UpdateAppWidget(Context context)
        {
            Intent intent = new Intent(context, typeof(AppWidget));
            intent.SetAction(WidgetConsts.ActionWidgetUpdate);
            int[] ids = AppWidgetManager.GetInstance(context).GetAppWidgetIds(
                new ComponentName(context, Java.Lang.Class.FromType(typeof(AppWidget))));
            intent.PutExtra(AppWidgetManager.ExtraAppwidgetIds, ids);
            context.SendBroadcast(intent);
        }

        public RemoteViews BuildRemoteViews(Context context, int[] appWidgetIds)
        {
            xxx.Droid.Services.MyWidget myWidget = new xxx.Droid.Services.MyWidget();
            var entry = myWidget.GetData();

            // Build an update that holds the updated widget contents
            var updateViews = new RemoteViews(context.PackageName, Resource.Layout.SnapVertWidget);
            updateViews.SetTextViewText(Resource.Id.txtvwUpdate, Convert.ToString(DateTime.Now));
            updateViews.SetTextViewText(Resource.Id.txtvwCityName, entry.Result.CityName);
            updateViews.SetTextViewText(Resource.Id.txtvwTemp, entry.Result.TempValue);
            //SetTextViewText(widgetView);
            RegisterClicks(context, appWidgetIds, updateViews);
            return updateViews;
        }

        private void RegisterClicks(Context context, int[] appWidgetIds, RemoteViews widgetView)
        {
            Intent intentUpdate = new Intent(context, typeof(AppWidget));
            intentUpdate.SetAction(AppWidgetManager.ActionAppwidgetUpdate);
            // Update the current widget instance only, by creating an array that contains the widget's unique ID
            int[] idArray = new int[] { appWidgetIds[0] };
            intentUpdate.PutExtra(AppWidgetManager.ExtraAppwidgetIds, idArray);
            PendingIntent pendingUpdate = PendingIntent.GetBroadcast(
                context, appWidgetIds[0], intentUpdate, PendingIntentFlags.UpdateCurrent);
            widgetView.SetOnClickPendingIntent(Resource.Id.btnRefresh, pendingUpdate);

            Intent launchAppIntent = new Intent(context, typeof(MainActivity));
            PendingIntent launchAppPendingIntent = PendingIntent.GetActivity(
                context, 0, launchAppIntent, PendingIntentFlags.UpdateCurrent);
            widgetView.SetOnClickPendingIntent(Resource.Id.pnlWeather, launchAppPendingIntent);
        }
    }
% Preamble macros: the macro dump of the original source contained duplicated and
% self-referential expansion artifacts; only the definitions used in the text are kept.
\def\st{Stueckelberg~extension~}
\newcommand{\pxn}[1]{{\color{red}{#1}}}
\newcommand{\zl}[1]{{\color{blue}{#1}}}
\newcommand{\nc}[1]{{\color{green}{#1}}}
\begin{titlepage} \begin{center} {\large {\bf 3.5 keV Galactic Emission Line as a Signal from the Hidden Sector }}\\ \vskip 0.5 true cm Ning Chen$^{a}$\footnote{Email: ustc0204.chenning@gmail.com}, Zuowei Liu$^{a,b}$\footnote{Email: zuoweiliu@tsinghua.edu.cn} and Pran Nath$^{c}$\footnote{Email: nath@neu.edu} \vskip 0.5 true cm \end{center} \noindent {$^{a}$ Institute of Modern Physics and Center for High Energy Physics, Tsinghua University, Beijing, 100084, China}\\ {$^{b}$ Department of Physics, McGill University, 3600 Rue University, Montreal, Quebec, Canada H3A 2T8 }\\ {$^{c}$ Department of Physics, Northeastern University, Boston, Massachusetts 02115-5000, USA} \\ \vskip 0.5 true cm \centerline{\bf Abstract} An emission line with energy $E\sim 3.5$ keV has been observed in galaxy clusters by two experiments. The emission line is consistent with the decay of a dark matter particle with a mass of $\sim 7$ keV. In this work we discuss the possibility that the dark particle responsible for the emission is a real scalar ($\rho$) which arises naturally in a $U(1)_X$ \st of the MSSM. In the MSSM \st $\rho$ couples only to other scalars carrying a $U(1)_X$ quantum number. Under the assumption that there exists a vectorlike leptonic generation carrying both $SU(2)_L\times U(1)_Y$ and $U(1)_X$ quantum numbers, we compute the decay of the $\rho$ into two photons via a triangle loop involving scalars. The relic density of the $\rho$ arises via the decay $H^0\to h^0+ \rho$ at the loop level involving scalars, and via the annihilation processes of the vectorlike scalars into $\rho + h^0$. It is shown that the galactic data can be explained within a multicomponent dark matter model where the 7 keV dark matter is a subdominant component constituting only $(1-10)$\% of the matter relic density, with the rest being supersymmetric dark matter such as the neutralino.
Thus the direct detection experiments remain viable searches for WIMPs. The fact that the dark scalar $\rho$, with no interactions with the standard model particles, arises from a \st of a hidden $U(1)_X$ implies that the 3.5 keV galactic line emission is a signal from the hidden sector. \noindent {\scriptsize Keywords:{ 3.5 keV gamma line, Stueckelberg, MSSM extension, dark matter }\\ PACS numbers:} \medskip \end{titlepage} \section{Introduction \label{sec1}} Two experiments~\cite{Bulbul:2014sua,Boyarsky:2014jta} have observed a 3.5 keV gamma line in the spectra of galaxy clusters. A possible explanation is that the line arises from the decay of a dark matter particle. The observations give \begin{equation} \Gamma^{-1}(\rho \to \gamma\gamma) =(4\times 10^{27} - 4\times 10^{28})~{\rm s} \ . \label{1} \end{equation} Several works already exist that try to explain the 3.5 keV line, e.g., via a decaying dark particle \cite{Babu:2014pxa} or via emission from a dark atom~\cite{Frandsen:2014lfa,Cline:2014eaa}. Here we consider the possibility that the gamma emission arises from the decay of a real field $\rho$ that arises naturally in a $U(1)_X$ extension of the standard model gauge group \cite{Kors:2004dx,Kors:2004ri,Kors:2004iz,Kors:2005uz} (see also \cite{Cheung:2007ut,Feldman:2007nf,Feldman:2007wj,Feldman:2006wb, Liu:2011di,Feldman:2011ms,Feng:2012jn,Perez:2014gta,Santos:2014xka}). In the $U(1)_X$ \st of MSSM we assume that all the MSSM particles are neutral under $U(1)_X$ but that there exists extra matter in the form of a vectorlike multiplet which is charged under $U(1)_X$. As mentioned, in such an extension there naturally exists a real scalar particle $\rho$ which couples only to complex scalar particles which are charged under $U(1)_X$. In this work we will assume that the extra matter consists of vectorlike multiplets which transform under $SU(2)_L\times U(1)_Y$ as doublets and singlets.
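For orientation, the lifetime window of \cref{1} can be converted into a two-photon decay width via $\Gamma = \hbar/\tau$; a minimal numerical conversion (the value of $\hbar$ in GeV\,s is a standard constant, not taken from the paper):

```python
# Convert the lifetime window of Eq. (1) into a decay width, Gamma = hbar / tau.
HBAR_GEV_S = 6.582e-25          # hbar in GeV * s (standard value)

tau_min, tau_max = 4e27, 4e28   # seconds, the window quoted in Eq. (1)
gamma_max = HBAR_GEV_S / tau_min
gamma_min = HBAR_GEV_S / tau_max
print(f"Gamma(rho -> 2 gamma) ~ {gamma_min:.1e} -- {gamma_max:.1e} GeV")
```

The required width is thus of order $10^{-53}$--$10^{-52}$ GeV, which sets the scale that the loop-induced decay computed below must reproduce.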
We assume that $\rho$ has a mass of $\sim 7$ keV and that it decays to two photons via the exchange of these charged scalar fields to produce the galactic 3.5 keV gamma line. Regarding the relic density, we assume that the primordial relic density is inflated away and that the current relic density arises from the decay of Higgs bosons and from scalar annihilations. There are many processes that can contribute to the relic density. These are: $h^0\to \rho + \rho$, $H^0\to \rho + \rho$, $H^0\to \rho + h^0$, {$\tilde E_1 \bar{ \tilde E}_1\to \rho +\rho, \rho + h^0$}, etc., where $\tilde E_1$ is the lighter charged scalar of the vectorlike multiplet. It turns out that the final states with $\rho \rho$ are suppressed by $m_\rho^4$ and are thus negligible. Further, the contribution of the Higgs boson decays to the relic density dominates that of the annihilation processes. Within the above model it is possible to fit the galactic data with the 7 keV scalar dark matter being a subdominant component making up as little as 1\%-10\% of the total dark matter density, with the rest being supersymmetric dark matter such as the neutralino or the gravitino.\\ The outline of the rest of the paper is as follows: In \cref{sec2} we give a brief description of the \st of MSSM with the inclusion of vectorlike multiplets which are charged both under $SU(2)_L\times U(1)_Y$ and under $U(1)_X$. In \cref{sec3} we give an analysis of the lifetime of the $\rho$ decaying into two photons via triangle loops involving scalars charged under $U(1)_X$ and also charged under $ U(1)_{\rm em}$. In \cref{sec4} we give an analysis of the relic density of the $\rho$ which is produced via the decay of the Higgs bosons and via annihilation processes. Here we show that the relic density of the $\rho$ arising from Higgs decay is the dominant component and the annihilation processes are subdominant.
In \cref{sec5} we give a numerical analysis where we fit the gamma data from the galactic clusters with $\rho$ as a subdominant component of dark matter. Conclusions are given in {\cref{sec6}} while further details of the analysis are given in the appendices. \section{The Model\label{sec2}} As mentioned in Section\ \ref{sec1} we consider an extra vectorlike leptonic generation $V$ consisting of {$L, E^c, N^c, L^{\prime c}, E', N'$} with $SU(3)_C\times SU(2)_L\times U(1)_Y\times U(1)_X$ quantum numbers as follows \begin{align}\label{2} &L= \left(\begin{matrix} N_{L}\cr ~{E}_{L} \end{matrix} \right) (1,2,- \frac{1}{2},1), && E^c_{L}(1,1,1, -1), & & N^c_{L}(1,1,0,-1)\ ,\\ &L^{{\prime c}} =\left(\begin{matrix} E_{ L}^{{\prime c}} \cr N_L^{{\prime c}}\end{matrix}\right)(1,2,\frac{1}{2}, -1), && E_L' (1,1,-1,1), && N_L'(1,1,0,1)\ , \label{3} \end{align} where the first two entries give the $SU(3)_C$ and $SU(2)_L$ representations while the last two give the $U(1)_Y$ and $U(1)_X$ charges. The scalar fields can be written as above with a tilde on them except that $\tilde E^c_L= \tilde E_R^*, \tilde E^{{\prime c}}= \tilde E_R^{{\prime *}}$. We also note here that the Higgs doublets in the MSSM have the quantum numbers \begin{gather} H_d=(\mathbf{1},\mathbf{2},-\tfrac{1}{2},0)\,,\qquad H_u=(\mathbf{1},\mathbf{2},+\tfrac{1}{2},0)\, \ . \end{gather} {Here we also note that all of the MSSM particles carry no $U(1)_X$ charge.} The superpotential for the vectorlike leptonic supermultiplets is given by { \begin{multline} W = yLH_{d}E_L^{c}+y'L^{{\prime c}}H_{u}E_L'+M_{L}LL^{{\prime c}}+M_{E}E_L'E_L^{c}+ M_N N_L'N_L^c \, , \label{yuk} \end{multline} } where $M_{L}$, $M_{E}$ and $M_N$ are the vectorlike masses.
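As a consistency check not spelled out in the text, the quantum-number assignment of \cref{2} and \cref{3} is vectorlike and hence automatically anomaly free; this can be verified directly (the charges below are copied from the equations above):

```python
# Each entry: (SU(2)_L dimension, hypercharge Y, U(1)_X charge X),
# copied from the quantum numbers of the vectorlike generation.
fields = {
    "L":   (2, -0.5,  1), "E^c": (1,  1.0, -1), "N^c": (1, 0.0, -1),
    "L'c": (2,  0.5, -1), "E'":  (1, -1.0,  1), "N'":  (1, 0.0,  1),
}

def anomaly(weight):
    # Sum over all Weyl components; each SU(2)_L doublet counts twice.
    return sum(dim * weight(Y, X) for dim, Y, X in fields.values())

# U(1)_Y^3, U(1)_Y-grav, U(1)_X^3, U(1)_X-grav and the mixed Y^2 X, X^2 Y
# anomalies all vanish, as they must for a vectorlike generation.
for w in (lambda Y, X: Y**3, lambda Y, X: Y, lambda Y, X: X**3,
          lambda Y, X: X, lambda Y, X: Y * Y * X, lambda Y, X: X * X * Y):
    assert abs(anomaly(w)) < 1e-12
# SU(2)_L^2-U(1) anomalies: only the doublets contribute.
assert abs(sum(Y for dim, Y, X in fields.values() if dim == 2)) < 1e-12
assert abs(sum(X for dim, Y, X in fields.values() if dim == 2)) < 1e-12
print("all anomalies cancel")
```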
After spontaneous electroweak symmetry breaking the two Higgs doublets of $SU(2)_L$ develop VEVs so that \begin{eqnarray} H_d = \left(\begin{matrix} H_d^0\cr H_d^-\end{matrix}\right) = \left(\begin{matrix} \frac{1}{\sqrt 2}(v_d+\phi_1)\cr H_d^-\end{matrix}\right)\,,\qquad H_u= \left(\begin{matrix}H_u^+\cr H_u^0\end{matrix}\right) =\left(\begin{matrix}H_u^+ \cr \frac{1}{\sqrt 2}(v_u+\phi_2) \end{matrix}\right)\,, \end{eqnarray} where $v_d$ and $v_u$ are the VEVs of $H^0_d$ and $H^0_u$. Now $\rho$ couples only to the scalars so we focus on the scalar fields which are charged under $U(1)_X$. In this case we have a $4\times 4$ mass {squared} matrix and in the basis {$(\tilde E_L, \tilde E_R, \tilde E_L', \tilde E_R')$} it is given by \footnote{{ In the analysis below the $2\times 2$ off-diagonal matrices in \cref{slep1} will be neglected. They are displayed in \cref{slep1} for completeness. }} \begin{equation} \frac{1}{\sqrt{2}} \left(\begin{array}{cc|cc} \multicolumn{2}{c|}{\multirow{2}*{$\sqrt{2}(M_{\tilde{E}}^2)_{2\times 2}$}} & y' v_u M_L + y v_d M_E & 0\\ && 0 & y' v_u M_E + y v_d M_L \\ \hline y' v_u M_L + y v_d M_E & 0 & \multicolumn{2}{c}{\multirow{2}*{$\sqrt{2} (M_{\tilde{E}'}^2)_{2\times 2}$}}\\ 0 & y' v_u M_E + y v_d M_L &&\\ \end{array}\right)_{4\times 4}\,. \label{slep1} \end{equation} Here $(M^2_{\tilde E})_{2\times 2}$ is given by \begin{eqnarray} (M^2_{\tilde E})_{2\times 2}=\left(\begin{array}{cc} M_{1}^{2}+\tfrac{1}{2}y^2 v^2_d +M_{L}^{2} +\frac{(g_1^2-g_2^2)}{8} (v_d^2 - v_u^2) & \frac{1}{\sqrt 2} y (A_E v_d - \mu v_u)\\ \frac{1}{\sqrt 2} y (A_{E} v_d - \mu v_u) & M_1^2 +\tfrac{1}{2}y^2 v^2_d+ M_E^2 - \frac{g_1^2}{4} (v_d^2 - v_u^2) \end{array}\right)\,, \label{slep2} \end{eqnarray} where $M_1$ is the soft mass while $M_L$ and $M_E$ are the vectorlike masses.
We label the eigenvalues as $m^2_{\tilde E_1}$ and $m^2_{\tilde E_2}$ and the corresponding eigenstates by $\tilde E_1$ and $\tilde E_2$ which are related to $\tilde E_L$ and $\tilde E_R$ by \begin{gather} \left(\begin{matrix} \tilde E_L \\ \tilde E_R\end{matrix}\right) = \left(\begin{matrix} \cos\xi & \sin\xi \\ -\sin\xi & \cos\xi \end{matrix} \right) \left(\begin{matrix} \tilde E_1 \\ \tilde E_2\end{matrix}\right)\ . \end{gather} Similarly $(M^2_{\tilde E'})_{2\times 2}$ is given by \begin{eqnarray} (M^2_{\tilde E'})_{2\times 2} =\left(\begin{array}{cc} M_{2}^{2}+\tfrac{1}{2}y'^2 v^2_u+M_{L}^{2} -\frac{(g_1^2-g_2^2)}{8} (v_d^2 - v_u^2) & \frac{1}{\sqrt 2} y' (A_{E'} v_u - \mu v_d)\\ \frac{1}{\sqrt 2} y' (A_{E'} v_u - \mu v_d) & M_2^2 +\tfrac{1}{2}y'^2 v^2_u+ M_E^2 +\frac{g_1^2}{4} (v_d^2 - v_u^2) \end{array}\right)\, \label{slep3} \end{eqnarray} where $M_2$ is the soft mass. We label the eigenvalues of this mass squared matrix by $m^2_{\tilde E'_1}, m^2_{\tilde E'_2}$ with the corresponding eigenstates as $\tilde E_1'$ and $\tilde E_2'$. They are related to $\tilde E_L'$ and $\tilde E_R'$ by \begin{gather} \left(\begin{matrix} \tilde E'_L \\ \tilde E'_R\end{matrix}\right) = \left(\begin{matrix} \cos\xi' & \sin\xi' \\ -\sin\xi' & \cos\xi' \end{matrix} \right) \left(\begin{matrix} \tilde E_1' \\ \tilde E_2'\end{matrix}\right)\ . \end{gather} Since the diagonalization of the full $4\times 4$ scalar mass squared matrix is not analytically tractable, we consider the case where the products of the fermion masses and the vectorlike masses are much smaller than the soft mass squares. In this case the mass squared matrix of \cref{slep1} takes on a block diagonal form where the $2\times 2$ matrix in the upper left hand corner is the mass squared matrix for the normal leptons in the vectorlike multiplet and the $2\times 2$ matrix in the lower right hand corner is the mass squared matrix for the mirror leptons in the vectorlike multiplet.
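The $2\times 2$ diagonalizations above are standard; a short numerical sketch (the mass-squared entries are illustrative, and the sign convention for $\xi$ is ours, not fixed by the text):

```python
import math

# Sample entries (GeV^2) for a 2x2 mass-squared matrix of the form of Eq. (slep2);
# the numbers are illustrative only.
M11, M22, M12 = 1.0e6, 2.5e6, 3.0e5

xi = 0.5 * math.atan2(2 * M12, M22 - M11)
c, s = math.cos(xi), math.sin(xi)

# With (E_L, E_R)^T = R (E_1, E_2)^T and R = ((c, s), (-s, c)),
# as in the text, R^T M^2 R must be diagonal.
m1sq = c * c * M11 - 2 * s * c * M12 + s * s * M22
m2sq = s * s * M11 + 2 * s * c * M12 + c * c * M22
offdiag = s * c * (M11 - M22) + (c * c - s * s) * M12

assert abs(offdiag) < 1e-6   # rotation by xi removes the off-diagonal entry
print(f"m1^2 = {m1sq:.4e}, m2^2 = {m2sq:.4e}, xi = {xi:.4f}")
```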
\\ { In the analysis we have a $U(1)_X$ extension of the MSSM and we assume that the $U(1)_X$ gauge boson acquires mass through a Stueckelberg mechanism. This mechanism works only for $U(1)$ extensions. To achieve the extension we need to include a vector superfield $C$ and scalar superfields $S$ and $\bar S$ and assume a Lagrangian of the type \begin{eqnarray} {\cal L}_{\rm St} = \int d\theta^2 d\bar{\theta}^2\, \left[ M_C C + S +\bar S \right]^2\ . \label{mass} \end{eqnarray} The Lagrangian is invariant under the $U(1)_X$ gauge transformations \begin{eqnarray} \label{stgauge} \delta_X C = \Lambda_X + \bar\Lambda_X\ , \quad \delta_X S = - M_C \Lambda_X\ . \end{eqnarray} The vector superfield $C$ in the Wess-Zumino gauge is given by \begin{eqnarray} C= -\theta\sigma^{\mu}\bar \theta C_{\mu} +i\theta\theta \bar\theta \bar \lambda_C -i\bar\theta\bar\theta \theta \lambda_C +\frac{1}{2} \theta \theta\bar\theta\bar\theta D_C \ , \end{eqnarray} while $S$ is given by \begin{eqnarray} \label{supS} S &=&\frac{1}{2}(\rho +ia ) + \theta\chi + i \theta\sigma^{\mu}\bar\theta \frac{1}{2}(\partial_{\mu} \rho +i \partial_{\mu} a) \\ && +~ \theta\theta F +\frac{i}{2} \theta \theta \bar\theta \bar\sigma^{\mu} \partial_{\mu} \chi +\frac{1}{8}\theta\theta\bar\theta\bar\theta (\Box \rho+i\Box a) \ . \nonumber \end{eqnarray} The complex scalar component of $S$ contains the axionic pseudo-scalar $a$ and a real scalar field $\rho$. ${\cal L}_{\rm St}$ in component notation takes the form \begin{eqnarray} \label{stueck} {\cal L}_{\rm St} &=& - \frac{1}{2}(M_C C_{\mu} +\partial_{\mu} a)^2 - \frac{1}{2} (\partial_\mu \rho)^2 - i \chi \sigma^{\mu} \partial_{\mu}\bar {\chi} + 2|F|^2 \nonumber\\ && \hspace{0cm} +M_C\rho D_C +M_C\bar {\chi} \bar \lambda_C + M_C\chi \lambda_C \ .
\label{Lst} \end{eqnarray} For the gauge field we add the kinetic terms \begin{eqnarray} {\cal L}_{\rm gkin} = -\frac{1}{4} C_{\mu\nu} C^{\mu\nu} - i \lambda_C \sigma^{\mu}\partial_{\mu} \bar\lambda_C +\frac{1}{2} D_C^2 \label{Lkin} \end{eqnarray} where $C_{\mu\nu}=\partial_\mu C_\nu - \partial_\nu C_\mu$. For the matter fields (i.e., hidden sector matter) chiral superfields with components $(f_i,z_i,F_i)$ are introduced, which are defined similarly to $S$. Their Lagrangian is standard and we do not display it here. It is the real scalar field $\rho$ which is the focus of our study here. From Eq.~(\ref{Lst}) and Eq.~(\ref{Lkin}) it is seen that the mass of $C_\mu$, which in the unitary gauge is identified as the $Z'$, is the same as the mass of $\rho$, which is obtained by eliminating the auxiliary field $D_C$. \\ } \begin{figure}[htbp] \begin{center} \includegraphics[width=0.5\columnwidth]{DM_decay} \caption{Triangle loop diagram for the decay process $\rho \to \gamma\gamma$ via exchange of vectorlike scalars (arising from \cref{2} and \cref{3}) in the loop which are charged under $U(1)_X$ and under $U(1)_{\rm em}$. } \label{fig:feynlife} \end{center} \end{figure} \section{$\rho$ lifetime\label{sec3}} We use the interactions of \cref{sec7} to compute the $\rho$ lifetime. The decay width of $\rho$ through scalar loops is given by (see, e.g., \cite{Feng:2013mea}) \begin{equation} \Gamma(\rho \to \gamma\gamma) = \frac{\alpha^2 m_{\rho}^3}{1024 \pi^3} \left[ \sum_i\frac{g_{\rho S_iS_i}}{m_{S_i}^2} N_{C,S} q_{S_i}^2 A_0(\tau_{s_i})\right]^2 \ . \label{width} \end{equation} Here the explicit form of $g_{\rho S_iS_i}$, where the $S_i$ are in the mass diagonal basis, is given in {\cref{sec7}.} In \cref{width} $N_{C,S}$ are the (color, spin) multiplicities and $q_{S_i}$ is the electric charge of the field $S_i$ under $U(1)_{\rm em}$.
We note that $\rho$ does not couple with the particles in the standard model, i.e., with quarks and leptons, or with the Higgs or with $W^{\pm}$, so these particles cannot appear in the loop. Only the scalars charged under $U(1)_X$ and under $U(1)_{\rm em}$ can appear in the loop. In \cref{width} we take $\alpha= 1/137$ and $m_{\rho}= 7 ~\text{keV}= 7\times 10^{-6}~{\rm GeV}$, and $\tau_{s_i}$ is defined by $\tau_{s_i}= \frac{4 m_{S_i}^2}{m_{\rho}^2}$. $A_0(\tau)$ that appears in Eq.(\ref{width}) is a loop function which is given by \begin{equation} A_0(\tau)= -\tau [ 1- \tau f(\tau)]\ , \end{equation} where $f(\tau)$ is defined by \begin{equation} f(\tau)= \left\{ \begin{aligned} & \Big( \arcsin \frac{1}{\sqrt \tau} \Big)^{2}\,,\qquad\tau\geq1\, \ , \\ & -\frac{1}{4}\Big[\ln\frac{\eta_{+}}{\eta_{-}}-i\pi\Big]^{2}\,,\qquad\tau<1\,, \end{aligned} \right. \end{equation} where $\eta_{\pm}\equiv (1 \pm \sqrt{1-\tau})$ and $\tau = 4 m^2 \big/ m_\rho^2$ for a particle of mass $m$ running in the loop. For the case when $\tau \gg 1$ one has \begin{eqnarray} f(\tau)\to \frac{1}{\tau}(1+ \frac{1}{3\tau} + \cdots )\,, \end{eqnarray} and in this limit $A_0\to 1/3$. For our case $\tau\gg1$ so we can replace $A_0$ by $1/3$. Next using the result of {\cref{sec7}} we have \begin{equation} \sum_i\frac{g_{\rho S_iS_i}}{m_{S_i}^2} N_{C,S} q_{S_i}^2 A_0(\tau_{s_i}) =g_Xq_E^2Q_E \cos 2\xi \frac{m_{\rho}}{3m_{S}^2} +g_Xq_{E'}^2 Q_{E'} \cos 2\xi' \frac{m_{\rho}}{3m_{S'}^2} \ . \end{equation} Here we have set $A_0=1/3$, and $m_S^2$ and $m_{S'}^2$ are effective scalar mass squares defined by \begin{gather} m_S^2= \frac{m_{\tilde E_1}^2 m_{\tilde E_2}^2}{m_{\tilde E_2}^2-m_{\tilde E_1}^2}, ~~m_{S'}^2= \frac{m_{\tilde E_1'}^2 m_{\tilde E_2'}^2}{m_{\tilde E_2'}^2-m_{\tilde E_1'}^2}\ , \end{gather} where $m_{\tilde E_1}$ and $m_{\tilde E_2}$ are the mass eigenvalues corresponding to the eigenstates $\tilde E_1$ and $\tilde E_2$, etc.
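The pieces assembled so far already fix the width numerically. The sketch below first checks the claim that $A_0\to 1/3$ for $\tau\gg 1$ and then evaluates \cref{width} with the sum above; the effective masses, charges and maximal mixing factors are illustrative assumptions, not output of the scan discussed later:

```python
import math

def f(tau):
    # Loop function of the text for tau >= 1 (here tau = 4 m_S^2 / m_rho^2 >> 1).
    return math.asin(1.0 / math.sqrt(tau)) ** 2

def A0(tau):
    return -tau * (1.0 - tau * f(tau))

# A0 -> 1/3 for tau >> 1, as claimed in the text.
for tau in (1e2, 1e4, 1e6):
    assert abs(A0(tau) - 1.0 / 3.0) < 1.0 / tau

# Illustrative width evaluation (parameter values are assumptions):
ALPHA, HBAR = 1.0 / 137.0, 6.582e-25   # fine-structure constant; hbar in GeV s
m_rho, g_X = 7e-6, 1.0                 # 7 keV in GeV; U(1)_X coupling
m_S = m_Sp = 1.0e4                     # GeV, effective scalar masses
cos2xi = cos2xip = 1.0                 # maximal mixing factors
Q_E = Q_Ep = 1.0                       # U(1)_X charges; q_E = q_E' = 1

bracket = g_X * Q_E * cos2xi * m_rho / (3 * m_S**2) \
        + g_X * Q_Ep * cos2xip * m_rho / (3 * m_Sp**2)
gamma = ALPHA**2 * m_rho**3 / (1024 * math.pi**3) * bracket**2
print(f"Gamma ~ {gamma:.2e} GeV, tau ~ {HBAR / gamma:.2e} s")
```

For these inputs the lifetime comes out near $5\times 10^{26}$ s, so for a subdominant $\rho$ with $R\sim 0.1$ the effective $\tau_\rho/R$ lands in the range required by \cref{1}.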
The width formula now simplifies to \begin{equation} \Gamma(\rho \to \gamma\gamma) = \frac{\alpha^2 m_\rho^5 g_X^2}{9\times 1024 \pi^3} \left(\frac{ Q_E\cos 2\xi}{m_S^2} + \frac{Q_{E'}\cos 2\xi'}{m_{S'}^2}\right)^2\ , \end{equation} where we have used $q_E= q_{E'}= 1$. A numerical analysis of $\Gamma(\rho \to \gamma\gamma)$ along with the relic density constraint will be discussed in {\cref{sec5}.} \section{Relic density analysis\label{sec4}} \begin{figure}[htbp] \begin{center} \includegraphics[width=0.5\columnwidth]{Hhrho} \caption{ Triangle loop diagram for the decay process $H^0 \to \rho + h^0$ via exchange of vectorlike scalars (arising from \cref{2} and \cref{3}) in the loop which are charged under $U(1)_X$. } \label{fig:feynRD1} \end{center} \end{figure} \subsection{The decay $H^0 \to h^0 + \rho$ } \label{sec:rddecay} The $H^0$ decays into $h^0+ \rho$ via triangle loops and there are $4^3=64$ triangle loops to consider. Here we neglect the mixing due to the off-diagonal terms in Eq.\ (\ref{slep1}), and only consider 16 different triangle diagrams.
{The reduction from 64 to 16 is due to the neglect of the off-diagonal terms in Eq.\ (\ref{slep1}).} Thus in a compact notation we can write the interaction of $\rho$ with the scalars charged under $U(1)_X$ as \begin{equation} {\cal L}_{\text{st}}= m_\rho g_X Q_E \rho g_{\rho ij} \tilde E_i^* \tilde E_j + m_\rho g_X Q_{E'} \rho g_{\rho ij}' \tilde E_i^{{\prime *}} \tilde E_j' \ , ~~~~~i, j=1,2 \end{equation} where $g_{\rho ij}$ and $g_{\rho ij}'$ are the ``reduced'' couplings and are given by (along with other couplings to the Higgses) { \begin{eqnarray} g_{\rho 1 1} = - g_{\rho 2 2} = g_{h 1 2} = g_{h 2 1} = g_{H 1 2} = g_{H 2 1} = \cos 2\xi \ , \\ g_{\rho 1 2} = g_{\rho 2 1} = -g_{h 1 1} = g_{h 2 2} = -g_{H 1 1} = g_{H 2 2} = \sin2\xi \ , \\ g'_{\rho 1 1} = - g'_{\rho 2 2} = g'_{h 1 2} = g'_{h 2 1} = g'_{H 1 2} = g'_{H 2 1} = \cos 2\xi' \ , \\ g'_{\rho 1 2} = g'_{\rho 2 1} = -g'_{h 1 1} = g'_{h 2 2} = -g'_{H 1 1} = g'_{H 2 2} = \sin2\xi' \ . \end{eqnarray} } Let us consider the $g_{ij}$ couplings. Since each scalar propagator can be either $\tilde E_1$ or $\tilde E_2$, there are eight different triangle diagrams that contribute. For a specific diagram labeled by the three scalars in the loop as $(a,b,c)$, as shown in Fig.\ (\ref{fig:feynRD1}), the matrix element is given by \begin{equation} {\cal M}_{abc} = G_0 ~ g_{\rho ab} g_{h bc} g_{Hca} \, I_{abc} \end{equation} where \begin{equation} G_0 \equiv m_\rho g_X Q_E \left( {g_2 m_E \over 2 {M_W} \cos\beta} \right)^2 (-A_E \sin\alpha + \mu \cos\alpha) (A_E \cos\alpha + \mu \sin\alpha)\ , \end{equation} and \begin{equation} I_{abc} \equiv \int {d^4 k \over (2\pi)^4} \left[ {1 \over (k+p_2)^2-m_a^2} {1 \over k^2-m_b^2} {1 \over (k-p_3)^2-m_c^2} \right]\ .
\end{equation} Using Feynman parameterization, we obtain the loop integral as \begin{equation} I_{abc} = {-i \over (4\pi)^2} {1\over B_2} \int_0^1 dx \left[ f\left({B_1 \over 2 B_2} + 1-x, A \right) -f\left({B_1 \over 2 B_2}, A \right) \right] \ , \end{equation} where \begin{eqnarray} B_0 &\equiv& x^2 m_\rho^2 + x(m_a^2 -m_b^2-m_\rho^2) + m_b^2 \ , \\ B_1 &\equiv& m_c^2 -m_b^2 - m^2_{h^0} -x(m^2_{H^0} - m_\rho^2 - m^2_{h^0}) \ , \\ B_2 &\equiv& m^2_{h^0} \ , \\ {A} &\equiv& {4 B_0 B_2 -B_1^2 \over 4 B_2^2} \ , \end{eqnarray} and the function $f(x, A)$ is defined as \begin{equation} f(x,A) \equiv {1 \over \sqrt{|A|}} \left\{ \begin{array}{cc} \arctan(x/\sqrt{A}); & A >0 \\ \ln\sqrt{x-\sqrt{-A} \over x+\sqrt{-A}}; & A <0 \ . \end{array} \right. \end{equation} The total matrix element is then given by \begin{equation} {\cal M} = G_0 \sum_{a,b,c} ( g_{\rho a b} ~ g_{hbc}~ g_{H ca} ~ I_{abc})\ . \end{equation} Summing over all possibilities, we have \begin{eqnarray} {\cal M} /G_0 =c^3(I_{112}-I_{221}) +cs^2(I_{111}-I_{121}+I_{122} -I_{211}+I_{212}-I_{222})\ , \end{eqnarray} where $c\equiv \cos(2\xi)$ and $s\equiv \sin(2\xi)$. The decay width of the process $H^0 \to h^0 + \rho$ can now be computed as \begin{equation} \Gamma(H^0 \to h^0 + \rho) = {1 \over 16 \pi m_{H^0}} \left[1- \left({m_{h^0} \over m_{H^0}}\right)^2\right] \overline{|{\cal M} + {\cal M}'|^2} \ , \end{equation} where ${\cal M}'$ is the amplitude with $g_{ij}$ replaced by $g'_{ij}$, and $G_0$ replaced with $G_0'$ which is given by \begin{equation} G_0' \equiv m_\rho g_X Q_{E'} \left( {g_2 m_{E'} \over 2 {M_W} \sin\beta} \right)^2 (A_{E'} \sin\alpha + \mu \cos\alpha) (A_{E'} \cos\alpha - \mu \sin\alpha)\ . \end{equation} The relic density analysis is similar to that of \cite{Babu:2014pxa}.
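The closed form for ${\cal M}/G_0$ can be cross-checked against the brute-force sum over the eight $(a,b,c)$ diagrams; a small numerical check with random stand-ins for the loop integrals $I_{abc}$ (the coupling assignments are copied from the relations above):

```python
import itertools, math, random

random.seed(1)
xi = 0.7                     # arbitrary mixing angle for the check
c2, s2 = math.cos(2 * xi), math.sin(2 * xi)

# "Reduced" couplings from the text (off-diagonal mixing neglected).
g_rho = {(1, 1):  c2, (2, 2): -c2, (1, 2):  s2, (2, 1): s2}
g_h   = {(1, 2):  c2, (2, 1):  c2, (1, 1): -s2, (2, 2): s2}
g_H   = {(1, 2):  c2, (2, 1):  c2, (1, 1): -s2, (2, 2): s2}

# Random stand-ins for the eight loop integrals I_abc.
I = {abc: random.random() for abc in itertools.product((1, 2), repeat=3)}

lhs = sum(g_rho[a, b] * g_h[b, c] * g_H[c, a] * I[a, b, c]
          for a, b, c in itertools.product((1, 2), repeat=3))
rhs = (c2**3 * (I[1, 1, 2] - I[2, 2, 1])
       + c2 * s2**2 * (I[1, 1, 1] - I[1, 2, 1] + I[1, 2, 2]
                       - I[2, 1, 1] + I[2, 1, 2] - I[2, 2, 2]))
assert math.isclose(lhs, rhs, rel_tol=1e-12)
print("diagram sum matches the closed form")
```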
We define the following variables \begin{equation} z \equiv {m_{H^0} \over T}, f_\rho \equiv {n_\rho \over T^3}, f_{H^0} \equiv {n_{H^0} \over T^3}, K \equiv {1.66 \sqrt{g_*} \over m_\text{Pl}}\ , \end{equation} {where $m_{H^0}$ is the mass of the heavy neutral Higgs, $n_{H^0}$ is its number density, $n_\rho$ is the number density of $\rho$, and $g_*$ counts the entropy degrees of freedom. Further, } $T$ is the temperature of the thermal bath, $H=KT^2$ is the Hubble parameter, and $ m_\text{Pl} = 1.22 \times 10^{19}$ GeV is the Planck mass. {Neglecting the back reaction}\footnote{{The neglect of the back reaction is justified since the number density of $\rho$ is small.} } we obtain \begin{equation} {df_\rho (z) \over dz} = {\langle \Gamma(H^0\to h^0+\rho) \rangle \over K {m}_{H^0}^2} z f_{H^0}(z)\ . \end{equation} The thermal average of the decay width is $\langle \Gamma\rangle = \Gamma {K_1(z)/ K_2(z)}$ where $K_{1,2}(z)$ are the modified Bessel functions, and $f_{H^0}(z) = z^2 K_2(z)/ (2 \pi^2)$ (assuming Maxwell-Boltzmann statistics), so we have \begin{equation} f_\rho(z) = {\Gamma(H^0\to h^0+\rho) \over K {m}_{H^0}^2} \int_{z'=z_0}^z dz' {K_1(z')} {z'^3 \over 2 \pi^2} \ , \end{equation} where $z_0 = {m}_{H^0}/T_\text{EW}$ with $T_\text{EW}$ being the electroweak phase transition temperature, and we neglected the temperature dependence of $K$. For the case where ${m}_{H^0}=500$ GeV and $T_\text{EW}=300$ GeV, the above integral gives an asymptotic value $\sim 0.2$ for $z >10$. Thus we obtain (for ${m}_{H^0}=500$ GeV) \begin{equation} f_\rho(z>10) \simeq {0.2 \times \Gamma(H^0\to h^0+\rho) \over K {m}_{H^0}^2} \ . \end{equation} The quantity $n_\rho/s$ is conserved after the $H^0$ disappears from the plasma, where $s$ is the entropy density.
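The quoted asymptotic value $\sim 0.2$ of the integral can be checked directly; the sketch below evaluates $K_1$ from its integral representation using only the standard library (grid sizes are our choices):

```python
import math

def k1(x, n=2000, tmax=12.0):
    # Modified Bessel K_1 via K_1(x) = \int_0^inf e^{-x cosh t} cosh t dt (Simpson's rule);
    # accurate for the arguments x >~ 1 needed here.
    h = tmax / n
    tot = 0.0
    for i in range(n + 1):
        t = i * h
        w = 1 if i in (0, n) else (4 if i % 2 else 2)
        tot += w * math.exp(-x * math.cosh(t)) * math.cosh(t)
    return tot * h / 3.0

def frho_integral(z0, zmax=20.0, n=400):
    # \int_{z0}^{zmax} K_1(z') z'^3 / (2 pi^2) dz', the integral controlling f_rho(z).
    h = (zmax - z0) / n
    tot = 0.0
    for i in range(n + 1):
        z = z0 + i * h
        w = 1 if i in (0, n) else (4 if i % 2 else 2)
        tot += w * k1(z) * z**3 / (2 * math.pi**2)
    return tot * h / 3.0

val = frho_integral(500.0 / 300.0)   # z0 = m_H0 / T_EW = 500/300
print(round(val, 3))                 # close to the quoted asymptotic value ~0.2
```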
The current dark matter number density is then given by \begin{equation} n_\rho^0 = (T_0)^3 {g_{*s}^0 \over g_{*s}^\text{freeze-out}} f_\rho(z=20) = (T_0)^3 {g_{*s}^0 \over g_{*s}^\text{freeze-out}} {0.2\times \Gamma(H^0\to h^0+\rho) \over K {m}_{H^0}^2}\ , \label{eq:rd} \end{equation} where $T_0=2.73$ K, $g_{*s}^0=3.91$ is the current number of entropy degrees of freedom, and $g_{*s}^\text{freeze-out}$ is the number of entropy degrees of freedom at freeze-out, so that $g_{*s}^\text{freeze-out} = g_{*s}(T\simeq m_{H^0}/20)$. The relic density due to $H^0$ decay is thus given by \begin{equation} \Omega_\rho h^2 = \frac{n_\rho^0 m_\rho}{\rho_c} h^2 \simeq {n_\rho^0 m_\rho \over 8 \times 10^{-47} ~\text{GeV}^{4}}\ . \end{equation} \subsection{{$\tilde{E}_1+\bar{\tilde{E}}_1 \to \rho + {h^0}$}} The dark matter can also be generated via scalar annihilations. We first discuss the $\tilde{E}$ scalar annihilation. Among the two possible final states $\rho + h^0$ and $\rho+ \rho$, the $\rho+ \rho$ final state is suppressed by a factor of $m_\rho^4$. Thus we compute the matrix element for {$\tilde{E}_1+ \bar{\tilde{E}}_1 \to \rho + {h^0}$} as shown in Fig.\ (\ref{fig:feynRD2}). Here we find \begin{equation} (v_\text{rel}\sigma) = {A \over s(s-m^2_{h^0})}\ , \end{equation} where \begin{equation} A\equiv \left[ m_\rho g_X Q_E {g_2 m_E \over 4 {M_W} \cos\beta} (-A_E \sin\alpha + \mu \cos\alpha) \sin(4\xi) \right]^2 {1\over 8\pi m_{\tilde{E}_1}^2 } \ . \end{equation} \begin{figure}[t] \begin{center} \includegraphics[width=0.4\columnwidth]{annihilations} \caption{Feynman {diagrams} for the process {$\tilde{E}_1 (\tilde{E}_1')+ \bar{\tilde{E}}_1 (\bar{\tilde{E}}_1') \to \rho + h^0$}. } \label{fig:feynRD2} \end{center} \end{figure} The DM number density due to the annihilation process is given via the following Boltzmann equation (neglecting the initial DM density) \begin{equation} \dot{n}_\rho + 3 H n_\rho = n_{\tilde{E}_1}^2 \langle v_{\text{rel}} \sigma \rangle\ .
\end{equation} Defining the variables $z \equiv {m_{\tilde{E}_1} / T}, f_{\tilde{E}_1} \equiv {n_{\tilde{E}_1} / T^3}$ (see \cite{Babu:2014pxa}) we obtain \begin{equation} {d f_\rho \over d z} = { m_{\tilde{E}_1} f_{\tilde{E}_1}^2 \over K z^2 } \langle v_{\text{rel}} \sigma \rangle\ . \end{equation} The thermal average of the annihilation cross section is given by \begin{equation} \langle v_{\text{rel}} \sigma \rangle = {1\over 16 T m_{\tilde{E}_1}^4 K_2^2(z)} \int_{4m_{\tilde{E}_1}^2}^{\infty} s\sqrt{s-4m_{\tilde{E}_1}^2} K_1(\sqrt{s}/T) (v_{\text{rel}} \sigma) ds \ . \end{equation} Thus we obtain \begin{align} f_\rho &= {A \over 64 \pi^4 m_{\tilde{E}_1}^4 K} \int_1^{20} dz\ z^3 \int_{4 m_{\tilde{E}_1}^2}^{\infty} ds\ {\sqrt{s-4m_{\tilde{E}_1}^2} \over s-m_h^2} K_1(\sqrt{s}z/m_{\tilde{E}_1}) \nonumber\\ &\equiv {A \over 64 \pi^4 m_{\tilde{E}_1}^4 K} F(m_{\tilde{E}_1})\ , \end{align} where we have used $f_{\tilde{E}_1} = z^2 K_2(z)/(2\pi^2)$ \footnote{For a scalar mass higher than the electroweak phase transition temperature $T_\text{EW}=300$ GeV, the integral should start at $z={m_{\tilde{E}_1}/ T_\text{EW}}$. In this case using $z=1$ for the lower end of the integral overestimates the contribution due to scalar annihilations. However, as shown later, the scalar annihilation is subdominant to the $H^0$ decay process and is in any case negligible for the parameter space of interest.}. The 2-D integral depends only on the scalar mass $m_{\tilde{E}_1}$, and we find that the relation $F(m_{\tilde{E}_1}) \simeq 0.115 m_{\tilde{E}_1}$ approximates the integral well over a large mass range, $m_{\tilde{E}_1}\in(10^2,10^7)$ GeV. The relic density calculation is similar to that for the decay process discussed in Section \ref{sec:rddecay}, once $f_\rho$ is known at ${\tilde E_1}$ freeze-out.
Thus we obtain the relic density due to annihilation \begin{align} (\Omega_\rho h^2)_\text{ann} &= { m_\rho \over 8 \times 10^{-47} ~\text{GeV}^{4}} (T_0)^3 {g_{*s}^0 \over g_{*s}^\text{freeze-out}} {A \over 64 \pi^4 m_{\tilde{E}_1}^4 K} 0.115 m_{\tilde{E}_1} \nonumber\\ &= 6 \times 10^{14} ~\text{GeV} {A \over m_{\tilde{E}_1}^3}\ . \end{align} Similarly, one can compute the $\rho$ relic density due to the $\tilde{E}_1^\prime$ annihilations. For the $\tilde{E}_1^\prime$ annihilations, the quantity ($A/m_{\tilde{E}_1}^3$) here has to be replaced by ($A^\prime / m_{\tilde{E}_1^\prime}^3$) where $A^\prime$ is given by \begin{equation} A^\prime \equiv \left[ m_\rho g_X Q_{E'} {g_2 m_{E'} \over 4 {M_W} \sin\beta } {(A_{E'} \cos\alpha - \mu \sin\alpha)} \sin(4\xi') \right]^2 {1\over 8\pi m_{\tilde{E}_1^\prime}^2 } \ . \end{equation} \section{Numerical Analysis\label{sec5}} To fit the experimental data of \cref{1} we need to compute the lifetime and the relic density of the $\rho$. This is done by carrying out a scan in the parameter space where we vary the soft masses $M_1, M_2$, the vectorlike masses $M_L, M_E$, the trilinear couplings $A_E, A_{E'}$ and the fermion masses $m_E, m_{E'}$ generated by the Yukawa couplings in the following ranges \begin{gather} (M_1, M_2, M_L, M_E, A_E, A_{E'}, \mu) \in (10^2,10^5)~{\rm GeV}, \\ {m_E \in (100,246)~{\rm GeV}, m_{E'} \in (100,300)~{\rm GeV} \ .} \end{gather} { where $m_E$ and $m_{E'}$ are defined so that \begin{gather} m_E\equiv\frac{1}{\sqrt 2} y v_d, ~~ m_{E'} \equiv \frac{1}{\sqrt 2} y' v_u \ . \end{gather} } {Further we require that $y<2$ and $y'<2$, which puts constraints on $\beta$ (and thus on $\tan\beta$) such that $\pi/4<\beta<\arccos( m_E / (\sqrt{2} v))$. We also} take $g_X=1$, ${m}_{H^0}=500$ GeV, ${m}_{h^0}=125$ GeV, ${m}_\rho=7$ keV, and $\alpha = \beta - \pi/2$ (see e.g.,\cite{Grinstein:2013npa}).
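The fit $F(m_{\tilde E_1})\simeq 0.115\, m_{\tilde E_1}$ used in \cref{sec4} can be reproduced numerically. The sketch below implements $K_1$ from its integral representation with a log-interpolated table (all grid sizes are our choices) and evaluates the 2-D integral for $m_{\tilde E_1}=1$ TeV:

```python
import math

def k1_exact(x, n=1600, tmax=12.0):
    # K_1(x) = \int_0^inf e^{-x cosh t} cosh t dt, Simpson's rule.
    h = tmax / n
    tot = 0.0
    for i in range(n + 1):
        t = i * h
        w = 1 if i in (0, n) else (4 if i % 2 else 2)
        tot += w * math.exp(-x * math.cosh(t)) * math.cosh(t)
    return tot * h / 3.0

# Tabulate log K_1 on [2, 60]; larger arguments contribute nothing here.
XMIN, XMAX, NPTS = 2.0, 60.0, 581
STEP = (XMAX - XMIN) / (NPTS - 1)
TABLE = [math.log(k1_exact(XMIN + i * STEP)) for i in range(NPTS)]

def k1(x):
    if x >= XMAX:
        return 0.0
    j = min(int((x - XMIN) / STEP), NPTS - 2)
    fr = (x - XMIN) / STEP - j
    return math.exp(TABLE[j] * (1 - fr) + TABLE[j + 1] * fr)

def F(m, mh=125.0, nz=190, nw=240):
    # F(m) = \int_1^20 dz z^3 \int_{4m^2}^inf ds sqrt(s-4m^2)/(s-mh^2) K_1(sqrt(s) z/m),
    # evaluated after substituting s = m^2 (4 + w^2).
    def inner(z):
        hw = 12.0 / nw
        tot = 0.0
        for i in range(nw + 1):
            w = i * hw
            wt = 1 if i in (0, nw) else (4 if i % 2 else 2)
            tot += wt * 2 * w * w / (4 + w * w - (mh / m) ** 2) \
                      * k1(z * math.sqrt(4 + w * w))
        return m * tot * hw / 3.0
    hz = 19.0 / nz
    tot = 0.0
    for i in range(nz + 1):
        z = 1.0 + i * hz
        wt = 1 if i in (0, nz) else (4 if i % 2 else 2)
        tot += wt * z ** 3 * inner(z)
    return tot * hz / 3.0

ratio = F(1000.0) / 1000.0
print(round(ratio, 3))   # should be close to the fitted slope 0.115
```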
We investigate the possibility that the dark matter constituted by $\rho$ contributes only a fraction of the dark matter relic density measured by WMAP, which is $\Omega_{\rm WMAP} h^2 \simeq 0.11$~\cite{Komatsu:2010fb}. Thus a desirable range is $R\equiv \Omega_\rho/\Omega_\text{DM} \in (0.01,0.1)$. This would leave the neutralino as the other major component, for which dark matter searches can be pursued in the direct and the indirect detection experiments. In the analysis of the relic density of $\rho$ we include all the allowed processes. As discussed in \cref{sec4} and in {\cref{sec8}} the following processes contribute to the relic density \begin{figure}[t] \begin{center} \includegraphics[width=0.6\columnwidth]{lifetime.pdf} \caption{\underline{Blue circles}: {90} models out of the $2\times 10^5$ random scan points in the parameter space satisfying the constraints $R\equiv \Omega_\rho / \Omega_\text{DM} \in (0.01,0.1)$ and $\tau_\rho/R \in 4 \times (10^{27}, 10^{28})$ s. \underline{Red points}: generic model points in the parameter space. } \label{fig:scan} \end{center} \end{figure} \begin{gather} H^0\to h^0+\rho\ , \\ \tilde E_1+\bar{ \tilde E}_1 \to \rho + h^0\ ,\\ \tilde E_1^{\prime}+ \bar{\tilde E}_1^{\prime } \to \rho + h^0\ . \end{gather} In addition there are other processes which are highly suppressed, such as decays and annihilations with $2\rho$ final states. We have computed the processes listed above. Of these, the one which produces the dominant component of the $\rho$ relic density is the process $H^0\to h^0 +\rho$ while the other processes are subdominant. Thus the annihilation processes involving {$\tilde{E}_1+\bar{\tilde{E}}_1$} and {$\tilde{E}_1^\prime+\bar{\tilde{E}}_1^{\prime }$} together contribute at most 1\% of the number density of $\rho$ in the parameter space of interest. An analysis of the $\rho$ lifetime versus $R$ is given in \cref{fig:scan}.
Here one finds that there exist parameter choices which fit the data on the 3.5 keV emission line with $\rho$ being a subdominant component of the total relic density. Thus with $\rho$ constituting (1-10)\% of the relic density, or even less, it is possible to satisfy the constraint of \cref{1}. In \cref{tab:models} we exhibit a set of parameter points satisfying the relic density constraint discussed above as well as the constraint of \cref{1}. {It is interesting to ask what impact this two-component dark matter model, with (1-10)\% of the dark matter constituted of $\rho$ and the rest made up of neutralinos, has on the direct detection of dark matter. The main impact is on the comparison of theory with experiment. Thus one compares the theoretical value $r \sigma_{\chi_1^0 p}^{SI}$, where $r= \Omega_{\chi_1^0} h_0^2/ \Omega_{DM} h_0^2$, with experiment. For $r=0.9-0.99$, the theoretical predictions are smaller by a factor of 0.9-0.99, which means that some of the parameter points that were eliminated by the current upper limits remain viable. Effectively, the inclusion of a second dark matter component enlarges the allowed {parameter} space of SUSY models and thus relaxes the constraints placed on these models by the direct detection of dark matter. } We note in passing that the Stueckelberg model will have a $Z'$ which, if stable, will contribute to the dark matter density like the $\rho$. { However, unlike the $\rho$, it cannot decay into two photons. Further, it has no couplings to the standard model particles. To circumvent this problem and allow the $Z'$ to decay, we can generate a small mixing between $U(1)_X$ and a} gauged $L_\mu-L_\tau$ (see, e.g., \cite{Feng:2012jn}), allowing the $Z'$ to have a tiny coupling to the muon and tau neutrinos, which permits the decays $Z'\to \nu_\mu \bar \nu_{\mu}, \nu_\tau\bar \nu_\tau$.
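The rescaling of direct-detection predictions described above amounts to a single multiplicative factor (sketch with illustrative names; $\Omega_\text{DM} h^2 = 0.11$ as in the text):

```python
def rescaled_si_cross_section(sigma_si, omega_chi_h2, omega_dm_h2=0.11):
    """Effective spin-independent cross section entering the comparison with
    direct-detection limits: r * sigma_si, where r is the fraction of the
    dark matter relic density carried by the neutralino."""
    r = omega_chi_h2 / omega_dm_h2
    return r * sigma_si

# With rho making up 10% of the dark matter (r = 0.9), the prediction
# compared against experimental upper limits shrinks by 10%.
```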
On the other hand, the $\rho$ cannot couple to fermions, so such decays are not allowed and the $\rho$ can only decay to photons. {We also note that after $Z'-Z''$ mixing, where $Z''$ is the gauge boson of $L_\mu-L_\tau$ which also acquires a mass via the Stueckelberg mechanism, the new leptons can annihilate to the standard model particles via the $Z', Z''$ poles. Specifically, we will have {annihilation} processes such as {$E'\bar E'\to Z''\to \nu_\mu \bar \nu_{\mu}, \nu_\tau\bar \nu_\tau, \mu\bar \mu, \tau\bar \tau$, which deplete the matter density of the new leptons by resonant annihilation if the mass of the $Z''$ is chosen to be in the vicinity of twice the mass of the new leptons. The analysis is similar to the one given in ~\cite{Feldman:2007wj}.} Further, we give soft masses to the $U(1)_X$ and $U(1)_{L_\mu- L_\tau}$ gauginos which are large enough that they can decay into the MSSM fields. Thus for $U(1)_X$ the coupling $\bar E' \lambda \tilde E'$ allows the gaugino $\lambda$ to decay into $E'$ and $\tilde E'$, which in turn annihilate to the MSSM particles as discussed above.}\\ \begin{table}[htbp] \begin{center} \begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|} \hline Model & $M_1$ & $M_2$ & $M_L$ & $M_E$ & $m_E$ & $m_{E'}$ & $A_E$ & $A_{E'}$ & $\mu$ & $\tan\beta$ \\\hline A& 4.21e3 & 2.55e3 & 3.05e2 & 2.02e2 & 1.73e2 & 1.59e2 & 4.32e4 & 1.06e2 & 6.06e4 & 1.47 \\\hline B& 6.24e3 & 3.53e3 & 2.43e3 & 1.72e2 & 1.98e2 & 1.57e2 & 6.44e4 & 9.81e3 & 7.73e4 & 1.00 \\\hline C& 3.17e4 & 5.07e3 & 8.43e2 & 3.72e3 & 1.03e2 & 2.79e2 & 1.01e4 & 8.88e4 & 7.07e2 & 1.61 \\\hline D& 6.21e3 & 4.65e3 & 1.48e3 & 4.24e3 & 2.19e2 & 2.33e2 & 3.59e3 & 8.54e4 & 1.15e4 & 1.06 \\\hline E& 4.88e4 & 2.65e3 & 1.05e3 & 1.41e3 & 2.06e2 & 1.41e2 & 1.15e2 & 4.62e2 & 6.45e4 & 1.17 \\\hline \end{tabular} \begin{tabular}{|c|c|c|c|c|c|c|c|c|} \hline Model & ${m}_{\tilde{E}_1}$ & ${m}_{\tilde{E}_2}$ & ${m}_{\tilde{E}_1'}$ & ${m}_{\tilde{E}_2'}$ & {$\Omega_\rho^{H^0}/\Omega_\text{DM}$} &
{$\Omega_\rho^{\tilde{E}_1}/\Omega_\text{DM}$} & {$\Omega_\rho^{\tilde{E}_1'}/\Omega_\text{DM}$} & $\tau_\rho/R$ (s) \\\hline A& 3.2e3 & 5.1e3 & 2.9e2 & 3.6e3 & 2.0e{-2} & 3.0e{-9} & 6.0e{-5} & 4.4e{27} \\\hline B& 6.2e3 & 6.8e3 & 2.1e3 & 5.1e3 & 1.2e{-2} & 3.6e{-6} & 4.8e{-5} & 6.1e{27} \\\hline C& 3.2e4 & 3.2e4 & 2.8e3 & 7.7e3 & 3.6e{-2} & 1.3e{-13} & 4.0e{-5} & 6.6e{27} \\\hline D& 6.4e3 & 7.5e3 & 3.6e3 & 7.1e3 & 1.4e{-2} & 9.5e{-9} & 2.0e{-5} & 1.5e{28} \\\hline E& 4.9e4 & 4.9e4 & 9.2e2 & 4.0e3 & 1.1e{-2} & 1.2e{-13} & 4.9e{-5} & 4.8e{27} \\\hline \end{tabular} \caption{Candidate models: A, B, C, D, and E. All the masses are in GeV. The quantities $\Omega_\rho^{H^0}$, $\Omega_\rho^{\tilde{E}_1}$, and $\Omega_\rho^{\tilde{E}_1'}$ are the contributions to the $\rho$ relic density due to the decay process $H^0 \to h^0 + \rho$, and the annihilation processes $\tilde{E}_1 +\bar{ \tilde{E}}_1 \to \rho + h^0 $ and $\tilde E'_1+ \bar{\tilde E}'_1 \to \rho + h^0$. $\Omega_\text{DM}$ is the total dark matter density which is taken to be $\Omega_\text{DM}h^2=0.11$ and $\tau_\rho$ is the dark matter lifetime. $R$ is the ratio {between} the {relic} density of the $\rho$ dark matter and the total dark matter. } \label{tab:models} \end{center} \end{table} \section{\label{sec6}Conclusion} In this work we have given an analysis of the 3.5 keV emission line emanating from galaxy clusters as seen by two experiments. A possible explanation of the monochromatic nature of the radiation is that it originates from the decay of a 7 keV particle. In this work we identify this particle as a scalar $\rho$ that appears in a supersymmetric $U(1)_X$ \st of models with the standard model gauge group. In such an extension, the $\rho$ couples only to scalar fields that carry a $U(1)_X$ quantum number. The proposed $U(1)_X$ extension contains vectorlike multiplets which are charged under $SU(2)_L\times U(1)_Y$ as well as under $U(1)_X$. 
Thus the scalars of the vectorlike multiplet couple to the $\rho$ as well as to the Higgs field and to the photon. These couplings allow the decay of the $\rho$ to two photons via triangle loops involving scalar particles. An important constraint on the lifetime of the $\rho$ arises from the fraction that the $\rho$ contributes to the dark matter relic density. The relic density of the $\rho$ arises only after electroweak symmetry breaking. Thus below the electroweak symmetry breaking scale the CP-even Higgses can decay to a $\rho$ through various decay channels such as $h^0\to \rho + \rho$, $H^0\to \rho + h^0$, $H^0\to \rho +\rho$. Additionally, annihilation processes can contribute to the relic density, such as {$\tilde E_1 + \bar{\tilde E}_1\to h^0 +\rho$} and {$\tilde E_1^\prime + \bar{\tilde E}_1^{\prime } \to h^0 +\rho$}. However, the dominant process that contributes to the relic density turns out to be $H^0\to \rho+ h^0$. A simultaneous analysis of the relic density and of the $\rho$ lifetime is needed to fit the data. In the analysis presented in this work we are able to fit the data with $\rho$ as a subdominant component of dark matter. Thus we have a multicomponent dark matter model where the emission line arises from a 7 keV scalar particle while the rest of the dark matter is constituted of neutralinos, which can be detected in direct detection experiments such as XENON1T~\cite{Aprile:2012zx}, SuperCDMS~\cite{Cabrera:2005zz} and LUX~\cite{Akerib:2013tjd}.
Finally we note that our mechanism for generating a 3.5 keV line as well as the implications of the model are very different from other models that have recently been proposed \cite{ Cline:2014eaa,Babu:2014pxa,Conlon:2014xsa,Robinson:2014bma,Lee:2014koa,Okada:2014zea,Modak:2014vva,Dudas:2014ixa,Queiroz:2014yna,Demidov:2014hka, Ko:2014xda, Nakayama:2014cza, Bomark:2014yja,Liew:2014gia,Allahverdi:2014dqa, Kolda:2014ppa,Bezrukov:2014nza,Cicoli:2014bfa, Baek:2014qwa,Choi:2014tva,Nakayama:2014ova,Frandsen:2014lfa,Kong:2014gea,Aisati:2014nda, Krall:2014dba,Abazajian:2014gza, Jaeckel:2014qea,Higaki:2014zua, Finkbeiner:2014sja,Ishida:2014dlp,Boyarsky:2014jta,Baek:2014poa,Nakayama:2014rra,Chakraborty:2014tma}. \\ \noindent {\em Acknowledgments}: This research was supported in part by the NSF Grant PHY-1314774, NSF of China (under Grants 11275101, 11135003), XSEDE-TG-PHY110015, and NERSC-DE-AC02-05CH1123.
Bangkok Toys International Co. Ltd. has officially launched a new toy and clothing accessory line featuring Sesame Street characters. Collect all of your favorite Sesame Street characters with new plush dolls, neck cushions, and much more! Grab your favorites, from Big Bird to Elmo, in shopping malls and select toy stores across the country!
package com.amazonaws.services.frauddetector.model; import java.io.Serializable; import javax.annotation.Generated; import com.amazonaws.AmazonWebServiceRequest; /** * * @see <a href="http://docs.aws.amazon.com/goto/WebAPI/frauddetector-2019-11-15/ListTagsForResource" target="_top">AWS * API Documentation</a> */ @Generated("com.amazonaws:aws-java-sdk-code-generator") public class ListTagsForResourceRequest extends com.amazonaws.AmazonWebServiceRequest implements Serializable, Cloneable { /** * <p> * The ARN that specifies the resource whose tags you want to list. * </p> */ private String resourceARN; /** * <p> * The next token from the previous results. * </p> */ private String nextToken; /** * <p> * The maximum number of objects to return for the request. * </p> */ private Integer maxResults; /** * <p> * The ARN that specifies the resource whose tags you want to list. * </p> * * @param resourceARN * The ARN that specifies the resource whose tags you want to list. */ public void setResourceARN(String resourceARN) { this.resourceARN = resourceARN; } /** * <p> * The ARN that specifies the resource whose tags you want to list. * </p> * * @return The ARN that specifies the resource whose tags you want to list. */ public String getResourceARN() { return this.resourceARN; } /** * <p> * The ARN that specifies the resource whose tags you want to list. * </p> * * @param resourceARN * The ARN that specifies the resource whose tags you want to list. * @return Returns a reference to this object so that method calls can be chained together. */ public ListTagsForResourceRequest withResourceARN(String resourceARN) { setResourceARN(resourceARN); return this; } /** * <p> * The next token from the previous results. * </p> * * @param nextToken * The next token from the previous results. */ public void setNextToken(String nextToken) { this.nextToken = nextToken; } /** * <p> * The next token from the previous results. * </p> * * @return The next token from the previous results. 
*/ public String getNextToken() { return this.nextToken; } /** * <p> * The next token from the previous results. * </p> * * @param nextToken * The next token from the previous results. * @return Returns a reference to this object so that method calls can be chained together. */ public ListTagsForResourceRequest withNextToken(String nextToken) { setNextToken(nextToken); return this; } /** * <p> * The maximum number of objects to return for the request. * </p> * * @param maxResults * The maximum number of objects to return for the request. */ public void setMaxResults(Integer maxResults) { this.maxResults = maxResults; } /** * <p> * The maximum number of objects to return for the request. * </p> * * @return The maximum number of objects to return for the request. */ public Integer getMaxResults() { return this.maxResults; } /** * <p> * The maximum number of objects to return for the request. * </p> * * @param maxResults * The maximum number of objects to return for the request. * @return Returns a reference to this object so that method calls can be chained together. */ public ListTagsForResourceRequest withMaxResults(Integer maxResults) { setMaxResults(maxResults); return this; } /** * Returns a string representation of this object. This is useful for testing and debugging. Sensitive data will be * redacted from this string using a placeholder value. * * @return A string representation of this object. 
* * @see java.lang.Object#toString() */ @Override public String toString() { StringBuilder sb = new StringBuilder(); sb.append("{"); if (getResourceARN() != null) sb.append("ResourceARN: ").append(getResourceARN()).append(","); if (getNextToken() != null) sb.append("NextToken: ").append(getNextToken()).append(","); if (getMaxResults() != null) sb.append("MaxResults: ").append(getMaxResults()); sb.append("}"); return sb.toString(); } @Override public boolean equals(Object obj) { if (this == obj) return true; if (obj == null) return false; if (obj instanceof ListTagsForResourceRequest == false) return false; ListTagsForResourceRequest other = (ListTagsForResourceRequest) obj; if (other.getResourceARN() == null ^ this.getResourceARN() == null) return false; if (other.getResourceARN() != null && other.getResourceARN().equals(this.getResourceARN()) == false) return false; if (other.getNextToken() == null ^ this.getNextToken() == null) return false; if (other.getNextToken() != null && other.getNextToken().equals(this.getNextToken()) == false) return false; if (other.getMaxResults() == null ^ this.getMaxResults() == null) return false; if (other.getMaxResults() != null && other.getMaxResults().equals(this.getMaxResults()) == false) return false; return true; } @Override public int hashCode() { final int prime = 31; int hashCode = 1; hashCode = prime * hashCode + ((getResourceARN() == null) ? 0 : getResourceARN().hashCode()); hashCode = prime * hashCode + ((getNextToken() == null) ? 0 : getNextToken().hashCode()); hashCode = prime * hashCode + ((getMaxResults() == null) ? 0 : getMaxResults().hashCode()); return hashCode; } @Override public ListTagsForResourceRequest clone() { return (ListTagsForResourceRequest) super.clone(); } }
Panel to probe Pune students' illegal booze party By IANS-CT / August 3, 2010 Mumbai, Aug 2 (IANS) The Symbiosis Institute of Management Studies has constituted a five-member inquiry committee to look into an illegal alcohol party that involved 489 students (254 boys and 235 girls), 80 of whom were detained by police, an official said Monday. 'We have constituted a team of five members including a lady member and a doctor to investigate the matter. They will be meeting for two days and will submit a report after that,' said the institute's Principal Director Vidya Yeravadekar. Pune rural police had raided the party at a farmhouse in Theur on the Pune-Sholapur road, around 25 km from their hostel in Pune. 'We received a complaint about loud music, following which we raided the farmhouse. The students neither had permission to play loud music nor did they have permission to consume alcohol,' said Ravindra Singh Pardesi, deputy superintendent of police, Pune (rural). 'We undertook a medical test of these students and found 80 of them had consumed alcohol. We released the others then and there itself,' Pardesi added. The police also seized liquor worth Rs.17,000 from the farmhouse. Yeravadekar said that various groups of students had asked permission for a 'late out' Sunday. 'It being a Sunday, they were granted permission to stay out of the hostel till 12.30 a.m. When they did not return by 1 a.m., the rector informed the director of the institute, who also stays in the same premises,' she said. 'By the time we made inquiries and reached the spot, police had already taken over,' she added.
Karlskrona is the capital of Karlskrona Municipality in the province of Blekinge and in Blekinge County (Blekinge län), Sweden. The town is also the seat of Blekinge County. It has 32,606 inhabitants (2005) and an area of 2,136 hectares. The town is known as Sweden's only Baroque city. The naval port of Karlskrona has been on the UNESCO World Heritage List since 1998. The city is described in chapter VIII of Nils Holgersson. History The city was founded in 1680 to serve as a base for the Swedish navy, as Sweden had become the dominant military power in the Baltic Sea. After the Danish-Swedish War, the Treaty of Brömsebro had awarded Sweden the southern provinces of Jämtland, Härjedalen, Idre and Särna, as well as the Baltic islands of Gotland and Ösel. With this territorial expansion to the south, the navy wanted to take advantage of the milder climate: the Swedish fleet based in Stockholm was icebound for long periods in winter, which was less of a problem in Karlskrona. The name of the city means "Karl's crown", in honour of King Charles XI of Sweden. The city grew quickly, and by 1750 it already had about 10,000 inhabitants, making it one of the largest cities in the country. The Karlskrona naval base was built at the same time as the city. The fleet had suffered heavy losses, which made the construction of new ships necessary. In 1711 the base, with its shipyards, was the largest industrial employer in Sweden, with 1,100 workers. The oldest dock, the Polhem dock, was hewn out of the rock and is still in use. It was named after Christopher Polhem, a famous Swedish scientist. To protect the harbour, Drottningskär Castle was built along an important shipping lane. On 27 October 1981 the Whiskey-class Soviet submarine S-363, called U137 by the Swedes, ran aground in the archipelago a few kilometres from Karlskrona. After ten days the submarine came free again.
The incident led to political tensions with the Soviet Union. Transport The E22, Riksväg 27, Riksväg 28 and Länsväg 122 run past the town. The city has a station on the Kristianstad - Karlskrona and Göteborg - Kalmar / Karlskrona railway lines. Notable people born in Karlskrona Harry Rosenswärd (1882-1955), sailor Catharina Palmér (1963), composer, organist, pianist and violinist Karl Petter Løken (1966), footballer Tommy Werner (1966), swimmer Cecilia Sandell (1968), footballer Julia Carlsson (1975), footballer Henrik Rydström (1976), footballer Mikael Antonsson (1981), footballer Amanda Kurtović (1991), handball player Zoe Aggeliki (1994), model and actress Twin towns Hillerød Horten Loviisa Ólafsfjörður Klaipėda (since 1989) Gdynia (since 1990) Baltiysk (since 1995) Rostock (since 2000) External link Website of the city
Cimstone Smyrna is a warm grey 'marbled' quartz worktop with light cream 'cat's paw' faux veining. A smart way to get the look of natural stone with the benefits of man-made composite. Decent value for this type of finish. WANT the look of marble without the fuss of such a "soft" natural stone worktop? Here's one solution. Spanish engineered-stone maestros Compac have reproduced the subtle grey veining of classic Italian marble in its Carrara white quartz. Because it's vacuum-formed into high-density slabs, this material has superior stain and scratch resistance – so performs perfectly in kitchen conditions. Unlike metamorphic marble – not to be confused with tough igneous granite worktops – Compac quartz is unaffected by household acids like lemon juice. Here it's been used in multiple thicknesses – including a kidney-shaped quartz breakfast bar – with black acrylic Parapan® doors to stunning effect. Read how to buy this marble-ous product here. Or see our Inspiration and Indie Designer pages.
package com.derek.imagetest.imagetest;

import android.content.Context;
import android.graphics.Bitmap;

import com.squareup.picasso.Transformation;

/**
 * A Picasso {@link Transformation} that crops a bitmap to the given rectangle.
 * When {@code isPercentage} is true, {@code x} and {@code y} are interpreted as
 * percentages of the bitmap's dimensions rather than absolute pixel offsets.
 */
public class CropTransformation implements Transformation {

    private boolean isPercentage;
    private final int width, height, x, y;
    private final String id;

    public CropTransformation(Context context, String id, int width, int height, int x, int y) {
        super();
        this.id = id;
        this.width = width;
        this.height = height;
        this.x = x;
        this.y = y;
    }

    public CropTransformation(Context context, String id, int width, int height, int x, int y,
                              boolean isPercentage) {
        this(context, id, width, height, x, y);
        this.isPercentage = isPercentage;
    }

    @Override
    public Bitmap transform(Bitmap bitmap) {
        int nX, nY;
        if (isPercentage) {
            // Interpret x/y as percentages of the bitmap size, clamped to valid pixels.
            nX = Math.max(0, Math.min(bitmap.getWidth() - 1, bitmap.getWidth() * x / 100));
            nY = Math.max(0, Math.min(bitmap.getHeight() - 1, bitmap.getHeight() * y / 100));
        } else {
            nX = Math.max(0, Math.min(bitmap.getWidth() - 1, x));
            nY = Math.max(0, Math.min(bitmap.getHeight() - 1, y));
        }
        // Clamp the crop size so the rectangle stays inside the bitmap.
        int nWidth = Math.max(1, Math.min(bitmap.getWidth() - nX - 1, width));
        int nHeight = Math.max(1, Math.min(bitmap.getHeight() - nY - 1, height));
        Bitmap b = Bitmap.createBitmap(bitmap, nX, nY, nWidth, nHeight);
        // Picasso requires the source bitmap to be recycled when a new one is returned.
        if (bitmap != b)
            bitmap.recycle();
        return b;
    }

    @Override
    public String key() {
        // Stable cache key; callers must supply a unique id per crop configuration.
        return "crop-" + id;
    }
}
Q: Update multiple object values inside array in MongoDB with different values without looping, but pure query I want to update this document in MongoDB [ { "persons" : [ { "name":"Javier", "occupation":"teacher" }, { "name":"Juliana", "occupation":"novelist" }, { "name":"Henry", "occupation":"plumber" } ] } ] let's say I have a global array variable that contains this var peopleGlobalVar = [ { "name":"Javier", "occupation":"gardener" }, { "name":"Henry", "occupation":"postman" } ] I want to update the document with the value in globalVar WITHOUT common javascript looping, so if the names match, the occupation value will change, and I'm looking for the "pure query" solution. the code I expect: collection.update( {}, { $set: { // code ... } }, { $arrayFilter: [ { "elem.name": { $in: $function : { body: function(updatedPeopleFromGlobalVariable){ let names = []; updatedPeopleFromGlobalVariable.forEach(function(person){ names.push(person.name); return names; }, args: peopleGlobalVar, lang:"js", } }, "multi": true } ] } ) What the output should be [ { "persons" : [ { "name":"Javier", "occupation":"gardener" }, { "name":"Juliana", "occupation":"novelist" }, { "name":"Henry", "occupation":"postman" } ] } ] Anyone have the solution to this?
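One possible approach (not part of the original post, and assuming MongoDB 4.2+, which accepts an aggregation pipeline as the update document): let `$map` rewrite each element of the stored array and `$mergeObjects` overlay the matching entry from the client-side array. `$mergeObjects` ignores the missing value that `$arrayElemAt` yields when no match exists, so unmatched elements pass through unchanged. The pipeline is shown here as Python/pymongo-style dicts; the same shape works verbatim in the mongo shell.

```python
# Sketch of a pipeline-style update (MongoDB >= 4.2 assumed; no looping in
# application code). people_global_var is injected as a literal array.
people_global_var = [
    {"name": "Javier", "occupation": "gardener"},
    {"name": "Henry", "occupation": "postman"},
]

update_pipeline = [
    {"$set": {"persons": {"$map": {
        "input": "$persons",          # walk the stored array
        "as": "p",
        "in": {"$mergeObjects": [
            "$$p",                    # keep existing fields
            {"$arrayElemAt": [        # overlay the matching update, if any
                {"$filter": {
                    "input": people_global_var,
                    "as": "u",
                    "cond": {"$eq": ["$$u.name", "$$p.name"]},
                }},
                0,
            ]},
        ]},
    }}}}
]

# Usage with pymongo (needs a live server, not executed here):
#   collection.update_many({}, update_pipeline)
```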
Shchyrets may refer to: Shchyrets, a river; Shchyrets, an urban-type settlement in Pustomyty Raion, Lviv Oblast; Shchyrets, a former village in the Rava district, destroyed during the creation of the Yavoriv military training ground.
Charity shares stay safe message with Kenilworth school pupils Laura Mitchell and Sam Pater (headteacher) at Clinton Primary School in Kenilworth Karen Almond Children at a Kenilworth primary school have been taught how to keep themselves safe from abuse and neglect – and the NSPCC is looking for volunteers to help deliver this vital service at schools across Warwickshire. The NSPCC programme, called Speak Out. Stay Safe has reached more than 10,000 children in Warwickshire so far this school year, new figures reveal. The charity is looking for volunteers to help run the service – while also urging schools to sign up for a visit. Clinton Primary School in Kenilworth was visited by the NSPCC's Schools Service with pupils from reception upwards attending a special assembly delivered by NSPCC volunteer Laura Mitchell, who is also a safeguarding governor at the school. During the 2018/19 autumn and spring terms, volunteers from the child protection charity reached 10,359 children in Warwickshire with Speak Out. Stay Safe, which is designed to give them the knowledge they need to stay safe from abuse and who they can turn to for help. But the NSPCC wants to ensure every primary school pupil in Warwickshire has the chance to hear these vital messages – so schools across the county are being urged to sign up. The service also needs more volunteers to take part in Speak Out. Stay Safe and help give a generation of children the understanding they need to stay safe from abuse and neglect. Pupils are taught by specially trained volunteers, along with speech bubble mascot Buddy, to speak out if they are worried, either to a trusted adult or Childline. Through age-appropriate, interactive assemblies and workshops children are empowered to recognise the different types of abuse, and understand how to protect themselves. 
Laura Mitchell, whose children attend Clinton Primary School, said: "We've had a fantastic response from schools in Warwickshire but we want to ensure we get these vital messages to as many pupils as possible. "We can only do that by recruiting volunteers and visiting even more primary schools – so we would urge any that haven't been visited by us to get in touch. "The service is completely free and helps children understand some really important issues in an age-appropriate way. "Children have spoken out about abuse as a result of our volunteers delivering this programme in primary schools, so we know how vitally important the information we deliver is. "Our volunteers give up their time to make a difference in the lives of children, and it's very rewarding knowing that you have helped a child. All our volunteers are given extensive training to help them deliver the assemblies in the most child friendly way." The NSPCC's Speak Out. Stay Safe programme is available to all primary schools in the UK. In 2017/18 the schools programme visited over 8,000 schools and spoke to 1.8 million children. To find out more, and request a visit for your school, go to www.nspcc.org.uk/speakout. Or you can contact Natasha Turberville on natasha.turberville@nspccorg.uk or 07500785029. To volunteer, or to find out more, visit www.nspcc.org.uk/what-you-can-do or contact Natasha on the details above.
Why is my Chihuahua such a finicky eater? It's because we can be. Why eat the same old thing every day when, with a little effort, you can convince your "caretaker" that you need variety. If you ate your dog food without any problems you wouldn't be getting any tasty add-ins. This works very well with my daddy. I was eating the dry dog food. Mostly. Sometimes I just wasn't that hungry. But I can tell that worried him. So he added some other dog food to my dry. Then when I grew tired of that, he added some dehydrated chicken pieces. Then occasionally dried beef pieces. All healthy dog treats. Eventually I got canned food mixed with my dry. Now on top of that I get rice added in, carrots, apples, cauliflower, sauce, pieces of treats, even some cereal. My main diet is human grade Merrick Turducken. There's duck, chicken, and turkey. Who can ask for a better diet. But if I can get more, I say why not. Some doctors say you should never give your dog people food. Dog food is made for a balanced diet. My daddy agrees, which is why he never gives me meat. It's also why he adds in only a small bit of other doggie safe veggies and treats. But I fear he's growing tired of that and might just make me eat only dog food. Pity. Guess I'll have to work harder at it.
#include "dm.h" #include "dm-uevent.h" #include <linux/init.h> #include <linux/module.h> #include <linux/mutex.h> #include <linux/moduleparam.h> #include <linux/blkpg.h> #include <linux/bio.h> #include <linux/mempool.h> #include <linux/slab.h> #include <linux/idr.h> #include <linux/hdreg.h> #include <linux/delay.h> #include <linux/wait.h> #include <linux/kthread.h> #include <linux/ktime.h> #include <linux/elevator.h> /* for rq_end_sector() */ #include <linux/blk-mq.h> #include <linux/pr.h> #include <trace/events/block.h> #define DM_MSG_PREFIX "core" #ifdef CONFIG_PRINTK /* * ratelimit state to be used in DMXXX_LIMIT(). */ DEFINE_RATELIMIT_STATE(dm_ratelimit_state, DEFAULT_RATELIMIT_INTERVAL, DEFAULT_RATELIMIT_BURST); EXPORT_SYMBOL(dm_ratelimit_state); #endif /* * Cookies are numeric values sent with CHANGE and REMOVE * uevents while resuming, removing or renaming the device. */ #define DM_COOKIE_ENV_VAR_NAME "DM_COOKIE" #define DM_COOKIE_LENGTH 24 static const char *_name = DM_NAME; static unsigned int major = 0; static unsigned int _major = 0; static DEFINE_IDR(_minor_idr); static DEFINE_SPINLOCK(_minor_lock); static void do_deferred_remove(struct work_struct *w); static DECLARE_WORK(deferred_remove_work, do_deferred_remove); static struct workqueue_struct *deferred_remove_workqueue; /* * For bio-based dm. * One of these is allocated per bio. */ struct dm_io { struct mapped_device *md; int error; atomic_t io_count; struct bio *bio; unsigned long start_time; spinlock_t endio_lock; struct dm_stats_aux stats_aux; }; /* * For request-based dm. * One of these is allocated per request. */ struct dm_rq_target_io { struct mapped_device *md; struct dm_target *ti; struct request *orig, *clone; struct kthread_work work; int error; union map_info info; struct dm_stats_aux stats_aux; unsigned long duration_jiffies; unsigned n_sectors; }; /* * For request-based dm - the bio clones we allocate are embedded in these * structs. 
* * We allocate these with bio_alloc_bioset, using the front_pad parameter when * the bioset is created - this means the bio has to come at the end of the * struct. */ struct dm_rq_clone_bio_info { struct bio *orig; struct dm_rq_target_io *tio; struct bio clone; }; #define MINOR_ALLOCED ((void *)-1) /* * Bits for the md->flags field. */ #define DMF_BLOCK_IO_FOR_SUSPEND 0 #define DMF_SUSPENDED 1 #define DMF_FROZEN 2 #define DMF_FREEING 3 #define DMF_DELETING 4 #define DMF_NOFLUSH_SUSPENDING 5 #define DMF_DEFERRED_REMOVE 6 #define DMF_SUSPENDED_INTERNALLY 7 /* * Work processed by per-device workqueue. */ struct mapped_device { struct srcu_struct io_barrier; struct mutex suspend_lock; /* * The current mapping (struct dm_table *). * Use dm_get_live_table{_fast} or take suspend_lock for * dereference. */ void __rcu *map; struct list_head table_devices; struct mutex table_devices_lock; unsigned long flags; struct request_queue *queue; int numa_node_id; unsigned type; /* Protect queue and type against concurrent access. */ struct mutex type_lock; atomic_t holders; atomic_t open_count; struct dm_target *immutable_target; struct target_type *immutable_target_type; struct gendisk *disk; char name[16]; void *interface_ptr; /* * A list of ios that arrived while we were suspended. */ atomic_t pending[2]; wait_queue_head_t wait; struct work_struct work; spinlock_t deferred_lock; struct bio_list deferred; /* * Event handling. */ wait_queue_head_t eventq; atomic_t event_nr; atomic_t uevent_seq; struct list_head uevent_list; spinlock_t uevent_lock; /* Protect access to uevent_list */ /* the number of internal suspends */ unsigned internal_suspend_count; /* * Processing queue (flush) */ struct workqueue_struct *wq; /* * io objects are allocated from here. 
*/ mempool_t *io_pool; mempool_t *rq_pool; struct bio_set *bs; /* * freeze/thaw support require holding onto a super block */ struct super_block *frozen_sb; /* forced geometry settings */ struct hd_geometry geometry; struct block_device *bdev; /* kobject and completion */ struct dm_kobject_holder kobj_holder; /* zero-length flush that will be cloned and submitted to targets */ struct bio flush_bio; struct dm_stats stats; struct kthread_worker kworker; struct task_struct *kworker_task; /* for request-based merge heuristic in dm_request_fn() */ unsigned seq_rq_merge_deadline_usecs; int last_rq_rw; sector_t last_rq_pos; ktime_t last_rq_start_time; /* for blk-mq request-based DM support */ struct blk_mq_tag_set *tag_set; bool use_blk_mq:1; bool init_tio_pdu:1; }; #ifdef CONFIG_DM_MQ_DEFAULT static bool use_blk_mq = true; #else static bool use_blk_mq = false; #endif #define DM_MQ_NR_HW_QUEUES 1 #define DM_MQ_QUEUE_DEPTH 2048 #define DM_NUMA_NODE NUMA_NO_NODE static unsigned dm_mq_nr_hw_queues = DM_MQ_NR_HW_QUEUES; static unsigned dm_mq_queue_depth = DM_MQ_QUEUE_DEPTH; static int dm_numa_node = DM_NUMA_NODE; bool dm_use_blk_mq(struct mapped_device *md) { return md->use_blk_mq; } EXPORT_SYMBOL_GPL(dm_use_blk_mq); /* * For mempools pre-allocation at the table loading time. */ struct dm_md_mempools { mempool_t *io_pool; mempool_t *rq_pool; struct bio_set *bs; }; struct table_device { struct list_head list; atomic_t count; struct dm_dev dm_dev; }; #define RESERVED_BIO_BASED_IOS 16 #define RESERVED_REQUEST_BASED_IOS 256 #define RESERVED_MAX_IOS 1024 static struct kmem_cache *_io_cache; static struct kmem_cache *_rq_tio_cache; static struct kmem_cache *_rq_cache; /* * Bio-based DM's mempools' reserved IOs set by the user. */ static unsigned reserved_bio_based_ios = RESERVED_BIO_BASED_IOS; /* * Request-based DM's mempools' reserved IOs set by the user. 
*/ static unsigned reserved_rq_based_ios = RESERVED_REQUEST_BASED_IOS; static int __dm_get_module_param_int(int *module_param, int min, int max) { int param = ACCESS_ONCE(*module_param); int modified_param = 0; bool modified = true; if (param < min) modified_param = min; else if (param > max) modified_param = max; else modified = false; if (modified) { (void)cmpxchg(module_param, param, modified_param); param = modified_param; } return param; } static unsigned __dm_get_module_param(unsigned *module_param, unsigned def, unsigned max) { unsigned param = ACCESS_ONCE(*module_param); unsigned modified_param = 0; if (!param) modified_param = def; else if (param > max) modified_param = max; if (modified_param) { (void)cmpxchg(module_param, param, modified_param); param = modified_param; } return param; } unsigned dm_get_reserved_bio_based_ios(void) { return __dm_get_module_param(&reserved_bio_based_ios, RESERVED_BIO_BASED_IOS, RESERVED_MAX_IOS); } EXPORT_SYMBOL_GPL(dm_get_reserved_bio_based_ios); unsigned dm_get_reserved_rq_based_ios(void) { return __dm_get_module_param(&reserved_rq_based_ios, RESERVED_REQUEST_BASED_IOS, RESERVED_MAX_IOS); } EXPORT_SYMBOL_GPL(dm_get_reserved_rq_based_ios); static unsigned dm_get_blk_mq_nr_hw_queues(void) { return __dm_get_module_param(&dm_mq_nr_hw_queues, 1, 32); } static unsigned dm_get_blk_mq_queue_depth(void) { return __dm_get_module_param(&dm_mq_queue_depth, DM_MQ_QUEUE_DEPTH, BLK_MQ_MAX_DEPTH); } static unsigned dm_get_numa_node(void) { return __dm_get_module_param_int(&dm_numa_node, DM_NUMA_NODE, num_online_nodes() - 1); } static int __init local_init(void) { int r = -ENOMEM; /* allocate a slab for the dm_ios */ _io_cache = KMEM_CACHE(dm_io, 0); if (!_io_cache) return r; _rq_tio_cache = KMEM_CACHE(dm_rq_target_io, 0); if (!_rq_tio_cache) goto out_free_io_cache; _rq_cache = kmem_cache_create("dm_old_clone_request", sizeof(struct request), __alignof__(struct request), 0, NULL); if (!_rq_cache) goto out_free_rq_tio_cache; r = 
dm_uevent_init(); if (r) goto out_free_rq_cache; deferred_remove_workqueue = alloc_workqueue("kdmremove", WQ_UNBOUND, 1); if (!deferred_remove_workqueue) { r = -ENOMEM; goto out_uevent_exit; } _major = major; r = register_blkdev(_major, _name); if (r < 0) goto out_free_workqueue; if (!_major) _major = r; return 0; out_free_workqueue: destroy_workqueue(deferred_remove_workqueue); out_uevent_exit: dm_uevent_exit(); out_free_rq_cache: kmem_cache_destroy(_rq_cache); out_free_rq_tio_cache: kmem_cache_destroy(_rq_tio_cache); out_free_io_cache: kmem_cache_destroy(_io_cache); return r; } static void local_exit(void) { flush_scheduled_work(); destroy_workqueue(deferred_remove_workqueue); kmem_cache_destroy(_rq_cache); kmem_cache_destroy(_rq_tio_cache); kmem_cache_destroy(_io_cache); unregister_blkdev(_major, _name); dm_uevent_exit(); _major = 0; DMINFO("cleaned up"); } static int (*_inits[])(void) __initdata = { local_init, dm_target_init, dm_linear_init, dm_stripe_init, dm_io_init, dm_kcopyd_init, dm_interface_init, dm_statistics_init, }; static void (*_exits[])(void) = { local_exit, dm_target_exit, dm_linear_exit, dm_stripe_exit, dm_io_exit, dm_kcopyd_exit, dm_interface_exit, dm_statistics_exit, }; static int __init dm_init(void) { const int count = ARRAY_SIZE(_inits); int r, i; for (i = 0; i < count; i++) { r = _inits[i](); if (r) goto bad; } return 0; bad: while (i--) _exits[i](); return r; } static void __exit dm_exit(void) { int i = ARRAY_SIZE(_exits); while (i--) _exits[i](); /* * Should be empty by this point. 
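The dm_init()/dm_exit() pattern above can be sketched in userspace as an array of init steps that are unwound in reverse on failure (hypothetical numbered subsystems stand in for dm_target_init() and friends; `fail_at` injects a failure, -1 for none):

```c
#include <assert.h>
#include <string.h>

// Sketch of the dm_init()/dm_exit() rollback: run init steps in order
// and, on failure, unwind only the steps that already succeeded, in
// reverse. fail_at is a hypothetical fault-injection index.
enum { NSUBSYS = 3 };
int inited[NSUBSYS];

int subsys_init(int i, int fail_at)
{
	if (i == fail_at)
		return -1;
	inited[i] = 1;
	return 0;
}

void subsys_exit(int i)
{
	inited[i] = 0;
}

int init_all(int fail_at)
{
	int i, r;

	memset(inited, 0, sizeof(inited));
	for (i = 0; i < NSUBSYS; i++) {
		r = subsys_init(i, fail_at);
		if (r)
			goto bad;
	}
	return 0;
bad:
	while (i--)
		subsys_exit(i);
	return r;
}
```

As in dm_init(), the `while (i--)` loop only tears down entries before the failing index, leaving the failed step itself untouched.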
*/ idr_destroy(&_minor_idr); } /* * Block device functions */ int dm_deleting_md(struct mapped_device *md) { return test_bit(DMF_DELETING, &md->flags); } static int dm_blk_open(struct block_device *bdev, fmode_t mode) { struct mapped_device *md; spin_lock(&_minor_lock); md = bdev->bd_disk->private_data; if (!md) goto out; if (test_bit(DMF_FREEING, &md->flags) || dm_deleting_md(md)) { md = NULL; goto out; } dm_get(md); atomic_inc(&md->open_count); out: spin_unlock(&_minor_lock); return md ? 0 : -ENXIO; } static void dm_blk_close(struct gendisk *disk, fmode_t mode) { struct mapped_device *md; spin_lock(&_minor_lock); md = disk->private_data; if (WARN_ON(!md)) goto out; if (atomic_dec_and_test(&md->open_count) && (test_bit(DMF_DEFERRED_REMOVE, &md->flags))) queue_work(deferred_remove_workqueue, &deferred_remove_work); dm_put(md); out: spin_unlock(&_minor_lock); } int dm_open_count(struct mapped_device *md) { return atomic_read(&md->open_count); } /* * Guarantees nothing is using the device before it's deleted. 
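The decision logic of dm_lock_for_deletion() below can be sketched with a hypothetical plain struct in place of md->flags and _minor_lock (return codes stand in for -EBUSY and -EEXIST):

```c
#include <assert.h>
#include <stdbool.h>

// Sketch of dm_lock_for_deletion(): an open device is busy (optionally
// marked for deferred removal); only_deferred fails unless a deferred
// removal is already pending; otherwise the device is marked DELETING.
struct md_state {
	int open_count;
	bool deferred_remove;
	bool deleting;
};

// Returns 0 on success, -1 for busy (-EBUSY), -2 for -EEXIST.
int lock_for_deletion(struct md_state *md, bool mark_deferred, bool only_deferred)
{
	if (md->open_count) {
		if (mark_deferred)
			md->deferred_remove = true;
		return -1;
	}
	if (only_deferred && !md->deferred_remove)
		return -2;
	md->deleting = true;
	return 0;
}
```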
*/ int dm_lock_for_deletion(struct mapped_device *md, bool mark_deferred, bool only_deferred) { int r = 0; spin_lock(&_minor_lock); if (dm_open_count(md)) { r = -EBUSY; if (mark_deferred) set_bit(DMF_DEFERRED_REMOVE, &md->flags); } else if (only_deferred && !test_bit(DMF_DEFERRED_REMOVE, &md->flags)) r = -EEXIST; else set_bit(DMF_DELETING, &md->flags); spin_unlock(&_minor_lock); return r; } int dm_cancel_deferred_remove(struct mapped_device *md) { int r = 0; spin_lock(&_minor_lock); if (test_bit(DMF_DELETING, &md->flags)) r = -EBUSY; else clear_bit(DMF_DEFERRED_REMOVE, &md->flags); spin_unlock(&_minor_lock); return r; } static void do_deferred_remove(struct work_struct *w) { dm_deferred_remove(); } sector_t dm_get_size(struct mapped_device *md) { return get_capacity(md->disk); } struct request_queue *dm_get_md_queue(struct mapped_device *md) { return md->queue; } struct dm_stats *dm_get_stats(struct mapped_device *md) { return &md->stats; } static int dm_blk_getgeo(struct block_device *bdev, struct hd_geometry *geo) { struct mapped_device *md = bdev->bd_disk->private_data; return dm_get_geometry(md, geo); } static int dm_grab_bdev_for_ioctl(struct mapped_device *md, struct block_device **bdev, fmode_t *mode) { struct dm_target *tgt; struct dm_table *map; int srcu_idx, r; retry: r = -ENOTTY; map = dm_get_live_table(md, &srcu_idx); if (!map || !dm_table_get_size(map)) goto out; /* We only support devices that have a single target */ if (dm_table_get_num_targets(map) != 1) goto out; tgt = dm_table_get_target(map, 0); if (!tgt->type->prepare_ioctl) goto out; if (dm_suspended_md(md)) { r = -EAGAIN; goto out; } r = tgt->type->prepare_ioctl(tgt, bdev, mode); if (r < 0) goto out; bdgrab(*bdev); dm_put_live_table(md, srcu_idx); return r; out: dm_put_live_table(md, srcu_idx); if (r == -ENOTCONN && !fatal_signal_pending(current)) { msleep(10); goto retry; } return r; } static int dm_blk_ioctl(struct block_device *bdev, fmode_t mode, unsigned int cmd, unsigned long arg) { 
struct mapped_device *md = bdev->bd_disk->private_data; int r; r = dm_grab_bdev_for_ioctl(md, &bdev, &mode); if (r < 0) return r; if (r > 0) { /* * Target determined this ioctl is being issued against * a logical partition of the parent bdev; so extra * validation is needed. */ r = scsi_verify_blk_ioctl(NULL, cmd); if (r) goto out; } r = __blkdev_driver_ioctl(bdev, mode, cmd, arg); out: bdput(bdev); return r; } static struct dm_io *alloc_io(struct mapped_device *md) { return mempool_alloc(md->io_pool, GFP_NOIO); } static void free_io(struct mapped_device *md, struct dm_io *io) { mempool_free(io, md->io_pool); } static void free_tio(struct dm_target_io *tio) { bio_put(&tio->clone); } static struct dm_rq_target_io *alloc_old_rq_tio(struct mapped_device *md, gfp_t gfp_mask) { return mempool_alloc(md->io_pool, gfp_mask); } static void free_old_rq_tio(struct dm_rq_target_io *tio) { mempool_free(tio, tio->md->io_pool); } static struct request *alloc_old_clone_request(struct mapped_device *md, gfp_t gfp_mask) { return mempool_alloc(md->rq_pool, gfp_mask); } static void free_old_clone_request(struct mapped_device *md, struct request *rq) { mempool_free(rq, md->rq_pool); } static int md_in_flight(struct mapped_device *md) { return atomic_read(&md->pending[READ]) + atomic_read(&md->pending[WRITE]); } static void start_io_acct(struct dm_io *io) { struct mapped_device *md = io->md; struct bio *bio = io->bio; int cpu; int rw = bio_data_dir(bio); io->start_time = jiffies; cpu = part_stat_lock(); part_round_stats(cpu, &dm_disk(md)->part0); part_stat_unlock(); atomic_set(&dm_disk(md)->part0.in_flight[rw], atomic_inc_return(&md->pending[rw])); if (unlikely(dm_stats_used(&md->stats))) dm_stats_account_io(&md->stats, bio->bi_rw, bio->bi_iter.bi_sector, bio_sectors(bio), false, 0, &io->stats_aux); } static void end_io_acct(struct dm_io *io) { struct mapped_device *md = io->md; struct bio *bio = io->bio; unsigned long duration = jiffies - io->start_time; int pending; int rw = 
bio_data_dir(bio); generic_end_io_acct(rw, &dm_disk(md)->part0, io->start_time); if (unlikely(dm_stats_used(&md->stats))) dm_stats_account_io(&md->stats, bio->bi_rw, bio->bi_iter.bi_sector, bio_sectors(bio), true, duration, &io->stats_aux); /* * After this is decremented the bio must not be touched if it is * a flush. */ pending = atomic_dec_return(&md->pending[rw]); atomic_set(&dm_disk(md)->part0.in_flight[rw], pending); pending += atomic_read(&md->pending[rw^0x1]); /* nudge anyone waiting on suspend queue */ if (!pending) wake_up(&md->wait); } /* * Add the bio to the list of deferred io. */ static void queue_io(struct mapped_device *md, struct bio *bio) { unsigned long flags; spin_lock_irqsave(&md->deferred_lock, flags); bio_list_add(&md->deferred, bio); spin_unlock_irqrestore(&md->deferred_lock, flags); queue_work(md->wq, &md->work); } /* * Everyone (including functions in this file), should use this * function to access the md->map field, and make sure they call * dm_put_live_table() when finished. */ struct dm_table *dm_get_live_table(struct mapped_device *md, int *srcu_idx) __acquires(md->io_barrier) { *srcu_idx = srcu_read_lock(&md->io_barrier); return srcu_dereference(md->map, &md->io_barrier); } void dm_put_live_table(struct mapped_device *md, int srcu_idx) __releases(md->io_barrier) { srcu_read_unlock(&md->io_barrier, srcu_idx); } void dm_sync_table(struct mapped_device *md) { synchronize_srcu(&md->io_barrier); synchronize_rcu_expedited(); } /* * A fast alternative to dm_get_live_table/dm_put_live_table. * The caller must not block between these two functions. */ static struct dm_table *dm_get_live_table_fast(struct mapped_device *md) __acquires(RCU) { rcu_read_lock(); return rcu_dereference(md->map); } static void dm_put_live_table_fast(struct mapped_device *md) __releases(RCU) { rcu_read_unlock(); } /* * Open a table device so we can use it as a map destination. 
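The table-device cache below (dm_get_table_device/dm_put_table_device) shares one open block device per (dev, mode) pair through a reference count. A hypothetical userspace sketch with a fixed array standing in for the kernel's linked list and mutex:

```c
#include <assert.h>

// Sketch of the find-or-create refcounting in dm_get_table_device():
// reuse an existing (dev, mode) entry if present, otherwise open a new
// one; the entry is only closed when the last reference is dropped.
struct td_entry { int dev; int mode; int count; };

static struct td_entry td_table[8];

// Returns the refcount after acquiring, or -1 if the table is full
// (the kernel allocates a new node instead).
int td_get(int dev, int mode)
{
	for (int i = 0; i < 8; i++)
		if (td_table[i].count && td_table[i].dev == dev &&
		    td_table[i].mode == mode)
			return ++td_table[i].count;
	for (int i = 0; i < 8; i++)
		if (!td_table[i].count) {
			td_table[i].dev = dev;
			td_table[i].mode = mode;
			td_table[i].count = 1;
			return 1;
		}
	return -1;
}

// Returns the refcount after the drop; 0 means the device is closed.
int td_put(int dev, int mode)
{
	for (int i = 0; i < 8; i++)
		if (td_table[i].count && td_table[i].dev == dev &&
		    td_table[i].mode == mode)
			return --td_table[i].count;
	return -1;
}
```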
*/ static int open_table_device(struct table_device *td, dev_t dev, struct mapped_device *md) { static char *_claim_ptr = "I belong to device-mapper"; struct block_device *bdev; int r; BUG_ON(td->dm_dev.bdev); bdev = blkdev_get_by_dev(dev, td->dm_dev.mode | FMODE_EXCL, _claim_ptr); if (IS_ERR(bdev)) return PTR_ERR(bdev); r = bd_link_disk_holder(bdev, dm_disk(md)); if (r) { blkdev_put(bdev, td->dm_dev.mode | FMODE_EXCL); return r; } td->dm_dev.bdev = bdev; return 0; } /* * Close a table device that we've been using. */ static void close_table_device(struct table_device *td, struct mapped_device *md) { if (!td->dm_dev.bdev) return; bd_unlink_disk_holder(td->dm_dev.bdev, dm_disk(md)); blkdev_put(td->dm_dev.bdev, td->dm_dev.mode | FMODE_EXCL); td->dm_dev.bdev = NULL; } static struct table_device *find_table_device(struct list_head *l, dev_t dev, fmode_t mode) { struct table_device *td; list_for_each_entry(td, l, list) if (td->dm_dev.bdev->bd_dev == dev && td->dm_dev.mode == mode) return td; return NULL; } int dm_get_table_device(struct mapped_device *md, dev_t dev, fmode_t mode, struct dm_dev **result) { int r; struct table_device *td; mutex_lock(&md->table_devices_lock); td = find_table_device(&md->table_devices, dev, mode); if (!td) { td = kmalloc_node(sizeof(*td), GFP_KERNEL, md->numa_node_id); if (!td) { mutex_unlock(&md->table_devices_lock); return -ENOMEM; } td->dm_dev.mode = mode; td->dm_dev.bdev = NULL; if ((r = open_table_device(td, dev, md))) { mutex_unlock(&md->table_devices_lock); kfree(td); return r; } format_dev_t(td->dm_dev.name, dev); atomic_set(&td->count, 0); list_add(&td->list, &md->table_devices); } atomic_inc(&td->count); mutex_unlock(&md->table_devices_lock); *result = &td->dm_dev; return 0; } EXPORT_SYMBOL_GPL(dm_get_table_device); void dm_put_table_device(struct mapped_device *md, struct dm_dev *d) { struct table_device *td = container_of(d, struct table_device, dm_dev); mutex_lock(&md->table_devices_lock); if (atomic_dec_and_test(&td->count)) { 
		close_table_device(td, md);
		list_del(&td->list);
		kfree(td);
	}
	mutex_unlock(&md->table_devices_lock);
}
EXPORT_SYMBOL(dm_put_table_device);

static void free_table_devices(struct list_head *devices)
{
	struct list_head *tmp, *next;

	list_for_each_safe(tmp, next, devices) {
		struct table_device *td = list_entry(tmp, struct table_device, list);

		DMWARN("dm_destroy: %s still exists with %d references",
		       td->dm_dev.name, atomic_read(&td->count));
		kfree(td);
	}
}

/*
 * Get the geometry associated with a dm device
 */
int dm_get_geometry(struct mapped_device *md, struct hd_geometry *geo)
{
	*geo = md->geometry;

	return 0;
}

/*
 * Set the geometry of a device.
 */
int dm_set_geometry(struct mapped_device *md, struct hd_geometry *geo)
{
	sector_t sz = (sector_t)geo->cylinders * geo->heads * geo->sectors;

	if (geo->start > sz) {
		DMWARN("Start sector is beyond the geometry limits.");
		return -EINVAL;
	}

	md->geometry = *geo;

	return 0;
}

/*-----------------------------------------------------------------
 * CRUD START:
 *   A more elegant solution is in the works that uses the queue
 *   merge fn, unfortunately there are a couple of changes to
 *   the block layer that I want to make for this.  So in the
 *   interests of getting something for people to use I give
 *   you this clearly demarcated crap.
 *---------------------------------------------------------------*/

static int __noflush_suspending(struct mapped_device *md)
{
	return test_bit(DMF_NOFLUSH_SUSPENDING, &md->flags);
}

/*
 * Decrements the number of outstanding ios that a bio has been
 * cloned into, completing the original io if necessary.
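The io_count accounting used by dec_pending() below can be sketched in userspace: each clone completion decrements the count, any clone error is recorded, and only the final decrement completes the original bio (hypothetical names; the kernel version also handles push-back and noflush suspend):

```c
#include <assert.h>

// Sketch of dec_pending() accounting: io_count tracks outstanding
// clones of one original bio; only the final decrement completes it,
// carrying any error recorded along the way.
struct io_acct { int io_count; int error; };

// Returns 1 when this decrement completed the original io.
int clone_done(struct io_acct *io, int error)
{
	if (error)
		io->error = error;
	return --io->io_count == 0;
}

// Simulates n clone completions where clone err_at (1-based, 0 for
// none) fails with -5; returns the error the original completes with.
int simulate(int n, int err_at)
{
	struct io_acct io = { .io_count = n, .error = 0 };

	for (int i = 1; i <= n; i++)
		if (clone_done(&io, i == err_at ? -5 : 0))
			return io.error;
	return 1; // unreachable if the accounting is correct
}
```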
*/ static void dec_pending(struct dm_io *io, int error) { unsigned long flags; int io_error; struct bio *bio; struct mapped_device *md = io->md; /* Push-back supersedes any I/O errors */ if (unlikely(error)) { spin_lock_irqsave(&io->endio_lock, flags); if (!(io->error > 0 && __noflush_suspending(md))) io->error = error; spin_unlock_irqrestore(&io->endio_lock, flags); } if (atomic_dec_and_test(&io->io_count)) { if (io->error == DM_ENDIO_REQUEUE) { /* * Target requested pushing back the I/O. */ spin_lock_irqsave(&md->deferred_lock, flags); if (__noflush_suspending(md)) bio_list_add_head(&md->deferred, io->bio); else /* noflush suspend was interrupted. */ io->error = -EIO; spin_unlock_irqrestore(&md->deferred_lock, flags); } io_error = io->error; bio = io->bio; end_io_acct(io); free_io(md, io); if (io_error == DM_ENDIO_REQUEUE) return; if ((bio->bi_rw & REQ_FLUSH) && bio->bi_iter.bi_size) { /* * Preflush done for flush with data, reissue * without REQ_FLUSH. */ bio->bi_rw &= ~REQ_FLUSH; queue_io(md, bio); } else { /* done with normal IO or empty flush */ trace_block_bio_complete(md->queue, bio, io_error); bio->bi_error = io_error; bio_endio(bio); } } } static void disable_write_same(struct mapped_device *md) { struct queue_limits *limits = dm_get_queue_limits(md); /* device doesn't really support WRITE SAME, disable it */ limits->max_write_same_sectors = 0; } static void clone_endio(struct bio *bio) { int error = bio->bi_error; int r = error; struct dm_target_io *tio = container_of(bio, struct dm_target_io, clone); struct dm_io *io = tio->io; struct mapped_device *md = tio->io->md; dm_endio_fn endio = tio->ti->type->end_io; if (endio) { r = endio(tio->ti, bio, error); if (r < 0 || r == DM_ENDIO_REQUEUE) /* * error and requeue request are handled * in dec_pending(). 
 */
			error = r;
		else if (r == DM_ENDIO_INCOMPLETE)
			/* The target will handle the io */
			return;
		else if (r) {
			DMWARN("unimplemented target endio return value: %d", r);
			BUG();
		}
	}

	if (unlikely(r == -EREMOTEIO && (bio->bi_rw & REQ_WRITE_SAME) &&
		     !bdev_get_queue(bio->bi_bdev)->limits.max_write_same_sectors))
		disable_write_same(md);

	free_tio(tio);
	dec_pending(io, error);
}

/*
 * Partial completion handling for request-based dm
 */
static void end_clone_bio(struct bio *clone)
{
	struct dm_rq_clone_bio_info *info =
		container_of(clone, struct dm_rq_clone_bio_info, clone);
	struct dm_rq_target_io *tio = info->tio;
	struct bio *bio = info->orig;
	unsigned int nr_bytes = info->orig->bi_iter.bi_size;
	int error = clone->bi_error;

	bio_put(clone);

	if (tio->error)
		/*
		 * An error has already been detected on the request.
		 * Once an error occurs, just let clone->end_io() handle
		 * the remainder.
		 */
		return;
	else if (error) {
		/*
		 * Don't report the error to the upper layer yet.
		 * The error handling decision is made by the target driver,
		 * when the request is completed.
		 */
		tio->error = error;
		return;
	}

	/*
	 * I/O for the bio successfully completed.
	 * Report the data completion to the upper layer.
	 */

	/*
	 * bios are processed from the head of the list.
	 * So the completing bio should always be rq->bio.
	 * If it's not, something wrong is happening.
	 */
	if (tio->orig->bio != bio)
		DMERR("bio completion is going in the middle of the request");

	/*
	 * Update the original request.
	 * Do not use blk_end_request() here, because it may complete
	 * the original request before the clone, and break the ordering.
	 */
	blk_update_request(tio->orig, 0, nr_bytes);
}

static struct dm_rq_target_io *tio_from_request(struct request *rq)
{
	return (rq->q->mq_ops ?
blk_mq_rq_to_pdu(rq) : rq->special); } static void rq_end_stats(struct mapped_device *md, struct request *orig) { if (unlikely(dm_stats_used(&md->stats))) { struct dm_rq_target_io *tio = tio_from_request(orig); tio->duration_jiffies = jiffies - tio->duration_jiffies; dm_stats_account_io(&md->stats, orig->cmd_flags, blk_rq_pos(orig), tio->n_sectors, true, tio->duration_jiffies, &tio->stats_aux); } } /* * Don't touch any member of the md after calling this function because * the md may be freed in dm_put() at the end of this function. * Or do dm_get() before calling this function and dm_put() later. */ static void rq_completed(struct mapped_device *md, int rw, bool run_queue) { atomic_dec(&md->pending[rw]); /* nudge anyone waiting on suspend queue */ if (!md_in_flight(md)) wake_up(&md->wait); /* * Run this off this callpath, as drivers could invoke end_io while * inside their request_fn (and holding the queue lock). Calling * back into ->request_fn() could deadlock attempting to grab the * queue lock again. */ if (!md->queue->mq_ops && run_queue) blk_run_queue_async(md->queue); /* * dm_put() must be at the end of this function. See the comment above */ dm_put(md); } static void free_rq_clone(struct request *clone) { struct dm_rq_target_io *tio = clone->end_io_data; struct mapped_device *md = tio->md; blk_rq_unprep_clone(clone); if (md->type == DM_TYPE_MQ_REQUEST_BASED) /* stacked on blk-mq queue(s) */ tio->ti->type->release_clone_rq(clone); else if (!md->queue->mq_ops) /* request_fn queue stacked on request_fn queue(s) */ free_old_clone_request(md, clone); if (!md->queue->mq_ops) free_old_rq_tio(tio); } /* * Complete the clone and the original request. * Must be called without clone's queue lock held, * see end_clone_request() for more details. 
*/ static void dm_end_request(struct request *clone, int error) { int rw = rq_data_dir(clone); struct dm_rq_target_io *tio = clone->end_io_data; struct mapped_device *md = tio->md; struct request *rq = tio->orig; if (rq->cmd_type == REQ_TYPE_BLOCK_PC) { rq->errors = clone->errors; rq->resid_len = clone->resid_len; if (rq->sense) /* * We are using the sense buffer of the original * request. * So setting the length of the sense data is enough. */ rq->sense_len = clone->sense_len; } free_rq_clone(clone); rq_end_stats(md, rq); if (!rq->q->mq_ops) blk_end_request_all(rq, error); else blk_mq_end_request(rq, error); rq_completed(md, rw, true); } static void dm_unprep_request(struct request *rq) { struct dm_rq_target_io *tio = tio_from_request(rq); struct request *clone = tio->clone; if (!rq->q->mq_ops) { rq->special = NULL; rq->cmd_flags &= ~REQ_DONTPREP; } if (clone) free_rq_clone(clone); else if (!tio->md->queue->mq_ops) free_old_rq_tio(tio); } /* * Requeue the original request of a clone. */ static void dm_old_requeue_request(struct request *rq) { struct request_queue *q = rq->q; unsigned long flags; spin_lock_irqsave(q->queue_lock, flags); blk_requeue_request(q, rq); blk_run_queue_async(q); spin_unlock_irqrestore(q->queue_lock, flags); } static void dm_mq_requeue_request(struct request *rq) { struct request_queue *q = rq->q; unsigned long flags; blk_mq_requeue_request(rq); spin_lock_irqsave(q->queue_lock, flags); if (!blk_queue_stopped(q)) blk_mq_kick_requeue_list(q); spin_unlock_irqrestore(q->queue_lock, flags); } static void dm_requeue_original_request(struct mapped_device *md, struct request *rq) { int rw = rq_data_dir(rq); rq_end_stats(md, rq); dm_unprep_request(rq); if (!rq->q->mq_ops) dm_old_requeue_request(rq); else dm_mq_requeue_request(rq); rq_completed(md, rw, false); } static void dm_old_stop_queue(struct request_queue *q) { unsigned long flags; spin_lock_irqsave(q->queue_lock, flags); if (blk_queue_stopped(q)) { spin_unlock_irqrestore(q->queue_lock, 
flags); return; } blk_stop_queue(q); spin_unlock_irqrestore(q->queue_lock, flags); } static void dm_stop_queue(struct request_queue *q) { if (!q->mq_ops) dm_old_stop_queue(q); else blk_mq_stop_hw_queues(q); } static void dm_old_start_queue(struct request_queue *q) { unsigned long flags; spin_lock_irqsave(q->queue_lock, flags); if (blk_queue_stopped(q)) blk_start_queue(q); spin_unlock_irqrestore(q->queue_lock, flags); } static void dm_start_queue(struct request_queue *q) { if (!q->mq_ops) dm_old_start_queue(q); else { blk_mq_start_stopped_hw_queues(q, true); blk_mq_kick_requeue_list(q); } } static void dm_done(struct request *clone, int error, bool mapped) { int r = error; struct dm_rq_target_io *tio = clone->end_io_data; dm_request_endio_fn rq_end_io = NULL; if (tio->ti) { rq_end_io = tio->ti->type->rq_end_io; if (mapped && rq_end_io) r = rq_end_io(tio->ti, clone, error, &tio->info); } if (unlikely(r == -EREMOTEIO && (clone->cmd_flags & REQ_WRITE_SAME) && !clone->q->limits.max_write_same_sectors)) disable_write_same(tio->md); if (r <= 0) /* The target wants to complete the I/O */ dm_end_request(clone, r); else if (r == DM_ENDIO_INCOMPLETE) /* The target will handle the I/O */ return; else if (r == DM_ENDIO_REQUEUE) /* The target wants to requeue the I/O */ dm_requeue_original_request(tio->md, tio->orig); else { DMWARN("unimplemented target endio return value: %d", r); BUG(); } } /* * Request completion handler for request-based dm */ static void dm_softirq_done(struct request *rq) { bool mapped = true; struct dm_rq_target_io *tio = tio_from_request(rq); struct request *clone = tio->clone; int rw; if (!clone) { rq_end_stats(tio->md, rq); rw = rq_data_dir(rq); if (!rq->q->mq_ops) { blk_end_request_all(rq, tio->error); rq_completed(tio->md, rw, false); free_old_rq_tio(tio); } else { blk_mq_end_request(rq, tio->error); rq_completed(tio->md, rw, false); } return; } if (rq->cmd_flags & REQ_FAILED) mapped = false; dm_done(clone, tio->error, mapped); } /* * Complete the 
clone and the original request with the error status
 * through softirq context.
 */
static void dm_complete_request(struct request *rq, int error)
{
	struct dm_rq_target_io *tio = tio_from_request(rq);

	tio->error = error;
	if (!rq->q->mq_ops)
		blk_complete_request(rq);
	else
		blk_mq_complete_request(rq, error);
}

/*
 * Complete the not-mapped clone and the original request with the error status
 * through softirq context.
 * Target's rq_end_io() function isn't called.
 * This may be used when the target's map_rq() or clone_and_map_rq() functions fail.
 */
static void dm_kill_unmapped_request(struct request *rq, int error)
{
	rq->cmd_flags |= REQ_FAILED;
	dm_complete_request(rq, error);
}

/*
 * Called with the clone's queue lock held (in the case of .request_fn)
 */
static void end_clone_request(struct request *clone, int error)
{
	struct dm_rq_target_io *tio = clone->end_io_data;

	if (!clone->q->mq_ops) {
		/*
		 * Just clean up the queue information for the queue the
		 * clone was dispatched to.
		 * The clone is *NOT* actually freed here because it was
		 * allocated from dm's own mempool (REQ_ALLOCED isn't set).
		 */
		__blk_put_request(clone->q, clone);
	}

	/*
	 * Actual request completion is done in a softirq context which doesn't
	 * hold the clone's queue lock.  Otherwise, deadlock could occur because:
	 *     - another request may be submitted by the upper level driver
	 *       of the stacking during the completion
	 *     - the submission which requires queue lock may be done
	 *       against this clone's queue
	 */
	dm_complete_request(tio->orig, error);
}

/*
 * Return maximum size of I/O possible at the supplied sector up to the current
 * target boundary.
 */
static sector_t max_io_len_target_boundary(sector_t sector, struct dm_target *ti)
{
	sector_t target_offset = dm_target_offset(ti, sector);

	return ti->len - target_offset;
}

static sector_t max_io_len(sector_t sector, struct dm_target *ti)
{
	sector_t len = max_io_len_target_boundary(sector, ti);
	sector_t offset, max_len;

	/*
	 * Does the target need to split even further?
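The boundary arithmetic in max_io_len() can be sketched as the distance from an offset to the next max_io_len boundary; for power-of-two limits the remainder reduces to a mask, which is the fast path the code takes in place of sector_div():

```c
#include <assert.h>
#include <stdint.h>

// Sketch of the splitting arithmetic in max_io_len(): how many sectors
// remain from `offset` to the next max_io_len boundary. For a
// power-of-two limit the remainder is a mask; otherwise a full modulo
// (sector_div() in the kernel) is needed.
uint64_t len_to_boundary(uint64_t offset, uint32_t max_io_len)
{
	uint64_t rem;

	if (max_io_len & (max_io_len - 1))	// not a power of two
		rem = offset % max_io_len;
	else
		rem = offset & (max_io_len - 1);
	return max_io_len - rem;
}
```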
 */
	if (ti->max_io_len) {
		offset = dm_target_offset(ti, sector);
		if (unlikely(ti->max_io_len & (ti->max_io_len - 1)))
			max_len = sector_div(offset, ti->max_io_len);
		else
			max_len = offset & (ti->max_io_len - 1);
		max_len = ti->max_io_len - max_len;

		if (len > max_len)
			len = max_len;
	}

	return len;
}

int dm_set_target_max_io_len(struct dm_target *ti, sector_t len)
{
	if (len > UINT_MAX) {
		DMERR("Specified maximum size of target IO (%llu) exceeds limit (%u)",
		      (unsigned long long)len, UINT_MAX);
		ti->error = "Maximum size of target IO is too large";
		return -EINVAL;
	}

	ti->max_io_len = (uint32_t) len;

	return 0;
}
EXPORT_SYMBOL_GPL(dm_set_target_max_io_len);

/*
 * A target may call dm_accept_partial_bio only from the map routine.  It is
 * allowed for all bio types except REQ_FLUSH.
 *
 * dm_accept_partial_bio informs the dm that the target only wants to process
 * additional n_sectors sectors of the bio and the rest of the data should be
 * sent in a subsequent bio.
 *
 * A diagram that explains the arithmetic:
 * +--------------------+---------------+-------+
 * |         1          |       2       |   3   |
 * +--------------------+---------------+-------+
 *
 * <-------------- *tio->len_ptr --------------->
 *                      <------- bi_size ------->
 *                      <-- n_sectors -->
 *
 * Region 1 was already iterated over with bio_advance or similar function.
 *	(it may be empty if the target doesn't use bio_advance)
 * Region 2 is the remaining bio size that the target wants to process.
 *	(it may be empty if region 1 is non-empty, although there is no reason
 *	 to make it empty)
 * The target requires that region 3 is to be sent in the next bio.
 *
 * If the target wants to receive multiple copies of the bio (via num_*bios, etc),
 * the partially processed part (the sum of regions 1+2) must be the same for all
 * copies of the bio.
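The arithmetic from the diagram above can be sketched in userspace: `len` covers regions 1+2+3, `bi_size` covers 2+3, and accepting only `n_sectors` (region 2) trims region 3 from both (hypothetical value-returning helper; the kernel mutates *tio->len_ptr and bio->bi_iter.bi_size in place):

```c
#include <assert.h>

// Sketch of the dm_accept_partial_bio() arithmetic: trim region 3
// (bi_size - n_sectors) from both the total length and the bio size.
struct partial { unsigned len; unsigned bi_size; };

struct partial accept_partial(unsigned len, unsigned bi_size, unsigned n_sectors)
{
	// preconditions enforced by BUG_ON() in the kernel
	assert(bi_size <= len && n_sectors <= bi_size);
	return (struct partial){ len - (bi_size - n_sectors), n_sectors };
}
```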
*/ void dm_accept_partial_bio(struct bio *bio, unsigned n_sectors) { struct dm_target_io *tio = container_of(bio, struct dm_target_io, clone); unsigned bi_size = bio->bi_iter.bi_size >> SECTOR_SHIFT; BUG_ON(bio->bi_rw & REQ_FLUSH); BUG_ON(bi_size > *tio->len_ptr); BUG_ON(n_sectors > bi_size); *tio->len_ptr -= bi_size - n_sectors; bio->bi_iter.bi_size = n_sectors << SECTOR_SHIFT; } EXPORT_SYMBOL_GPL(dm_accept_partial_bio); static void __map_bio(struct dm_target_io *tio) { int r; sector_t sector; struct bio *clone = &tio->clone; struct dm_target *ti = tio->ti; clone->bi_end_io = clone_endio; /* * Map the clone. If r == 0 we don't need to do * anything, the target has assumed ownership of * this io. */ atomic_inc(&tio->io->io_count); sector = clone->bi_iter.bi_sector; r = ti->type->map(ti, clone); if (r == DM_MAPIO_REMAPPED) { /* the bio has been remapped so dispatch it */ trace_block_bio_remap(bdev_get_queue(clone->bi_bdev), clone, tio->io->bio->bi_bdev->bd_dev, sector); generic_make_request(clone); } else if (r < 0 || r == DM_MAPIO_REQUEUE) { /* error the io and bail out, or requeue it if needed */ dec_pending(tio->io, r); free_tio(tio); } else if (r != DM_MAPIO_SUBMITTED) { DMWARN("unimplemented target map return value: %d", r); BUG(); } } struct clone_info { struct mapped_device *md; struct dm_table *map; struct bio *bio; struct dm_io *io; sector_t sector; unsigned sector_count; }; static void bio_setup_sector(struct bio *bio, sector_t sector, unsigned len) { bio->bi_iter.bi_sector = sector; bio->bi_iter.bi_size = to_bytes(len); } /* * Creates a bio that consists of range of complete bvecs. 
*/ static int clone_bio(struct dm_target_io *tio, struct bio *bio, sector_t sector, unsigned len) { struct bio *clone = &tio->clone; __bio_clone_fast(clone, bio); if (bio_integrity(bio)) { int r = bio_integrity_clone(clone, bio, GFP_NOIO); if (r < 0) return r; } bio_advance(clone, to_bytes(sector - clone->bi_iter.bi_sector)); clone->bi_iter.bi_size = to_bytes(len); if (bio_integrity(bio)) bio_integrity_trim(clone, 0, len); return 0; } static struct dm_target_io *alloc_tio(struct clone_info *ci, struct dm_target *ti, unsigned target_bio_nr) { struct dm_target_io *tio; struct bio *clone; clone = bio_alloc_bioset(GFP_NOIO, 0, ci->md->bs); tio = container_of(clone, struct dm_target_io, clone); tio->io = ci->io; tio->ti = ti; tio->target_bio_nr = target_bio_nr; return tio; } static void __clone_and_map_simple_bio(struct clone_info *ci, struct dm_target *ti, unsigned target_bio_nr, unsigned *len) { struct dm_target_io *tio = alloc_tio(ci, ti, target_bio_nr); struct bio *clone = &tio->clone; tio->len_ptr = len; __bio_clone_fast(clone, ci->bio); if (len) bio_setup_sector(clone, ci->sector, *len); __map_bio(tio); } static void __send_duplicate_bios(struct clone_info *ci, struct dm_target *ti, unsigned num_bios, unsigned *len) { unsigned target_bio_nr; for (target_bio_nr = 0; target_bio_nr < num_bios; target_bio_nr++) __clone_and_map_simple_bio(ci, ti, target_bio_nr, len); } static int __send_empty_flush(struct clone_info *ci) { unsigned target_nr = 0; struct dm_target *ti; BUG_ON(bio_has_data(ci->bio)); while ((ti = dm_table_get_target(ci->map, target_nr++))) __send_duplicate_bios(ci, ti, ti->num_flush_bios, NULL); return 0; } static int __clone_and_map_data_bio(struct clone_info *ci, struct dm_target *ti, sector_t sector, unsigned *len) { struct bio *bio = ci->bio; struct dm_target_io *tio; unsigned target_bio_nr; unsigned num_target_bios = 1; int r = 0; /* * Does the target want to receive duplicate copies of the bio? 
*/ if (bio_data_dir(bio) == WRITE && ti->num_write_bios) num_target_bios = ti->num_write_bios(ti, bio); for (target_bio_nr = 0; target_bio_nr < num_target_bios; target_bio_nr++) { tio = alloc_tio(ci, ti, target_bio_nr); tio->len_ptr = len; r = clone_bio(tio, bio, sector, *len); if (r < 0) { free_tio(tio); break; } __map_bio(tio); } return r; } typedef unsigned (*get_num_bios_fn)(struct dm_target *ti); static unsigned get_num_discard_bios(struct dm_target *ti) { return ti->num_discard_bios; } static unsigned get_num_write_same_bios(struct dm_target *ti) { return ti->num_write_same_bios; } typedef bool (*is_split_required_fn)(struct dm_target *ti); static bool is_split_required_for_discard(struct dm_target *ti) { return ti->split_discard_bios; } static int __send_changing_extent_only(struct clone_info *ci, get_num_bios_fn get_num_bios, is_split_required_fn is_split_required) { struct dm_target *ti; unsigned len; unsigned num_bios; do { ti = dm_table_find_target(ci->map, ci->sector); if (!dm_target_is_valid(ti)) return -EIO; /* * Even though the device advertised support for this type of * request, that does not mean every target supports it, and * reconfiguration might also have changed that since the * check was performed. */ num_bios = get_num_bios ? get_num_bios(ti) : 0; if (!num_bios) return -EOPNOTSUPP; if (is_split_required && !is_split_required(ti)) len = min((sector_t)ci->sector_count, max_io_len_target_boundary(ci->sector, ti)); else len = min((sector_t)ci->sector_count, max_io_len(ci->sector, ti)); __send_duplicate_bios(ci, ti, num_bios, &len); ci->sector += len; } while (ci->sector_count -= len); return 0; } static int __send_discard(struct clone_info *ci) { return __send_changing_extent_only(ci, get_num_discard_bios, is_split_required_for_discard); } static int __send_write_same(struct clone_info *ci) { return __send_changing_extent_only(ci, get_num_write_same_bios, NULL); } /* * Select the correct strategy for processing a non-flush bio. 
*/ static int __split_and_process_non_flush(struct clone_info *ci) { struct bio *bio = ci->bio; struct dm_target *ti; unsigned len; int r; if (unlikely(bio->bi_rw & REQ_DISCARD)) return __send_discard(ci); else if (unlikely(bio->bi_rw & REQ_WRITE_SAME)) return __send_write_same(ci); ti = dm_table_find_target(ci->map, ci->sector); if (!dm_target_is_valid(ti)) return -EIO; len = min_t(sector_t, max_io_len(ci->sector, ti), ci->sector_count); r = __clone_and_map_data_bio(ci, ti, ci->sector, &len); if (r < 0) return r; ci->sector += len; ci->sector_count -= len; return 0; } /* * Entry point to split a bio into clones and submit them to the targets. */ static void __split_and_process_bio(struct mapped_device *md, struct dm_table *map, struct bio *bio) { struct clone_info ci; int error = 0; if (unlikely(!map)) { bio_io_error(bio); return; } ci.map = map; ci.md = md; ci.io = alloc_io(md); ci.io->error = 0; atomic_set(&ci.io->io_count, 1); ci.io->bio = bio; ci.io->md = md; spin_lock_init(&ci.io->endio_lock); ci.sector = bio->bi_iter.bi_sector; start_io_acct(ci.io); if (bio->bi_rw & REQ_FLUSH) { ci.bio = &ci.md->flush_bio; ci.sector_count = 0; error = __send_empty_flush(&ci); /* dec_pending submits any data associated with flush */ } else { ci.bio = bio; ci.sector_count = bio_sectors(bio); while (ci.sector_count && !error) error = __split_and_process_non_flush(&ci); } /* drop the extra reference count */ dec_pending(ci.io, error); } /*----------------------------------------------------------------- * CRUD END *---------------------------------------------------------------*/ /* * The request function that just remaps the bio built up by * dm_merge_bvec. 
*/ static blk_qc_t dm_make_request(struct request_queue *q, struct bio *bio) { int rw = bio_data_dir(bio); struct mapped_device *md = q->queuedata; int srcu_idx; struct dm_table *map; map = dm_get_live_table(md, &srcu_idx); generic_start_io_acct(rw, bio_sectors(bio), &dm_disk(md)->part0); /* if we're suspended, we have to queue this io for later */ if (unlikely(test_bit(DMF_BLOCK_IO_FOR_SUSPEND, &md->flags))) { dm_put_live_table(md, srcu_idx); if (bio_rw(bio) != READA) queue_io(md, bio); else bio_io_error(bio); return BLK_QC_T_NONE; } __split_and_process_bio(md, map, bio); dm_put_live_table(md, srcu_idx); return BLK_QC_T_NONE; } int dm_request_based(struct mapped_device *md) { return blk_queue_stackable(md->queue); } static void dm_dispatch_clone_request(struct request *clone, struct request *rq) { int r; if (blk_queue_io_stat(clone->q)) clone->cmd_flags |= REQ_IO_STAT; clone->start_time = jiffies; r = blk_insert_cloned_request(clone->q, clone); if (r) /* must complete clone in terms of original request */ dm_complete_request(rq, r); } static int dm_rq_bio_constructor(struct bio *bio, struct bio *bio_orig, void *data) { struct dm_rq_target_io *tio = data; struct dm_rq_clone_bio_info *info = container_of(bio, struct dm_rq_clone_bio_info, clone); info->orig = bio_orig; info->tio = tio; bio->bi_end_io = end_clone_bio; return 0; } static int setup_clone(struct request *clone, struct request *rq, struct dm_rq_target_io *tio, gfp_t gfp_mask) { int r; r = blk_rq_prep_clone(clone, rq, tio->md->bs, gfp_mask, dm_rq_bio_constructor, tio); if (r) return r; clone->cmd = rq->cmd; clone->cmd_len = rq->cmd_len; clone->sense = rq->sense; clone->end_io = end_clone_request; clone->end_io_data = tio; tio->clone = clone; return 0; } static struct request *clone_old_rq(struct request *rq, struct mapped_device *md, struct dm_rq_target_io *tio, gfp_t gfp_mask) { /* * Create clone for use with .request_fn request_queue */ struct request *clone; clone = alloc_old_clone_request(md, 
gfp_mask); if (!clone) return NULL; blk_rq_init(NULL, clone); if (setup_clone(clone, rq, tio, gfp_mask)) { /* -ENOMEM */ free_old_clone_request(md, clone); return NULL; } return clone; } static void map_tio_request(struct kthread_work *work); static void init_tio(struct dm_rq_target_io *tio, struct request *rq, struct mapped_device *md) { tio->md = md; tio->ti = NULL; tio->clone = NULL; tio->orig = rq; tio->error = 0; /* * Avoid initializing info for blk-mq; it passes * target-specific data through info.ptr * (see: dm_mq_init_request) */ if (!md->init_tio_pdu) memset(&tio->info, 0, sizeof(tio->info)); if (md->kworker_task) init_kthread_work(&tio->work, map_tio_request); } static struct dm_rq_target_io *dm_old_prep_tio(struct request *rq, struct mapped_device *md, gfp_t gfp_mask) { struct dm_rq_target_io *tio; int srcu_idx; struct dm_table *table; tio = alloc_old_rq_tio(md, gfp_mask); if (!tio) return NULL; init_tio(tio, rq, md); table = dm_get_live_table(md, &srcu_idx); /* * Must clone a request if this .request_fn DM device * is stacked on .request_fn device(s). */ if (!dm_table_mq_request_based(table)) { if (!clone_old_rq(rq, md, tio, gfp_mask)) { dm_put_live_table(md, srcu_idx); free_old_rq_tio(tio); return NULL; } } dm_put_live_table(md, srcu_idx); return tio; } /* * Called with the queue lock held. 
*/ static int dm_old_prep_fn(struct request_queue *q, struct request *rq) { struct mapped_device *md = q->queuedata; struct dm_rq_target_io *tio; if (unlikely(rq->special)) { DMWARN("Already has something in rq->special."); return BLKPREP_KILL; } tio = dm_old_prep_tio(rq, md, GFP_ATOMIC); if (!tio) return BLKPREP_DEFER; rq->special = tio; rq->cmd_flags |= REQ_DONTPREP; return BLKPREP_OK; } /* * Returns: * 0 : the request has been processed * DM_MAPIO_REQUEUE : the original request needs to be requeued * < 0 : the request was completed due to failure */ static int map_request(struct dm_rq_target_io *tio, struct request *rq, struct mapped_device *md) { int r; struct dm_target *ti = tio->ti; struct request *clone = NULL; if (tio->clone) { clone = tio->clone; r = ti->type->map_rq(ti, clone, &tio->info); } else { r = ti->type->clone_and_map_rq(ti, rq, &tio->info, &clone); if (r < 0) { /* The target wants to complete the I/O */ dm_kill_unmapped_request(rq, r); return r; } if (r != DM_MAPIO_REMAPPED) return r; if (setup_clone(clone, rq, tio, GFP_ATOMIC)) { /* -ENOMEM */ ti->type->release_clone_rq(clone); return DM_MAPIO_REQUEUE; } } switch (r) { case DM_MAPIO_SUBMITTED: /* The target has taken the I/O to submit by itself later */ break; case DM_MAPIO_REMAPPED: /* The target has remapped the I/O so dispatch it */ trace_block_rq_remap(clone->q, clone, disk_devt(dm_disk(md)), blk_rq_pos(rq)); dm_dispatch_clone_request(clone, rq); break; case DM_MAPIO_REQUEUE: /* The target wants to requeue the I/O */ dm_requeue_original_request(md, tio->orig); break; default: if (r > 0) { DMWARN("unimplemented target map return value: %d", r); BUG(); } /* The target wants to complete the I/O */ dm_kill_unmapped_request(rq, r); return r; } return 0; } static void map_tio_request(struct kthread_work *work) { struct dm_rq_target_io *tio = container_of(work, struct dm_rq_target_io, work); struct request *rq = tio->orig; struct mapped_device *md = tio->md; if (map_request(tio, rq, md) == 
DM_MAPIO_REQUEUE) dm_requeue_original_request(md, rq); } static void dm_start_request(struct mapped_device *md, struct request *orig) { if (!orig->q->mq_ops) blk_start_request(orig); else blk_mq_start_request(orig); atomic_inc(&md->pending[rq_data_dir(orig)]); if (md->seq_rq_merge_deadline_usecs) { md->last_rq_pos = rq_end_sector(orig); md->last_rq_rw = rq_data_dir(orig); md->last_rq_start_time = ktime_get(); } if (unlikely(dm_stats_used(&md->stats))) { struct dm_rq_target_io *tio = tio_from_request(orig); tio->duration_jiffies = jiffies; tio->n_sectors = blk_rq_sectors(orig); dm_stats_account_io(&md->stats, orig->cmd_flags, blk_rq_pos(orig), tio->n_sectors, false, 0, &tio->stats_aux); } /* * Hold the md reference here for the in-flight I/O. * We can't rely on the reference count by device opener, * because the device may be closed during the request completion * when all bios are completed. * See the comment in rq_completed() too. */ dm_get(md); } #define MAX_SEQ_RQ_MERGE_DEADLINE_USECS 100000 ssize_t dm_attr_rq_based_seq_io_merge_deadline_show(struct mapped_device *md, char *buf) { return sprintf(buf, "%u\n", md->seq_rq_merge_deadline_usecs); } ssize_t dm_attr_rq_based_seq_io_merge_deadline_store(struct mapped_device *md, const char *buf, size_t count) { unsigned deadline; if (!dm_request_based(md) || md->use_blk_mq) return count; if (kstrtouint(buf, 10, &deadline)) return -EINVAL; if (deadline > MAX_SEQ_RQ_MERGE_DEADLINE_USECS) deadline = MAX_SEQ_RQ_MERGE_DEADLINE_USECS; md->seq_rq_merge_deadline_usecs = deadline; return count; } static bool dm_request_peeked_before_merge_deadline(struct mapped_device *md) { ktime_t kt_deadline; if (!md->seq_rq_merge_deadline_usecs) return false; kt_deadline = ns_to_ktime((u64)md->seq_rq_merge_deadline_usecs * NSEC_PER_USEC); kt_deadline = ktime_add_safe(md->last_rq_start_time, kt_deadline); return !ktime_after(ktime_get(), kt_deadline); } /* * q->request_fn for request-based dm. * Called with the queue lock held. 
*/ static void dm_request_fn(struct request_queue *q) { struct mapped_device *md = q->queuedata; struct dm_target *ti = md->immutable_target; struct request *rq; struct dm_rq_target_io *tio; sector_t pos = 0; if (unlikely(!ti)) { int srcu_idx; struct dm_table *map = dm_get_live_table(md, &srcu_idx); ti = dm_table_find_target(map, pos); dm_put_live_table(md, srcu_idx); } /* * For suspend, check blk_queue_stopped() and increment * ->pending within a single queue_lock not to increment the * number of in-flight I/Os after the queue is stopped in * dm_suspend(). */ while (!blk_queue_stopped(q)) { rq = blk_peek_request(q); if (!rq) return; /* always use block 0 to find the target for flushes for now */ pos = 0; if (!(rq->cmd_flags & REQ_FLUSH)) pos = blk_rq_pos(rq); if ((dm_request_peeked_before_merge_deadline(md) && md_in_flight(md) && rq->bio && rq->bio->bi_vcnt == 1 && md->last_rq_pos == pos && md->last_rq_rw == rq_data_dir(rq)) || (ti->type->busy && ti->type->busy(ti))) { blk_delay_queue(q, HZ / 100); return; } dm_start_request(md, rq); tio = tio_from_request(rq); /* Establish tio->ti before queuing work (map_tio_request) */ tio->ti = ti; queue_kthread_work(&md->kworker, &tio->work); BUG_ON(!irqs_disabled()); } } static int dm_any_congested(void *congested_data, int bdi_bits) { int r = bdi_bits; struct mapped_device *md = congested_data; struct dm_table *map; if (!test_bit(DMF_BLOCK_IO_FOR_SUSPEND, &md->flags)) { if (dm_request_based(md)) { /* * With request-based DM we only need to check the * top-level queue for congestion. */ r = md->queue->backing_dev_info.wb.state & bdi_bits; } else { map = dm_get_live_table_fast(md); if (map) r = dm_table_any_congested(map, bdi_bits); dm_put_live_table_fast(md); } } return r; } /*----------------------------------------------------------------- * An IDR is used to keep track of allocated minor numbers. 
*---------------------------------------------------------------*/ static void free_minor(int minor) { spin_lock(&_minor_lock); idr_remove(&_minor_idr, minor); spin_unlock(&_minor_lock); } /* * See if the device with a specific minor # is free. */ static int specific_minor(int minor) { int r; if (minor >= (1 << MINORBITS)) return -EINVAL; idr_preload(GFP_KERNEL); spin_lock(&_minor_lock); r = idr_alloc(&_minor_idr, MINOR_ALLOCED, minor, minor + 1, GFP_NOWAIT); spin_unlock(&_minor_lock); idr_preload_end(); if (r < 0) return r == -ENOSPC ? -EBUSY : r; return 0; } static int next_free_minor(int *minor) { int r; idr_preload(GFP_KERNEL); spin_lock(&_minor_lock); r = idr_alloc(&_minor_idr, MINOR_ALLOCED, 0, 1 << MINORBITS, GFP_NOWAIT); spin_unlock(&_minor_lock); idr_preload_end(); if (r < 0) return r; *minor = r; return 0; } static const struct block_device_operations dm_blk_dops; static void dm_wq_work(struct work_struct *work); static void dm_init_md_queue(struct mapped_device *md) { /* * Request-based dm devices cannot be stacked on top of bio-based dm * devices. The type of this dm device may not have been decided yet. * The type is decided at the first table loading time. * To prevent problematic device stacking, clear the queue flag * for request stacking support until then. * * This queue is new, so no concurrency on the queue_flags. 
*/ queue_flag_clear_unlocked(QUEUE_FLAG_STACKABLE, md->queue); /* * Initialize data that will only be used by a non-blk-mq DM queue * - must do so here (in alloc_dev callchain) before queue is used */ md->queue->queuedata = md; md->queue->backing_dev_info.congested_data = md; } static void dm_init_normal_md_queue(struct mapped_device *md) { md->use_blk_mq = false; dm_init_md_queue(md); /* * Initialize aspects of queue that aren't relevant for blk-mq */ md->queue->backing_dev_info.congested_fn = dm_any_congested; blk_queue_bounce_limit(md->queue, BLK_BOUNCE_ANY); } static void cleanup_mapped_device(struct mapped_device *md) { if (md->wq) destroy_workqueue(md->wq); if (md->kworker_task) kthread_stop(md->kworker_task); mempool_destroy(md->io_pool); mempool_destroy(md->rq_pool); if (md->bs) bioset_free(md->bs); cleanup_srcu_struct(&md->io_barrier); if (md->disk) { spin_lock(&_minor_lock); md->disk->private_data = NULL; spin_unlock(&_minor_lock); del_gendisk(md->disk); put_disk(md->disk); } if (md->queue) blk_cleanup_queue(md->queue); if (md->bdev) { bdput(md->bdev); md->bdev = NULL; } } /* * Allocate and initialise a blank device with a given minor. 
*/ static struct mapped_device *alloc_dev(int minor) { int r, numa_node_id = dm_get_numa_node(); struct mapped_device *md; void *old_md; md = kzalloc_node(sizeof(*md), GFP_KERNEL, numa_node_id); if (!md) { DMWARN("unable to allocate device, out of memory."); return NULL; } if (!try_module_get(THIS_MODULE)) goto bad_module_get; /* get a minor number for the dev */ if (minor == DM_ANY_MINOR) r = next_free_minor(&minor); else r = specific_minor(minor); if (r < 0) goto bad_minor; r = init_srcu_struct(&md->io_barrier); if (r < 0) goto bad_io_barrier; md->numa_node_id = numa_node_id; md->use_blk_mq = use_blk_mq; md->init_tio_pdu = false; md->type = DM_TYPE_NONE; mutex_init(&md->suspend_lock); mutex_init(&md->type_lock); mutex_init(&md->table_devices_lock); spin_lock_init(&md->deferred_lock); atomic_set(&md->holders, 1); atomic_set(&md->open_count, 0); atomic_set(&md->event_nr, 0); atomic_set(&md->uevent_seq, 0); INIT_LIST_HEAD(&md->uevent_list); INIT_LIST_HEAD(&md->table_devices); spin_lock_init(&md->uevent_lock); md->queue = blk_alloc_queue_node(GFP_KERNEL, numa_node_id); if (!md->queue) goto bad; dm_init_md_queue(md); md->disk = alloc_disk_node(1, numa_node_id); if (!md->disk) goto bad; atomic_set(&md->pending[0], 0); atomic_set(&md->pending[1], 0); init_waitqueue_head(&md->wait); INIT_WORK(&md->work, dm_wq_work); init_waitqueue_head(&md->eventq); init_completion(&md->kobj_holder.completion); md->kworker_task = NULL; md->disk->major = _major; md->disk->first_minor = minor; md->disk->fops = &dm_blk_dops; md->disk->queue = md->queue; md->disk->private_data = md; sprintf(md->disk->disk_name, "dm-%d", minor); add_disk(md->disk); format_dev_t(md->name, MKDEV(_major, minor)); md->wq = alloc_workqueue("kdmflush", WQ_MEM_RECLAIM, 0); if (!md->wq) goto bad; md->bdev = bdget_disk(md->disk, 0); if (!md->bdev) goto bad; bio_init(&md->flush_bio); md->flush_bio.bi_bdev = md->bdev; md->flush_bio.bi_rw = WRITE_FLUSH; dm_stats_init(&md->stats); /* Populate the mapping, nobody knows we 
exist yet */ spin_lock(&_minor_lock); old_md = idr_replace(&_minor_idr, md, minor); spin_unlock(&_minor_lock); BUG_ON(old_md != MINOR_ALLOCED); return md; bad: cleanup_mapped_device(md); bad_io_barrier: free_minor(minor); bad_minor: module_put(THIS_MODULE); bad_module_get: kfree(md); return NULL; } static void unlock_fs(struct mapped_device *md); static void free_dev(struct mapped_device *md) { int minor = MINOR(disk_devt(md->disk)); unlock_fs(md); cleanup_mapped_device(md); if (md->tag_set) { blk_mq_free_tag_set(md->tag_set); kfree(md->tag_set); } free_table_devices(&md->table_devices); dm_stats_cleanup(&md->stats); free_minor(minor); module_put(THIS_MODULE); kfree(md); } static void __bind_mempools(struct mapped_device *md, struct dm_table *t) { struct dm_md_mempools *p = dm_table_get_md_mempools(t); if (md->bs) { /* The md already has necessary mempools. */ if (dm_table_get_type(t) == DM_TYPE_BIO_BASED) { /* * Reload bioset because front_pad may have changed * because a different table was loaded. */ bioset_free(md->bs); md->bs = p->bs; p->bs = NULL; } /* * There's no need to reload with request-based dm * because the size of front_pad doesn't change. * Note for future: If you are to reload bioset, * prep-ed requests in the queue may refer * to bio from the old bioset, so you must walk * through the queue to unprep. */ goto out; } BUG_ON(!p || md->io_pool || md->rq_pool || md->bs); md->io_pool = p->io_pool; p->io_pool = NULL; md->rq_pool = p->rq_pool; p->rq_pool = NULL; md->bs = p->bs; p->bs = NULL; out: /* mempool bind completed, no longer need any mempools in the table */ dm_table_free_md_mempools(t); } /* * Bind a table to the device. 
*/ static void event_callback(void *context) { unsigned long flags; LIST_HEAD(uevents); struct mapped_device *md = (struct mapped_device *) context; spin_lock_irqsave(&md->uevent_lock, flags); list_splice_init(&md->uevent_list, &uevents); spin_unlock_irqrestore(&md->uevent_lock, flags); dm_send_uevents(&uevents, &disk_to_dev(md->disk)->kobj); atomic_inc(&md->event_nr); wake_up(&md->eventq); } /* * Protected by md->suspend_lock obtained by dm_swap_table(). */ static void __set_size(struct mapped_device *md, sector_t size) { set_capacity(md->disk, size); i_size_write(md->bdev->bd_inode, (loff_t)size << SECTOR_SHIFT); } /* * Returns old map, which caller must destroy. */ static struct dm_table *__bind(struct mapped_device *md, struct dm_table *t, struct queue_limits *limits) { struct dm_table *old_map; struct request_queue *q = md->queue; sector_t size; size = dm_table_get_size(t); /* * Wipe any geometry if the size of the table changed. */ if (size != dm_get_size(md)) memset(&md->geometry, 0, sizeof(md->geometry)); __set_size(md, size); dm_table_event_callback(t, event_callback, md); /* * The queue hasn't been stopped yet, if the old table type wasn't * for request-based during suspension. So stop it to prevent * I/O mapping before resume. * This must be done before setting the queue restrictions, * because request-based dm may be run just after the setting. 
*/ if (dm_table_request_based(t)) { dm_stop_queue(q); /* * Leverage the fact that request-based DM targets are * immutable singletons and establish md->immutable_target * - used to optimize both dm_request_fn and dm_mq_queue_rq */ md->immutable_target = dm_table_get_immutable_target(t); } __bind_mempools(md, t); old_map = rcu_dereference_protected(md->map, lockdep_is_held(&md->suspend_lock)); rcu_assign_pointer(md->map, (void *)t); md->immutable_target_type = dm_table_get_immutable_target_type(t); dm_table_set_restrictions(t, q, limits); if (old_map) dm_sync_table(md); return old_map; } /* * Returns unbound table for the caller to free. */ static struct dm_table *__unbind(struct mapped_device *md) { struct dm_table *map = rcu_dereference_protected(md->map, 1); if (!map) return NULL; dm_table_event_callback(map, NULL, NULL); RCU_INIT_POINTER(md->map, NULL); dm_sync_table(md); return map; } /* * Constructor for a new device. */ int dm_create(int minor, struct mapped_device **result) { struct mapped_device *md; md = alloc_dev(minor); if (!md) return -ENXIO; dm_sysfs_init(md); *result = md; return 0; } /* * Functions to manage md->type. * All are required to hold md->type_lock. */ void dm_lock_md_type(struct mapped_device *md) { mutex_lock(&md->type_lock); } void dm_unlock_md_type(struct mapped_device *md) { mutex_unlock(&md->type_lock); } void dm_set_md_type(struct mapped_device *md, unsigned type) { BUG_ON(!mutex_is_locked(&md->type_lock)); md->type = type; } unsigned dm_get_md_type(struct mapped_device *md) { return md->type; } struct target_type *dm_get_immutable_target_type(struct mapped_device *md) { return md->immutable_target_type; } /* * The queue_limits are only valid as long as you have a reference * count on 'md'. 
*/ struct queue_limits *dm_get_queue_limits(struct mapped_device *md) { BUG_ON(!atomic_read(&md->holders)); return &md->queue->limits; } EXPORT_SYMBOL_GPL(dm_get_queue_limits); static void dm_old_init_rq_based_worker_thread(struct mapped_device *md) { /* Initialize the request-based DM worker thread */ init_kthread_worker(&md->kworker); md->kworker_task = kthread_run(kthread_worker_fn, &md->kworker, "kdmwork-%s", dm_device_name(md)); } /* * Fully initialize a .request_fn request-based queue. */ static int dm_old_init_request_queue(struct mapped_device *md) { /* Fully initialize the queue */ if (!blk_init_allocated_queue(md->queue, dm_request_fn, NULL)) return -EINVAL; /* disable dm_request_fn's merge heuristic by default */ md->seq_rq_merge_deadline_usecs = 0; dm_init_normal_md_queue(md); blk_queue_softirq_done(md->queue, dm_softirq_done); blk_queue_prep_rq(md->queue, dm_old_prep_fn); dm_old_init_rq_based_worker_thread(md); elv_register_queue(md->queue); return 0; } static int dm_mq_init_request(void *data, struct request *rq, unsigned int hctx_idx, unsigned int request_idx, unsigned int numa_node) { struct mapped_device *md = data; struct dm_rq_target_io *tio = blk_mq_rq_to_pdu(rq); /* * Must initialize md member of tio, otherwise it won't * be available in dm_mq_queue_rq. 
*/ tio->md = md; if (md->init_tio_pdu) { /* target-specific per-io data is immediately after the tio */ tio->info.ptr = tio + 1; } return 0; } static int dm_mq_queue_rq(struct blk_mq_hw_ctx *hctx, const struct blk_mq_queue_data *bd) { struct request *rq = bd->rq; struct dm_rq_target_io *tio = blk_mq_rq_to_pdu(rq); struct mapped_device *md = tio->md; struct dm_target *ti = md->immutable_target; if (unlikely(!ti)) { int srcu_idx; struct dm_table *map = dm_get_live_table(md, &srcu_idx); ti = dm_table_find_target(map, 0); dm_put_live_table(md, srcu_idx); } if (ti->type->busy && ti->type->busy(ti)) return BLK_MQ_RQ_QUEUE_BUSY; dm_start_request(md, rq); /* Init tio using md established in .init_request */ init_tio(tio, rq, md); /* * Establish tio->ti before queuing work (map_tio_request) * or making direct call to map_request(). */ tio->ti = ti; /* Direct call is fine since .queue_rq allows allocations */ if (map_request(tio, rq, md) == DM_MAPIO_REQUEUE) { /* Undo dm_start_request() before requeuing */ rq_end_stats(md, rq); rq_completed(md, rq_data_dir(rq), false); return BLK_MQ_RQ_QUEUE_BUSY; } return BLK_MQ_RQ_QUEUE_OK; } static struct blk_mq_ops dm_mq_ops = { .queue_rq = dm_mq_queue_rq, .map_queue = blk_mq_map_queue, .complete = dm_softirq_done, .init_request = dm_mq_init_request, }; static int dm_mq_init_request_queue(struct mapped_device *md, struct dm_target *immutable_tgt) { struct request_queue *q; int err; if (dm_get_md_type(md) == DM_TYPE_REQUEST_BASED) { DMERR("request-based dm-mq may only be stacked on blk-mq device(s)"); return -EINVAL; } md->tag_set = kzalloc_node(sizeof(struct blk_mq_tag_set), GFP_KERNEL, md->numa_node_id); if (!md->tag_set) return -ENOMEM; md->tag_set->ops = &dm_mq_ops; md->tag_set->queue_depth = dm_get_blk_mq_queue_depth(); md->tag_set->numa_node = md->numa_node_id; md->tag_set->flags = BLK_MQ_F_SHOULD_MERGE | BLK_MQ_F_SG_MERGE; md->tag_set->nr_hw_queues = dm_get_blk_mq_nr_hw_queues(); md->tag_set->driver_data = md; md->tag_set->cmd_size 
= sizeof(struct dm_rq_target_io); if (immutable_tgt && immutable_tgt->per_io_data_size) { /* any target-specific per-io data is immediately after the tio */ md->tag_set->cmd_size += immutable_tgt->per_io_data_size; md->init_tio_pdu = true; } err = blk_mq_alloc_tag_set(md->tag_set); if (err) goto out_kfree_tag_set; q = blk_mq_init_allocated_queue(md->tag_set, md->queue); if (IS_ERR(q)) { err = PTR_ERR(q); goto out_tag_set; } dm_init_md_queue(md); /* backfill 'mq' sysfs registration normally done in blk_register_queue */ blk_mq_register_disk(md->disk); return 0; out_tag_set: blk_mq_free_tag_set(md->tag_set); out_kfree_tag_set: kfree(md->tag_set); return err; } static unsigned filter_md_type(unsigned type, struct mapped_device *md) { if (type == DM_TYPE_BIO_BASED) return type; return !md->use_blk_mq ? DM_TYPE_REQUEST_BASED : DM_TYPE_MQ_REQUEST_BASED; } /* * Setup the DM device's queue based on md's type */ int dm_setup_md_queue(struct mapped_device *md, struct dm_table *t) { int r; unsigned md_type = filter_md_type(dm_get_md_type(md), md); switch (md_type) { case DM_TYPE_REQUEST_BASED: r = dm_old_init_request_queue(md); if (r) { DMERR("Cannot initialize queue for request-based mapped device"); return r; } break; case DM_TYPE_MQ_REQUEST_BASED: r = dm_mq_init_request_queue(md, dm_table_get_immutable_target(t)); if (r) { DMERR("Cannot initialize queue for request-based dm-mq mapped device"); return r; } break; case DM_TYPE_BIO_BASED: dm_init_normal_md_queue(md); blk_queue_make_request(md->queue, dm_make_request); /* * DM handles splitting bios as needed. Free the bio_split bioset * since it won't be used (saves 1 process per bio-based DM device). 
*/ bioset_free(md->queue->bio_split); md->queue->bio_split = NULL; break; } return 0; } struct mapped_device *dm_get_md(dev_t dev) { struct mapped_device *md; unsigned minor = MINOR(dev); if (MAJOR(dev) != _major || minor >= (1 << MINORBITS)) return NULL; spin_lock(&_minor_lock); md = idr_find(&_minor_idr, minor); if (md) { if ((md == MINOR_ALLOCED || (MINOR(disk_devt(dm_disk(md))) != minor) || dm_deleting_md(md) || test_bit(DMF_FREEING, &md->flags))) { md = NULL; goto out; } dm_get(md); } out: spin_unlock(&_minor_lock); return md; } EXPORT_SYMBOL_GPL(dm_get_md); void *dm_get_mdptr(struct mapped_device *md) { return md->interface_ptr; } void dm_set_mdptr(struct mapped_device *md, void *ptr) { md->interface_ptr = ptr; } void dm_get(struct mapped_device *md) { atomic_inc(&md->holders); BUG_ON(test_bit(DMF_FREEING, &md->flags)); } int dm_hold(struct mapped_device *md) { spin_lock(&_minor_lock); if (test_bit(DMF_FREEING, &md->flags)) { spin_unlock(&_minor_lock); return -EBUSY; } dm_get(md); spin_unlock(&_minor_lock); return 0; } EXPORT_SYMBOL_GPL(dm_hold); const char *dm_device_name(struct mapped_device *md) { return md->name; } EXPORT_SYMBOL_GPL(dm_device_name); static void __dm_destroy(struct mapped_device *md, bool wait) { struct dm_table *map; int srcu_idx; might_sleep(); spin_lock(&_minor_lock); idr_replace(&_minor_idr, MINOR_ALLOCED, MINOR(disk_devt(dm_disk(md)))); set_bit(DMF_FREEING, &md->flags); spin_unlock(&_minor_lock); if (dm_request_based(md) && md->kworker_task) flush_kthread_worker(&md->kworker); /* * Take suspend_lock so that presuspend and postsuspend methods * do not race with internal suspend. 
*/ mutex_lock(&md->suspend_lock); map = dm_get_live_table(md, &srcu_idx); if (!dm_suspended_md(md)) { dm_table_presuspend_targets(map); dm_table_postsuspend_targets(map); } /* dm_put_live_table must be before msleep, otherwise deadlock is possible */ dm_put_live_table(md, srcu_idx); mutex_unlock(&md->suspend_lock); /* * Rare, but there may be I/O requests still going to complete, * for example. Wait for all references to disappear. * No one should increment the reference count of the mapped_device, * after the mapped_device state becomes DMF_FREEING. */ if (wait) while (atomic_read(&md->holders)) msleep(1); else if (atomic_read(&md->holders)) DMWARN("%s: Forcibly removing mapped_device still in use! (%d users)", dm_device_name(md), atomic_read(&md->holders)); dm_sysfs_exit(md); dm_table_destroy(__unbind(md)); free_dev(md); } void dm_destroy(struct mapped_device *md) { __dm_destroy(md, true); } void dm_destroy_immediate(struct mapped_device *md) { __dm_destroy(md, false); } void dm_put(struct mapped_device *md) { atomic_dec(&md->holders); } EXPORT_SYMBOL_GPL(dm_put); static int dm_wait_for_completion(struct mapped_device *md, int interruptible) { int r = 0; DECLARE_WAITQUEUE(wait, current); add_wait_queue(&md->wait, &wait); while (1) { set_current_state(interruptible); if (!md_in_flight(md)) break; if (interruptible == TASK_INTERRUPTIBLE && signal_pending(current)) { r = -EINTR; break; } io_schedule(); } set_current_state(TASK_RUNNING); remove_wait_queue(&md->wait, &wait); return r; } /* * Process the deferred bios */ static void dm_wq_work(struct work_struct *work) { struct mapped_device *md = container_of(work, struct mapped_device, work); struct bio *c; int srcu_idx; struct dm_table *map; map = dm_get_live_table(md, &srcu_idx); while (!test_bit(DMF_BLOCK_IO_FOR_SUSPEND, &md->flags)) { spin_lock_irq(&md->deferred_lock); c = bio_list_pop(&md->deferred); spin_unlock_irq(&md->deferred_lock); if (!c) break; if (dm_request_based(md)) generic_make_request(c); else 
__split_and_process_bio(md, map, c); } dm_put_live_table(md, srcu_idx); } static void dm_queue_flush(struct mapped_device *md) { clear_bit(DMF_BLOCK_IO_FOR_SUSPEND, &md->flags); smp_mb__after_atomic(); queue_work(md->wq, &md->work); } /* * Swap in a new table, returning the old one for the caller to destroy. */ struct dm_table *dm_swap_table(struct mapped_device *md, struct dm_table *table) { struct dm_table *live_map = NULL, *map = ERR_PTR(-EINVAL); struct queue_limits limits; int r; mutex_lock(&md->suspend_lock); /* device must be suspended */ if (!dm_suspended_md(md)) goto out; /* * If the new table has no data devices, retain the existing limits. * This helps multipath with queue_if_no_path if all paths disappear, * then new I/O is queued based on these limits, and then some paths * reappear. */ if (dm_table_has_no_data_devices(table)) { live_map = dm_get_live_table_fast(md); if (live_map) limits = md->queue->limits; dm_put_live_table_fast(md); } if (!live_map) { r = dm_calculate_queue_limits(table, &limits); if (r) { map = ERR_PTR(r); goto out; } } map = __bind(md, table, &limits); out: mutex_unlock(&md->suspend_lock); return map; } /* * Functions to lock and unlock any filesystem running on the * device. */ static int lock_fs(struct mapped_device *md) { int r; WARN_ON(md->frozen_sb); md->frozen_sb = freeze_bdev(md->bdev); if (IS_ERR(md->frozen_sb)) { r = PTR_ERR(md->frozen_sb); md->frozen_sb = NULL; return r; } set_bit(DMF_FROZEN, &md->flags); return 0; } static void unlock_fs(struct mapped_device *md) { if (!test_bit(DMF_FROZEN, &md->flags)) return; thaw_bdev(md->bdev, md->frozen_sb); md->frozen_sb = NULL; clear_bit(DMF_FROZEN, &md->flags); } /* * If __dm_suspend returns 0, the device is completely quiescent * now. There is no request-processing activity. All new requests * are being added to md->deferred list. 
* * Caller must hold md->suspend_lock */ static int __dm_suspend(struct mapped_device *md, struct dm_table *map, unsigned suspend_flags, int interruptible) { bool do_lockfs = suspend_flags & DM_SUSPEND_LOCKFS_FLAG; bool noflush = suspend_flags & DM_SUSPEND_NOFLUSH_FLAG; int r; /* * DMF_NOFLUSH_SUSPENDING must be set before presuspend. * This flag is cleared before dm_suspend returns. */ if (noflush) set_bit(DMF_NOFLUSH_SUSPENDING, &md->flags); /* * This gets reverted if there's an error later and the targets * provide the .presuspend_undo hook. */ dm_table_presuspend_targets(map); /* * Flush I/O to the device. * Any I/O submitted after lock_fs() may not be flushed. * noflush takes precedence over do_lockfs. * (lock_fs() flushes I/Os and waits for them to complete.) */ if (!noflush && do_lockfs) { r = lock_fs(md); if (r) { dm_table_presuspend_undo_targets(map); return r; } } /* * Here we must make sure that no processes are submitting requests * to target drivers i.e. no one may be executing * __split_and_process_bio. This is called from dm_request and * dm_wq_work. * * To get all processes out of __split_and_process_bio in dm_request, * we take the write lock. To prevent any process from reentering * __split_and_process_bio from dm_request and quiesce the thread * (dm_wq_work), we set DMF_BLOCK_IO_FOR_SUSPEND and call * flush_workqueue(md->wq). */ set_bit(DMF_BLOCK_IO_FOR_SUSPEND, &md->flags); if (map) synchronize_srcu(&md->io_barrier); /* * Stop md->queue before flushing md->wq in case request-based * dm defers requests to md->wq from md->queue. */ if (dm_request_based(md)) { dm_stop_queue(md->queue); if (md->kworker_task) flush_kthread_worker(&md->kworker); } flush_workqueue(md->wq); /* * At this point no more requests are entering target request routines. * We call dm_wait_for_completion to wait for all existing requests * to finish. 
	 */
	r = dm_wait_for_completion(md, interruptible);

	if (noflush)
		clear_bit(DMF_NOFLUSH_SUSPENDING, &md->flags);
	if (map)
		synchronize_srcu(&md->io_barrier);

	/* were we interrupted ? */
	if (r < 0) {
		dm_queue_flush(md);

		if (dm_request_based(md))
			dm_start_queue(md->queue);

		unlock_fs(md);
		dm_table_presuspend_undo_targets(map);
		/* pushback list is already flushed, so skip flush */
	}

	return r;
}

/*
 * We need to be able to change a mapping table under a mounted
 * filesystem. For example we might want to move some data in
 * the background. Before the table can be swapped with
 * dm_bind_table, dm_suspend must be called to flush any in
 * flight bios and ensure that any further io gets deferred.
 */
/*
 * Suspend mechanism in request-based dm.
 *
 * 1. Flush all I/Os by lock_fs() if needed.
 * 2. Stop dispatching any I/O by stopping the request_queue.
 * 3. Wait for all in-flight I/Os to be completed or requeued.
 *
 * To abort suspend, start the request_queue.
 */
int dm_suspend(struct mapped_device *md, unsigned suspend_flags)
{
	struct dm_table *map = NULL;
	int r = 0;

retry:
	mutex_lock_nested(&md->suspend_lock, SINGLE_DEPTH_NESTING);

	if (dm_suspended_md(md)) {
		r = -EINVAL;
		goto out_unlock;
	}

	if (dm_suspended_internally_md(md)) {
		/* already internally suspended, wait for internal resume */
		mutex_unlock(&md->suspend_lock);
		r = wait_on_bit(&md->flags, DMF_SUSPENDED_INTERNALLY,
				TASK_INTERRUPTIBLE);
		if (r)
			return r;
		goto retry;
	}

	map = rcu_dereference_protected(md->map,
					lockdep_is_held(&md->suspend_lock));

	r = __dm_suspend(md, map, suspend_flags, TASK_INTERRUPTIBLE);
	if (r)
		goto out_unlock;

	set_bit(DMF_SUSPENDED, &md->flags);

	dm_table_postsuspend_targets(map);

out_unlock:
	mutex_unlock(&md->suspend_lock);
	return r;
}

static int __dm_resume(struct mapped_device *md, struct dm_table *map)
{
	if (map) {
		int r = dm_table_resume_targets(map);
		if (r)
			return r;
	}

	dm_queue_flush(md);

	/*
	 * Flushing deferred I/Os must be done after targets are resumed
	 * so that mapping of targets can work correctly.
	 * Request-based dm is queueing the deferred I/Os in its request_queue.
	 */
	if (dm_request_based(md))
		dm_start_queue(md->queue);

	unlock_fs(md);

	return 0;
}

int dm_resume(struct mapped_device *md)
{
	int r = -EINVAL;
	struct dm_table *map = NULL;

retry:
	mutex_lock_nested(&md->suspend_lock, SINGLE_DEPTH_NESTING);

	if (!dm_suspended_md(md))
		goto out;

	if (dm_suspended_internally_md(md)) {
		/* already internally suspended, wait for internal resume */
		mutex_unlock(&md->suspend_lock);
		r = wait_on_bit(&md->flags, DMF_SUSPENDED_INTERNALLY,
				TASK_INTERRUPTIBLE);
		if (r)
			return r;
		goto retry;
	}

	map = rcu_dereference_protected(md->map,
					lockdep_is_held(&md->suspend_lock));
	if (!map || !dm_table_get_size(map))
		goto out;

	r = __dm_resume(md, map);
	if (r)
		goto out;

	clear_bit(DMF_SUSPENDED, &md->flags);

	r = 0;
out:
	mutex_unlock(&md->suspend_lock);

	return r;
}

/*
 * Internal suspend/resume works like userspace-driven suspend. It waits
 * until all bios finish and prevents issuing new bios to the target drivers.
 * It may be used only from the kernel.
 */
static void __dm_internal_suspend(struct mapped_device *md, unsigned suspend_flags)
{
	struct dm_table *map = NULL;

	if (md->internal_suspend_count++)
		return; /* nested internal suspend */

	if (dm_suspended_md(md)) {
		set_bit(DMF_SUSPENDED_INTERNALLY, &md->flags);
		return; /* nest suspend */
	}

	map = rcu_dereference_protected(md->map,
					lockdep_is_held(&md->suspend_lock));

	/*
	 * Using TASK_UNINTERRUPTIBLE because only NOFLUSH internal suspend is
	 * supported. Properly supporting a TASK_INTERRUPTIBLE internal suspend
	 * would require changing .presuspend to return an error -- avoid this
	 * until there is a need for more elaborate variants of internal suspend.
	 */
	(void) __dm_suspend(md, map, suspend_flags, TASK_UNINTERRUPTIBLE);

	set_bit(DMF_SUSPENDED_INTERNALLY, &md->flags);

	dm_table_postsuspend_targets(map);
}

static void __dm_internal_resume(struct mapped_device *md)
{
	BUG_ON(!md->internal_suspend_count);

	if (--md->internal_suspend_count)
		return; /* resume from nested internal suspend */

	if (dm_suspended_md(md))
		goto done; /* resume from nested suspend */

	/*
	 * NOTE: existing callers don't need to call dm_table_resume_targets
	 * (which may fail -- so best to avoid it for now by passing NULL map)
	 */
	(void) __dm_resume(md, NULL);

done:
	clear_bit(DMF_SUSPENDED_INTERNALLY, &md->flags);
	smp_mb__after_atomic();
	wake_up_bit(&md->flags, DMF_SUSPENDED_INTERNALLY);
}

void dm_internal_suspend_noflush(struct mapped_device *md)
{
	mutex_lock(&md->suspend_lock);
	__dm_internal_suspend(md, DM_SUSPEND_NOFLUSH_FLAG);
	mutex_unlock(&md->suspend_lock);
}
EXPORT_SYMBOL_GPL(dm_internal_suspend_noflush);

void dm_internal_resume(struct mapped_device *md)
{
	mutex_lock(&md->suspend_lock);
	__dm_internal_resume(md);
	mutex_unlock(&md->suspend_lock);
}
EXPORT_SYMBOL_GPL(dm_internal_resume);

/*
 * Fast variants of internal suspend/resume hold md->suspend_lock,
 * which prevents interaction with userspace-driven suspend.
 */
void dm_internal_suspend_fast(struct mapped_device *md)
{
	mutex_lock(&md->suspend_lock);
	if (dm_suspended_md(md) || dm_suspended_internally_md(md))
		return;

	set_bit(DMF_BLOCK_IO_FOR_SUSPEND, &md->flags);
	synchronize_srcu(&md->io_barrier);
	flush_workqueue(md->wq);
	dm_wait_for_completion(md, TASK_UNINTERRUPTIBLE);
}
EXPORT_SYMBOL_GPL(dm_internal_suspend_fast);

void dm_internal_resume_fast(struct mapped_device *md)
{
	if (dm_suspended_md(md) || dm_suspended_internally_md(md))
		goto done;

	dm_queue_flush(md);

done:
	mutex_unlock(&md->suspend_lock);
}
EXPORT_SYMBOL_GPL(dm_internal_resume_fast);

/*-----------------------------------------------------------------
 * Event notification.
 *---------------------------------------------------------------*/
int dm_kobject_uevent(struct mapped_device *md, enum kobject_action action,
		      unsigned cookie)
{
	char udev_cookie[DM_COOKIE_LENGTH];
	char *envp[] = { udev_cookie, NULL };

	if (!cookie)
		return kobject_uevent(&disk_to_dev(md->disk)->kobj, action);
	else {
		snprintf(udev_cookie, DM_COOKIE_LENGTH, "%s=%u",
			 DM_COOKIE_ENV_VAR_NAME, cookie);
		return kobject_uevent_env(&disk_to_dev(md->disk)->kobj,
					  action, envp);
	}
}

uint32_t dm_next_uevent_seq(struct mapped_device *md)
{
	return atomic_add_return(1, &md->uevent_seq);
}

uint32_t dm_get_event_nr(struct mapped_device *md)
{
	return atomic_read(&md->event_nr);
}

int dm_wait_event(struct mapped_device *md, int event_nr)
{
	return wait_event_interruptible(md->eventq,
			(event_nr != atomic_read(&md->event_nr)));
}

void dm_uevent_add(struct mapped_device *md, struct list_head *elist)
{
	unsigned long flags;

	spin_lock_irqsave(&md->uevent_lock, flags);
	list_add(elist, &md->uevent_list);
	spin_unlock_irqrestore(&md->uevent_lock, flags);
}

/*
 * The gendisk is only valid as long as you have a reference
 * count on 'md'.
 */
struct gendisk *dm_disk(struct mapped_device *md)
{
	return md->disk;
}
EXPORT_SYMBOL_GPL(dm_disk);

struct kobject *dm_kobject(struct mapped_device *md)
{
	return &md->kobj_holder.kobj;
}

struct mapped_device *dm_get_from_kobject(struct kobject *kobj)
{
	struct mapped_device *md;

	md = container_of(kobj, struct mapped_device, kobj_holder.kobj);

	if (test_bit(DMF_FREEING, &md->flags) ||
	    dm_deleting_md(md))
		return NULL;

	dm_get(md);
	return md;
}

int dm_suspended_md(struct mapped_device *md)
{
	return test_bit(DMF_SUSPENDED, &md->flags);
}

int dm_suspended_internally_md(struct mapped_device *md)
{
	return test_bit(DMF_SUSPENDED_INTERNALLY, &md->flags);
}

int dm_test_deferred_remove_flag(struct mapped_device *md)
{
	return test_bit(DMF_DEFERRED_REMOVE, &md->flags);
}

int dm_suspended(struct dm_target *ti)
{
	return dm_suspended_md(dm_table_get_md(ti->table));
}
EXPORT_SYMBOL_GPL(dm_suspended);

int dm_noflush_suspending(struct dm_target *ti)
{
	return __noflush_suspending(dm_table_get_md(ti->table));
}
EXPORT_SYMBOL_GPL(dm_noflush_suspending);

struct dm_md_mempools *dm_alloc_md_mempools(struct mapped_device *md, unsigned type,
					    unsigned integrity, unsigned per_io_data_size)
{
	struct dm_md_mempools *pools = kzalloc_node(sizeof(*pools), GFP_KERNEL,
						    md->numa_node_id);
	struct kmem_cache *cachep = NULL;
	unsigned int pool_size = 0;
	unsigned int front_pad;

	if (!pools)
		return NULL;

	type = filter_md_type(type, md);

	switch (type) {
	case DM_TYPE_BIO_BASED:
		cachep = _io_cache;
		pool_size = dm_get_reserved_bio_based_ios();
		front_pad = roundup(per_io_data_size,
				    __alignof__(struct dm_target_io)) +
			    offsetof(struct dm_target_io, clone);
		break;
	case DM_TYPE_REQUEST_BASED:
		cachep = _rq_tio_cache;
		pool_size = dm_get_reserved_rq_based_ios();
		pools->rq_pool = mempool_create_slab_pool(pool_size, _rq_cache);
		if (!pools->rq_pool)
			goto out;
		/* fall through to setup remaining rq-based pools */
	case DM_TYPE_MQ_REQUEST_BASED:
		if (!pool_size)
			pool_size = dm_get_reserved_rq_based_ios();
		front_pad =
			offsetof(struct dm_rq_clone_bio_info, clone);
		/* per_io_data_size is used for blk-mq pdu at queue allocation */
		break;
	default:
		BUG();
	}

	if (cachep) {
		pools->io_pool = mempool_create_slab_pool(pool_size, cachep);
		if (!pools->io_pool)
			goto out;
	}

	pools->bs = bioset_create_nobvec(pool_size, front_pad);
	if (!pools->bs)
		goto out;

	if (integrity && bioset_integrity_create(pools->bs, pool_size))
		goto out;

	return pools;

out:
	dm_free_md_mempools(pools);

	return NULL;
}

void dm_free_md_mempools(struct dm_md_mempools *pools)
{
	if (!pools)
		return;

	mempool_destroy(pools->io_pool);
	mempool_destroy(pools->rq_pool);

	if (pools->bs)
		bioset_free(pools->bs);

	kfree(pools);
}

static int dm_pr_register(struct block_device *bdev, u64 old_key, u64 new_key,
			  u32 flags)
{
	struct mapped_device *md = bdev->bd_disk->private_data;
	const struct pr_ops *ops;
	fmode_t mode;
	int r;

	r = dm_grab_bdev_for_ioctl(md, &bdev, &mode);
	if (r < 0)
		return r;

	ops = bdev->bd_disk->fops->pr_ops;
	if (ops && ops->pr_register)
		r = ops->pr_register(bdev, old_key, new_key, flags);
	else
		r = -EOPNOTSUPP;

	bdput(bdev);
	return r;
}

static int dm_pr_reserve(struct block_device *bdev, u64 key, enum pr_type type,
			 u32 flags)
{
	struct mapped_device *md = bdev->bd_disk->private_data;
	const struct pr_ops *ops;
	fmode_t mode;
	int r;

	r = dm_grab_bdev_for_ioctl(md, &bdev, &mode);
	if (r < 0)
		return r;

	ops = bdev->bd_disk->fops->pr_ops;
	if (ops && ops->pr_reserve)
		r = ops->pr_reserve(bdev, key, type, flags);
	else
		r = -EOPNOTSUPP;

	bdput(bdev);
	return r;
}

static int dm_pr_release(struct block_device *bdev, u64 key, enum pr_type type)
{
	struct mapped_device *md = bdev->bd_disk->private_data;
	const struct pr_ops *ops;
	fmode_t mode;
	int r;

	r = dm_grab_bdev_for_ioctl(md, &bdev, &mode);
	if (r < 0)
		return r;

	ops = bdev->bd_disk->fops->pr_ops;
	if (ops && ops->pr_release)
		r = ops->pr_release(bdev, key, type);
	else
		r = -EOPNOTSUPP;

	bdput(bdev);
	return r;
}

static int dm_pr_preempt(struct block_device *bdev, u64 old_key, u64 new_key,
			 enum pr_type
			 type, bool abort)
{
	struct mapped_device *md = bdev->bd_disk->private_data;
	const struct pr_ops *ops;
	fmode_t mode;
	int r;

	r = dm_grab_bdev_for_ioctl(md, &bdev, &mode);
	if (r < 0)
		return r;

	ops = bdev->bd_disk->fops->pr_ops;
	if (ops && ops->pr_preempt)
		r = ops->pr_preempt(bdev, old_key, new_key, type, abort);
	else
		r = -EOPNOTSUPP;

	bdput(bdev);
	return r;
}

static int dm_pr_clear(struct block_device *bdev, u64 key)
{
	struct mapped_device *md = bdev->bd_disk->private_data;
	const struct pr_ops *ops;
	fmode_t mode;
	int r;

	r = dm_grab_bdev_for_ioctl(md, &bdev, &mode);
	if (r < 0)
		return r;

	ops = bdev->bd_disk->fops->pr_ops;
	if (ops && ops->pr_clear)
		r = ops->pr_clear(bdev, key);
	else
		r = -EOPNOTSUPP;

	bdput(bdev);
	return r;
}

static const struct pr_ops dm_pr_ops = {
	.pr_register	= dm_pr_register,
	.pr_reserve	= dm_pr_reserve,
	.pr_release	= dm_pr_release,
	.pr_preempt	= dm_pr_preempt,
	.pr_clear	= dm_pr_clear,
};

static const struct block_device_operations dm_blk_dops = {
	.open = dm_blk_open,
	.release = dm_blk_close,
	.ioctl = dm_blk_ioctl,
	.getgeo = dm_blk_getgeo,
	.pr_ops = &dm_pr_ops,
	.owner = THIS_MODULE
};

/*
 * module hooks
 */
module_init(dm_init);
module_exit(dm_exit);

module_param(major, uint, 0);
MODULE_PARM_DESC(major, "The major number of the device mapper");

module_param(reserved_bio_based_ios, uint, S_IRUGO | S_IWUSR);
MODULE_PARM_DESC(reserved_bio_based_ios, "Reserved IOs in bio-based mempools");

module_param(reserved_rq_based_ios, uint, S_IRUGO | S_IWUSR);
MODULE_PARM_DESC(reserved_rq_based_ios, "Reserved IOs in request-based mempools");

module_param(use_blk_mq, bool, S_IRUGO | S_IWUSR);
MODULE_PARM_DESC(use_blk_mq, "Use block multiqueue for request-based DM devices");

module_param(dm_mq_nr_hw_queues, uint, S_IRUGO | S_IWUSR);
MODULE_PARM_DESC(dm_mq_nr_hw_queues, "Number of hardware queues for request-based dm-mq devices");

module_param(dm_mq_queue_depth, uint, S_IRUGO | S_IWUSR);
MODULE_PARM_DESC(dm_mq_queue_depth, "Queue depth for request-based dm-mq devices");
module_param(dm_numa_node, int, S_IRUGO | S_IWUSR);
MODULE_PARM_DESC(dm_numa_node, "NUMA node for DM device memory allocations");

MODULE_DESCRIPTION(DM_NAME " driver");
MODULE_AUTHOR("Joe Thornber <dm-devel@redhat.com>");
MODULE_LICENSE("GPL");
{"url":"http:\/\/www.intuitor.com\/student\/AP_PhysII_Q3_Objectives.php","text":"Mr. Rogers' AP Physics C: E&M (with IB Physics) Objectives Syllabus 1st Quarter 2nd Quarter 3rd Quarter 4th Quarter IB Objectives 3rd Q objectives small investigations IB internal assessment write up specs IB rubrics\nAP Physics C E&M Objectives\n\nAP Physics C E&M Standards\n\nE. Electromagnetism ............................................................16%\n\n1. Electromagnetic induction (including Faraday's law and Lenz's law)\n2. Inductance (including LR and LC circuits) *\n3. Maxwell's equations *\n\nSources of Magnetic Fields (continued)\n Essential Question: Can a magnetic field be used to create a force on an object?\n\nSolenoids\n\n1. Explain Ampere's law. Relevance: Like Gauss's Law, Ampere's Law can be used in deriving other useful equations such as finding the magnetic field generated by a solenoid. These devices are used in numerous applications from door bells to the starter on cars.\n\nB \u2022 ds = mo I\n\n1. Calculate the B- Field inside the object shown below: Keep in mind that both the solenoid and torroid are like stacks of current carrying rings. The magnetic field inside a current carrying ring is strong, but outside the ring is very weak. In the derivations of the equations for solenoids, The magnetic field outside the coils (or rings) is assumed to be zero.\n torroid: B = mo N I 2pr\n\n solenoid: B = mo N I L\n\n current carrying wire: B = mo I r 2pR2\n\nHomefun Prob 23, 25 p.895\n\n Essential Question: How can a magnetic field be used to generate electricity?\n\nMagnetic Flux\n\n1. Mathematically define magnetic flux--roughly speaking, the amount of magnetic field passing through a given surface.\n\nFB = \u00f2 B\u2022dA\n\nNote: both B and A are vectors. The direction of the area vector is perpendicular to the plane of the area. Since the equation is the integral of a dot product of two vectors, magnetic flux is a scalar.\n\n1. 
Explain briefly how a transformer works and why it requires an AC input.\n2. Nout \/ Nin = Vout \/ Vin\n\n3. Calculate the magnetic flux through a rectangular loop of wire next to a long thin current carrying wire in the same plane (p.850).\n\nFB= [ ( mo I b) \/ 2p ] ln (1 + a\/c)\n\n1. State Gauss's law for magnetism. Why is the magnetic flux through a closed surface zero? Because, unlike E-fields, magnetic fields form closed loops so that a flux line leaving the inside of a closed surface (giving it a positive value) will eventually come back around and enter the surface giving it a negative value). Hence, the two cancel each other out.\n\nB\u2022dA = 0\n\nRelevance: Transformers which make use of magnetic flux to raise and lower voltages are the basis of our electrical transmission system and the reason it uses AC rather than DC power.\n\n Demo: Tesla Coil\n\nUse a Tesla coil to light up a florescent tube from a distance.\n\nQuestions:\n\n1. Both a Tesla coil and a Van de Graaff Generator generate high voltages. What is the difference between a Van de Graaff Generator and Tesla coil?\n\n Essential Question: How would society be different if Faraday's Law of induction did not exist ?\n\n1. State and apply Faraday's Law of Induction. Describe how magnetic flux can be altered with respect to time. Relevance: Faraday's law describes how electricity can be generated by rotating a coil in a magnetic field.\n\ne = - dFB \/ dt\n\nNote: since both time and magnetic flux are scalars the potential or voltage must also be a scalar, which, of course, it is since it's a form of energy.\n\nHomefun (formative summative assessment): prob 1, 5 p.927\n\n Formal Physics Investigation Title Measurement of the acceleration due to gravity using a solenoid. Purpose To observe Faraday's law of induction by dropping a magnet through a solenoid. Overview Connect the terminals of the solenoid to the the LabPro voltage probe. 
Place the tip of the magnet at the entrance to the solenoid Drop the magnet completely through the solenoid and record the voltage transient. Repeat the process several times\u00a0 except this time sop the magnet at various distances inside the solenoid using the ball of modeling clay. Again record the transient. Data, Calculations By comparing the recordings in step 4 with the original trace, it should be possible to identify the position of the magnet at any points in the voltage where the voltage crosses the axis an goes from positive to negative or vice versa (flips polarity). indicate the magnet's position at these points. Write a short explanation for the transient's appearance noting anything of interest including any points where the voltage flips polarity. Questions, Conclusions Magnets can become demagnetized by repeated impacts. Take appropriate precautions' to pad the impact of the magnet when it lands after falling through the solenoid. Resources\/Materials: Solenoid, cow magnet, meter stick, ball of modeling clay, spacers,\u00a0 computer system set up with Vernier LabPro software and Lab Pro units\n\n Essential Question: Is Lenz's Law a different form of the first law of thermodynamics?\n\nLenz's Law - There's no free lunch\n\n1. State the direction of current in a loop of wire passing through a magnetic field.\n\n2. State Lenz's Law.\n\n3. Use Lenz's Law to determine the direction of current flow in loops of wire with changing magnetic fluxes.\n\nRelevance: Lenz's Law explains why it's impossible to get more electrical energy out of a generator than the mechanical energy required to turn it.\n\nHomefun (formative summative assessment): \u00a0prob 49\n\n Demo: Lenz's Law\n1. Drop a magnet down a copper tube and note the time to fall.\n2. Drop a piece of steel the same size as the magnet down and compare the time of falling to the first case.\n3. 
Note that the magnet drifts downward much more slowly than the non magnet\n Essential Question: What is the key difference in the generation of AC vs. DC power?\n\nGenerating Voltages With B-Fields\n\n1. Solve motional EMF problems. For a bar of length L moving at constant velocity perpendicular to a B-field:\n\nFB\u00a0 = q v B,\u00a0 \u00a0\u00a0\u00a0 equation 1)\n\ne = work done by FB per unit of charge\n\nNote that the force on each charge (electron) within the metal bar moves the charge the distance L from one end of the bar to the other, hence:\n\ne = FB (L) \/ q\n\nSubstituting for FE from equation 1) yields:\n\ne = q v B (L) \/ q\n\n= v B L\n\n\u2022 Rotating bar (note: v = r w)\n\u2022 Sliding bar\u00a0 (note: v = terminal velocity)\n\u2022 Rotating loop (note: FB = B A cos q)\n\u2022 Loop sliding at constant velocity through a constant B-field (p. 877)\n1. Use Lenz's Law to calculate forces in motional EMF problems.\n\nKey Principle: mechanical power in = electrical power out (in other words rate of energy converted to heat by the circuit's resistance)\n\nHomefun (formative summative assessment): prob 23, 25, 27\n\n Demo: Hand Cranked Generator\n1. Attach a hand cranked generator to a low voltage light bulb.\n2. Crank the generator until the bulb glows brightly.\n3. Crank the generator with nothing attached.\n\nQuestions:\n\n1. Why is there a sense of resistance when cranking the generator while attached to the light bulb but not while attached to nothing?\n2. The generator converts mechanical energy into electrical energy. Is it 100% efficient and could it ever be?\n3. How is the energy conversion different with the generator vs. the process of converting heat to work done by a heat engine?\n Formal Physics Investigation Title Investigation of an AC alternator Purpose Determine the relationship between the frequency output and amplitude of an AC alternator spinning at various rates of rotation. 
Overview The modified 5 1\/4 inch floppy drive has a DC motor with and integral AC alternator built into the back of the motor. The alternator generates a sine wave voltage output, which at one time was used as a speed control signal for the motor. The alternator is now attached to a BNC jack that can be connected with a coaxial cable\u00a0 to an oscilloscope. Attach the alternator to the oscilloscope and a variable power supply to the DC motor. Run the motor at various speeds and record the amplitude and frequency from the oscilloscope. Data, Calculations Plot the amplitude of the alternator's output vs. the frequency. Questions, Conclusions Why would the frequency be directly proportional to the alternator's rate of rotation? Why would the amplitude of the sine wave increase with the rate of rotation. Describe the relationship between the amplitude and frequency on your plot. Is it linear or non linear and why? Resources\/Materials: Modified floppy drive, coaxial jumper, banana plug wires, variable power supplies, oscilloscope\n\n Essential Question: Why is it essential to have mathematical models for wireless communication ?\n\nMaxwell Equations\n\n1. Describe the electric field from an EMF induced by by a magnetic field and state its general form.\n\nE\u2022ds =\u00a0\u00a0 - dFB \/ dt\n\nNote: E\u2022ds yields units of energy divided by charge, in other words, voltage. (See objective 17. It is just a different form of the above equation.)\n\n1. Calculate the electric field generated outside a solenoid with a radius = R, n coils per unit of length, and a variable current I = Io cos wt.\n\nNote: if the B-field inside a solenoid is changing, it creates an E-field both inside and outside of the solenoid.\n\n B = mo n I = mo n Io cos wt \u222eE\u2022ds = - pR2(- mo n Io w sin wt) E\u20222pr = pR2(mo n Io w sin wt) E = (R2mo n Io w sin wt) \/ (2r)\n1. Be as one with the 4 Maxwell equations. 
Relevance: The Maxwell equations are the basis for all forms of wireless communication, one of the key areas in electrical engineering.\n Gauss's Law: \u222eE \u2022 da = Q \/\u03b5o Gauss's Law in Magnetism \u222eB \u2022 da = 0 Faraday's Law \u222eE \u2022 ds = - dFB \/ dt Ampere-Maxwell law \u222eB \u2022 ds = moI - eomodFE \/ dt\n\nSummative Assessment: Test objectives 1-21\n\n Demo: Dipole Antennae\n\nUse a dipole antennae connected to a business band radio transmitter to light a nearby florescent tube without making contact with it.\n\nQuestions:\n\n1. Can power be transmitted wirelessly?\n\nAP Physics C E&M Standards\n\nC. Electric circuits (continued)..................................................................20%\n\n1. Current, resistance, power\n2. Steady-state direct current circuits with batteries and resistors only\n3.Capacitors in circuits\n\nb. Transients in RC circuits *\n\n Essential Question: What is a capacitor and why should we care?\n\nHow to Design Giant Capacitors (Chap26 Serway)\n\nRelevance: Capacitors are key electronic components found in almost any circuit. They act like springs, storing and releasing electrical energy. Capacitors can reduce otherwise harmful voltage fluctuations in a circuit.\n\n1. Define capacitance mathematically (p. 743).\nC = Q\/V\n1. Calculate capacitance for a parallel plate capacitor (p. 743).\nC = K* eo * A\/d\n1. Calculate the energy stored in a capacitor.\nU = 1\/2 *C*V^2\n1. Calculate and describe the E-field in a capacitor.\n\n2. Solve capacitor circuit problems.\n\n3. Solve problems in which dielectric material is inserted or removed (p.751).\n\nBattery attached:\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 voltage = constant,\u00a0\u00a0 charge = variable\nDetached from Battery:\u00a0\u00a0 voltage = variable,\u00a0\u00a0\u00a0 charge = constant\n\nHomefun (formative summative assessment): Questions 1-10 p. 762; prob. 
11, 15, 29, 33, 73 p.764-769\n\n Video: Demonstration of Electrostatic Percipitator\n\nShow video of various capacitor demonstrations. Explain that a capacitor is an energy storage device like a spring.\n\nQuestions:\n\n1. Why would a capacitor be useful in power supply designed to convert AC into DC?\n2. What type of power do most electronic devices use internally?\n3. What is the most obvious way to increase the capacitance of a capacitor without changing the volume of the device. In other words, without making it large.\n\n Essential Question: How are resistors, capacitors, and inductors analogous to elements in mechanics?\n\nRC\u00a0 Circuits\n\n1. State how a capacitor behaves at time = 0 and infinity.\n2. Solve RC circuit problems using the above principle for how capacitors behave at time equals zero and infinity.\n3. Use the above principles to sketch the following curve for a charging capacitor:\n\u2022 charge vs time\n\u2022 current vs time\n1. Use the above principles to sketch the following curve for a discharging capacitor:\n\u2022 charge vs time\n\u2022 current vs time\n\u2022 voltage vs time\n1. Using Kirchhoff's Law, write the differential equation for an RC circuit (p.808).\n\ne = q \/ C\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 for a capacitor\n\ne - iR -\u00a0 q \/ C = 0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 for an RC circuit\n\nCe - (dq\/dt) RC - q = 0\n\nRC(dq\/dt)\u00a0 + q - Ce = 0\n\n1. For a charging RC circuit (p.808) Calculate the following:\n\u2022 charge vs time\n\u2022 current vs time\n\u2022 time constant\n1. For a discharging RC circuit (p.808) Calculate the following:\n\u2022 charge vs time\n\u2022 current vs time\n\u2022 voltage vs time\n\u2022 time constant\n1. Describe how time constant could be used to measure the capacitance of an unknown capacitor.\n\nt = RC\n\nHomefun\u00a0 (formative summative assessment): prob. 43, 44, 45 p. 
824\n\n Formal Physics Investigation Title Investigation of the Discharging Curve for a Capacitor Purpose To estimate the capacitance of a capacitor using time constant and a voltage drop vs time curve for capacitor discharging through a known resistor. Models t = RC, v = vo e -t \/ RC Overview Turn the power supply to its lowest voltage setting. (We will be using only DC voltage). With the power supply off, connect the positive terminal of the capacitor to a resistor and the resistor to the positive terminal of the power supply. Complete the circuit by connecting the negative terminal of the power supply to the negative terminal of the capacitor. Connect the Vernier voltage probes to the capacitor just like a voltmeter. Turn on the power supply and charge the capacitor to 6 volts while collecting data on the charging curve. Disconnect the power supply and discharge the capacitor through the resistor while collecting data with the Vernier system. Safety Issues Unlike batteries, capacitors can discharge their energy almost instantaneously. Shorting one out will very likely result in equipment damage and possibly a fire. Capacitors must be both charged and discharged through a resistor or damage will result. Equipment Limitations Do not exceed 10 volts. Higher voltages may damage the LabPro's electronics Resources\/Materials: Large sized capacitor, resistor of known value, computer system set up with Vernier LabPro software and Lab Pro units, wires, power supply\n\nAP Physics C E&M Standards\n\nE. Electromagnetism (continued)............................................................16%\n\n1. Electromagnetic induction (including Faraday's law and Lenz's law)\n2. Inductance (including LR and LC circuits) *\n3. Maxwell's equations *\n\nLR Circuits (Chap. 32 Serway)\n\nRelevance: Inductors are a key element in numerous types of electronic devices including computer power supplies.\n\nAP Physics C E&M Standards - E. Electromagnetism (16 %), 2. 
Inductance (including LR circuits)\n\n1. State how an inductor behaves at time = 0 and infinity.\n2. Solve LR circuit problems using the above principle for how inductors behave at time equals zero and infinity.\n3. Use the above principles to sketch the current vs time for a charging inductor.\n4. Use the above principles to sketch the current vs time and voltage vs time curve for a discharging inductor:\n5. For a charging LR circuit (p.944) Calculate the following:\n\u2022 current vs time\n\u2022 time constant\nNote:\neL = - L (di\/dt)\n\nUsing Kirchhoff's Law\n\ne \u00a0- L (di\/dt) - Ri = 0\n\nL (di\/dt) + Ri - e = 0\n\n1. For a discharging LR circuit (p.944) Calculate the following:\n\u2022 current vs time\n\u2022 voltage vs time\n\u2022 time constant\n1. Describe how time constant could be used to measure the inductance of an unknown inductor.\n\nt = L \/ R\n\n1. Calculate the energy stored in an inductor.\n\nUL = 1\/2 L I2\n\nHomefun (formative summative assessment): 17, 19, 21 p. 957\n\n Hollywood Video Clip: The Core\n\nShow a video clip of the Virgil's crew communicating with the surface from deep within the Earth.\n\nQuestions:\n\n1. How could a vehicle communicate with the surface from inside the Earth?\n2. Is there such a thing as a radio controlled submarine?\n\nThe Skin Depth Equation (Wikipedia)\n\nThe current density J in an infinitely thick plane conductor decreases exponentially with depth \u03b4 from the surface, as follows:\n\n$J=J_\\mathrm{S} \\,e^{-{\\delta \/d}}$\n\nwhere d is a constant called the skin depth. This is defined as the depth below the surface of the conductor at which the current density decays to 1\/e (about 0.37) of the current density at the surface (JS). 
It can be calculated as follows:\n\n$d=\\sqrt{{2\\rho }\\over{\\omega\\mu}}$\n\nwhere\n\n\u03c1 = resistivity of conductor\n\u03c9 = angular frequency of current = 2\u03c0 \u00d7 frequency\n\u03bc = absolute magnetic permeability of conductor $= \\mu_0 \\cdot \\mu_r$, where \u03bc0 is the permeability of free space and \u03bcr is the relative permeabilty of the conductor.\n\n Essential Question: Can an electrical circuit resonate and why would this be important?\n\nLC and RLC Circuits (Chap. 32 Serway)\n\n1. Write an energy balance equation for an LC circuit. (see the Physics of Resonance - Electrical Circuits, also see p. 949 Serway)) Relevance: Resonating LC circuits are the basis of wireless communication and electronic music.\n\nU = U C + UL\n\n1. Calculate the frequency of an LC circuit.\nU = U C + UL\n\nU = Q2\/(2C) + i2 (L \/ 2)\n\nbut\n\nQmax2\/(2C) = imax2 (L \/ 2)\n\nQmax2\/(2C) = (- \u03c9Qmax)2 (L \/ 2)\n\n\u03c9 = [ 1 \/ (LC) ]0.5\n\n f = 1 \/ [2p (LC) 0.5 ]\nQ = Qmax(cos \u03c9t)\n\ndQ\/dt = - \u03c9Qmax(sin \u03c9t)\n\ni = - \u03c9Qmax(sin \u03c9t)\n\nimax = - \u03c9Qmax\n\n1. Draw an analogy between an LC circuit and a spring and mass system.\n2. Draw an analogy between an RLC circuit and a spring and mass system.\n Electrical Mechanical Capacitance\u00a0\u00a0 C (1\/C) q Spring k x Resistance\u00a0\u00a0\u00a0\u00a0\u00a0R R dq\/dt Viscous damper b dx\/dt Inductance\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0L L (d2q\/dt2) Inertia m (d2x\/dt2)\n1. Explain the difference between dampening and damping.\n\nSummative Assessment: Test objectives 1-26\n\nDemonstration from\u00a0 The Physics of Resonance.\n\n1. Briefly explain how a crystal radio works.\n2. Connect a crystal radio to an oscilloscope. Inductively couple a DC power supply to the crystal radio and give it a pulse.\n3. Observe the decaying sine wave.\n\nThe crystal radio is configured to resonate at the radio station's frequency it is tuned to. 
Subjecting the radio to a pulse is like striking a bell: the respective actions cause both the bell and the electrical circuit to resonate, with the bell emitting sound at its natural frequencies and the radio emitting radio waves.

Questions:

1. Why does an extremely strong electromagnetic pulse of energy, such as would be produced by an atomic bomb, tend to wipe out wireless communications?
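The skin depth equation from the lesson above also bears on the question of communicating with the surface from inside the Earth: in a conductive medium the signal is attenuated within a few skin depths, and only very low frequencies penetrate far. A minimal Python sketch; the seawater resistivity ρ ≈ 0.25 Ω·m and μr ≈ 1 are assumed illustrative values, not figures from the lesson:

```python
import math

MU0 = 4 * math.pi * 1e-7  # permeability of free space (T·m/A)

def skin_depth(rho, freq, mu_r=1.0):
    """d = sqrt(2*rho / (omega*mu)): depth at which current density falls to 1/e."""
    omega = 2 * math.pi * freq
    return math.sqrt(2 * rho / (omega * MU0 * mu_r))

# Seawater (rho ~ 0.25 ohm·m): ELF signals reach submarines, MHz signals do not.
for f in (76, 10e3, 1e6):
    print(f"{f:>10.0f} Hz: skin depth = {skin_depth(0.25, f):8.2f} m")
```

This is why real submarine communication systems have used extremely low frequencies (tens of Hz): the skin depth there is tens of metres, versus fractions of a metre at broadcast frequencies.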
Minimizing movements for dislocation dynamics with a mean curvature term
ESAIM: Control, Optimisation and Calculus of Variations, Tome 15 (2009) no. 1, pp. 214-244.

We prove existence of minimizing movements for the dislocation dynamics evolution law of a propagating front, in which the normal velocity of the front is the sum of a non-local term and a mean curvature term. We prove that any such minimizing movement is a weak solution of this evolution law, in a sense related to viscosity solutions of the corresponding level-set equation. We also prove the consistency of this approach, by showing that any minimizing movement coincides with the smooth evolution as long as the latter exists. In relation with this, we finally prove short time existence and uniqueness of a smooth front evolving according to our law, provided the initial shape is smooth enough.

DOI: https://doi.org/10.1051/cocv:2008027
Classification: 53C44, 49Q15, 49L25, 28A75, 58A25
Keywords: front propagation, non-local equations, dislocation dynamics, mean curvature motion, viscosity solutions, minimizing movements, sets of finite perimeter, currents

@article{COCV_2009__15_1_214_0,
  author = {Forcadel, Nicolas and Monteillet, Aur\'elien},
  title = {Minimizing movements for dislocation dynamics with a mean curvature term},
  journal = {ESAIM: Control, Optimisation and Calculus of Variations},
  pages = {214--244},
  publisher = {EDP-Sciences},
  volume = {15},
  number = {1},
  year = {2009},
  doi = {10.1051/cocv:2008027},
  mrnumber = {2488577},
  language = {en},
  url = {http://www.numdam.org/item/COCV_2009__15_1_214_0/}
}

Forcadel, Nicolas; Monteillet, Aurélien. Minimizing movements for dislocation dynamics with a mean curvature term. ESAIM: Control, Optimisation and Calculus of Variations, Tome 15 (2009) no. 1, pp. 214-244. doi: 10.1051/cocv:2008027.
So SO SOOOOOOO much to do today. The winner of Week 1 (The Pattern Remix Challenge) is Danielle from My Sparkle with her lovely "Every Color in the Box Dress". What a beautiful design! And those colors are delicious! Next up is an item of business that we would rather just skip. (It's the part where we have to say goodbye to one of the designers.) Honestly, as much as we love this competition, I don't think we will ever get used to letting anyone go. Obviously we don't want anyone to go---we invited them all here to play. But our first friend to leave us is Susan from Living With Punks. (At least we can still pop on over to her blog and check out her ongoing awesomeness...like the kaleidoscope blanket that she just made...it's so cool.) Thank you, Susan, for sharing your amazing talent with us. Rachel was chosen by our judges to be the winner of the at-home sew-along contest. Rachel will be receiving a $25 gift certificate to Fabric.com. We could not, really could not, believe all the awesome entries uploaded to the Flickr group for this week's sew-along. We loved watching all the projects coming in and decided early on that we would need to do a montage of all our favorite projects. But then reality set in and soon we realized that our montage would literally be the entire Flickr group. Today is the beginning of Week 2 and so starts our 2nd challenge---sewing for boys. That's right, it's BOYS' WEEK!!! The next 5 days will be dedicated entirely to creating clothing for all the little men in our lives. And the Flickr group is waiting.....so start uploading the photos of your boys' projects...we will be watching anxiously. Whew! Ok, I think that is everything. Congratulations Danielle and Susan! Both dresses were amazing! I really look forward to shopping @ fabric.com!!! Congratulations! Both dresses are beautiful!
Hoop Nut

#PBA2015 Govs Cup Roundup: June 26, 2015

The 2015 PBA Govs' Cup playoffs kicked off with the Star Hotshots' tremendously one-sided victory over higher-seeded GlobalPort, followed by an equally tremendous Ginebra meltdown in the face of a spirited Alaska fightback.

Game recaps (adapted from Interaksyon):

PUREFOODS STAR HOTSHOTS over GLOBALPORT BATANG PIER, 126-73

The Star Hotshots left little doubt of their ability to go all the way in the PBA Governors' Cup after utterly dominating GlobalPort, 126-73, in a one-sided quarterfinal series opener on Friday at the SMART-Araneta Coliseum. Despite coming in as the No. 5 seed against the Batang Pier, the Hotshots flashed awe-inspiring form as they forced a do-or-die clash for a spot in the semifinals. It was the fourth-largest margin in PBA history, according to PBA head of statistics Fidel Mangonon III.

Marqus Blakely posted impressive numbers with 23 points, 9 rebounds, 5 assists, 2 steals, and 2 blocks to pace several Star players with big contributions. Mark Barroca scored 16 points on 7-for-9 shooting while also dishing off 5 assists. Alex Mallari and James Yap both scored 15 points, while three other locals scored eight or more points.

On the other end, Terrence Romeo led the way with 18 points and 4 triples, while Jarrid Famous underwhelmed with only 16 points on 5/13 FG shooting. GlobalPort guards Omar Krayem and Stanley Pringle also disappointed with a combined 8 markers.

Line of the Game: Marqus Blakely (PUR) - 23pts, 9rebs, 5asts, 2stls, 2blks, 10/11 FGs.

James Yap and the rest of the Hotshots successfully forced a rubber match against GlobalPort. Marqus Blakely soared over and dominated the Batang Pier. Terrence Romeo fought hard, but his team still fell in such a bad way.

ALASKA ACES over BGY. GINEBRA GIN KINGS, 114-108

The top-seeded Alaska Aces needed to come back from 16 points down but found their game at just the right time as they eliminated Barangay Ginebra from the PBA Governors' Cup quarterfinals, 114-108, on Friday at the SMART-Araneta Coliseum. Alaska, which had a twice-to-beat advantage heading into this series, advanced to the semifinals, where they will await the winner of the GlobalPort-Star pairing.

Import Romeo Travis finished with 29 points, 21 rebounds, and 6 assists to lead the Aces, while Vic Manuel and Sonny Thoss each added 14. Alaska fell behind, 65-49, early in the third but mounted a huge comeback behind Dondon Hontiveros, who scored eight points in an 18-1 run that put them ahead late in the quarter. Hontiveros finished with 13 markers. Cyrus Baguio also reached double figures with 10.

The Kings pushed their advantage to a high of 16 in the third quarter before the Aces finally found their rhythm. A confrontation between Calvin Abueva and Johnson seemed to fire up Alaska as they went on a blistering 18-1 run that swung things in their favor. Alaska led, 77-75, at the end of the third, and then took care of business in the final quarter, where they led by as many as 13, to cruise to the win.

Ginebra's final game of the season saw Orlando Johnson drop 28 points, while Sol Mercado added 21.

Line of the Game: Romeo Travis (ALA) - 29pts, 21rebs, 6asts, 3blks, 1stl.

Vic Manuel helped the Aces erase a double-digit deficit and eliminate the Ginebra Gin Kings. Images from the PBA.
## Quantized energy in infinite potential well

How does energy become quantized in an infinite potential well?

Recognitions: Science Advisor
How do the harmonics of a finite string become quantized?

Honestly, from a mathematical standpoint it comes about due to the boundary conditions of the Schrodinger equation for an infinite potential well.

Quote by rozan977: How does energy become quantized in an infinite potential well?
The proper functions (eigenfunctions) of the Hamiltonian are sin(πnz), where z is the dimensionless length z = x/L. The proper values (eigenvalues) are proportional to (πn)².

Any, I repeat, any wave inside the well can be decomposed into a sum of proper waves with some amplitudes. In the general case the wave energy is not certain but dispersed. Only in the eigenstates is the energy certain.

Quote by rozan977: How does energy become quantized in an infinite potential well?
As the frequencies of a string on a guitar: $$E_n = h \nu_n = n h \nu$$

That is, through (periodic) boundary conditions.
For this aspect, Pythagoras was the first to study a problem of QM mechanics.

Quote by Feldoh: Honestly from a mathematical standpoint it comes about due to the boundary conditions of the Schrodinger equation for an infinite potential well.
But what if the solution we assume of the Schrodinger equation is in exponential form?

Quote by rozan977: But what if the solution we assume of the Schrodinger equation is in exponential form?
It is completely equivalent.

Quote by rozan977: But what if the solution we assume of the Schrodinger equation is in exponential form?
In order to satisfy the boundary conditions, the two complex exponentials have to have certain coefficients that make their sum be sin(πnx/L).
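The boundary-condition argument in the thread leads to the standard infinite-well level formula E_n = n²π²ħ²/(2mL²), with the n² scaling coming from the (πn)² eigenvalues mentioned above. A minimal Python sketch; the electron-in-a-1-nm-well case is an illustrative choice, not one from the thread:

```python
import math

HBAR = 1.054571817e-34  # reduced Planck constant, J·s
M_E = 9.1093837015e-31  # electron mass, kg
EV = 1.602176634e-19    # joules per electronvolt

def energy_level(n, L, m=M_E):
    """E_n = n^2 * pi^2 * hbar^2 / (2 m L^2) for an infinite square well."""
    return (n * math.pi * HBAR) ** 2 / (2 * m * L ** 2)

# Electron in a 1 nm wide well: levels scale as n^2, so the spacing grows with n.
for n in range(1, 4):
    print(f"E_{n} = {energy_level(n, 1e-9) / EV:.3f} eV")
```

The discrete, n²-spaced output is exactly the quantization the thread is asking about: only these energies admit standing waves that vanish at both walls.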
## A "Mechanics" challenge

1. Michele_Laino

Let's suppose a collision between a neutron which is moving with a velocity V1, and a nucleus at rest. The unit of measure of the masses is the nucleon mass, so the neutron has mass equal to 1, whereas the mass of the nucleus is A, where A is the mass number of that nucleus.
1) Find the velocity of the center of mass of the system neutron-nucleus.
2) Show that with respect to the center of mass, the total momentum of the system nucleus-neutron is the null vector.
@IrishBoy123 @Empty @Astrophysics

2. Michele_Laino

Hint: the velocity of the center of mass of a system composed of N particles, whose masses are m_i and velocities are v_i, is given by the following formula: $$\mathbf{v}_{CM} = \frac{\sum_1^N m_i \mathbf{v}_i}{\sum_1^N m_i}$$

3. IrishBoy123

*First* bit: $$\vec v_{cm}=\frac{\sum^N_1 m_i \vec v_i}{\sum^N_1 m_i}$$ Conservation of momentum: $$\vec v_1 = \vec v_2 + A \vec v_f$$ $$\vec v_{cm}=\frac{\vec v_2 + A \vec v_f}{A + 1} = \frac{\vec v_1}{A + 1}$$ If that's totally off beam, pls advise.

4. anonymous

Using conservation of momentum, $$m_{neutron}v_1 + m_{nucleus}v_{nucleus} = m_{neutron}v_1' + m_{nucleus}v_{nucleus}'$$ $$(1) v_1 + A(0) = (1)v' + A(v')$$ $$v' = \frac{v_1}{A+1}$$ I do not understand part 2. The system has non-zero momentum before the collision and exactly the same momentum after the collision. How can it be zero?

5. IrishBoy123

For the second part, using conservation of energy: $$\vec v_1^2 = \vec v_2^2 + A \vec v_f^2$$ $$(\vec v_1 - \vec v_2)\cdot(\vec v_1 + \vec v_2) = A (\vec v_f \cdot \vec v_f)$$ From conservation of momentum above: $$\vec v_1 - \vec v_2 = A \vec v_f \quad \text{[A]}$$ $$A \vec v_f \cdot (\vec v_1 + \vec v_2) = A \vec v_f \cdot \vec v_f$$ $$\vec v_1 + \vec v_2 = \vec v_f \quad \text{[B]}$$ [A] and [B] give: $$\vec v_f = \frac{2 \vec v_1}{1+A}$$ $$\vec v_2 = \frac{\vec v_1 (1-A)}{1+A}$$ and these follow also from $$\vec v_{cm}$$ in the previous post: $$\vec v_f - \vec v_{cm} = \frac{\vec v_1}{1+A}$$ $$\vec v_2 - \vec v_{cm} = \frac{-\vec v_1 A}{1+A}$$ The momentum $$\vec p$$ of the system measured from the reference frame of the centre of mass is: $$\vec p = A (\vec v_f - \vec v_{cm}) + (1)(\vec v_2 - \vec v_{cm}) = A \frac{\vec v_1}{1+A} + (1) \frac{-\vec v_1 A}{1+A} = \vec 0$$

6. IrishBoy123

In order to verify conservation of momentum from the reference frame of the centre of mass, looking at the "before" picture: $$\vec v_{cm_o} = \frac{(1)\vec v_1 + A\,\vec 0}{1 + A} = \frac{\vec v_1}{1 + A}$$ i.e. the same. From the reference frame of the centre of mass, the momentum $$\vec p_o$$ was: $$\vec p_o = (1)(\vec v_1 - \vec v_{cm_o}) + A(\vec 0 - \vec v_{cm_o}) = (1)\left(\vec v_1 - \frac{\vec v_1}{1+A}\right) + A\left(-\frac{\vec v_1}{1 + A}\right) = \vec 0$$

7. Michele_Laino

Nicely done!! :) @IrishBoy123
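IrishBoy123's result can be checked numerically. A minimal Python sketch of the elastic-collision formulas derived in the thread; the choice A = 12 (a neutron hitting carbon-12) is only an illustrative case:

```python
# Elastic head-on collision: neutron (mass 1, velocity v1) hits a nucleus (mass A) at rest.
def cm_check(A, v1):
    v_cm = v1 / (1 + A)            # velocity of the center of mass
    v2 = v1 * (1 - A) / (1 + A)    # neutron velocity after the collision
    vf = 2 * v1 / (1 + A)          # nucleus velocity after the collision
    # total momentum measured in the center-of-mass frame, before and after
    p_before = 1 * (v1 - v_cm) + A * (0 - v_cm)
    p_after = 1 * (v2 - v_cm) + A * (vf - v_cm)
    return v_cm, p_before, p_after

v_cm, p_b, p_a = cm_check(A=12, v1=1.0)
print(v_cm, p_b, p_a)  # p_b and p_a both come out (numerically) zero
```

Both momenta vanish up to floating-point error, confirming part 2 of the challenge: in the center-of-mass frame the total momentum is the null vector before and after the collision.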
If you're looking for a shutter repair service in London, one thing you need to ensure is that the work is carried out by those with the right level of experience and expertise. When it comes to shutter repair services, London businesses and residents want a repair professional that offers great service and quality as well as affordable pricing. This is where our team of experts can help: we have years of experience within this industry and are able to carry out repairs on all types of shutters. If you require a shutter repair service in London, you can turn to us for a fast and efficient service delivered by experts. This means you get the peace of mind that the repair of your shutter will be carried out to the highest standard and that you can look forward to exemplary levels of service. Whether you need a general shutter repair or an emergency one, our London experts can help. All you need to do is contact us and speak to a member of the team so that we can arrange for a shutter repair professional to come out to you and carry out the work.
The work for the Digitally Left Behind Community is led by a small group of IT industry, academic and technical leaders from around the globe. The following people are actively contributing to this work:

====[https://www.linkedin.com/in/dan-bailey-579b57/?originalSubdomain=uk Dan Bailey]====
Chief Technology Officer for IBM UK & Ireland Services. He has over 24 years of experience in the IT industry, having worked across the globe in numerous industries.

====[https://www.linkedin.com/in/leah-dienger/ Leah Dienger, MSW]====
An agent of change and innovation, Leah brings to IBM over 20 years of experience as a social worker and advocate for vulnerable populations. At IBM she is a Government Health & Human Services Advisor, working with clients to design solutions to support efficiency in social program delivery, encourage equitable distribution of limited resources, and ultimately improve outcomes for individuals in need.

====[https://www.linkedin.com/in/hopkira/ Richard Hopkins, FREng]====
Richard was the nineteenth President of IBM's Academy of Technology, an IBM Distinguished Engineer and a Fellow of the Royal Academy of Engineering. He has an extensive history of delivering complex IT for the UK Government, including many systems that intersect with the Digitally Left Behind Community.

====[https://www.linkedin.com/in/libertyjacklin/ Liberty Jacklin]====
Liberty Jacklin is a Senior Software Engineer at IBM, where she has worked across multiple industries. She is passionate about helping people learn technical skills, improving diversity and inclusion in technical roles and making sure technology is only used as a force for good.

====[https://www.linkedin.com/in/anjumehta/ Anju Mehta]====
Anju Mehta is a Technical Solution Leader at IBM, India. She has experience in technical sales, designing and implementing solutions for corporates. She has a passion for automation and cognitive intelligence, which she implements in her solutions.
====[https://www.linkedin.com/in/ian-nussey-9925338/ Dr Ian Nussey, OBE FREng]====
Ian spent over 50 years working for IBM in engineering leadership roles. Since retirement he has spent eight years on the Engineering Council, chairing the Academy and Royal Society's Science and Engineering Policy Studies Unit and serving as inaugural chair of the Proactive Membership Committee.

====[https://www.linkedin.com/in/mauriceperks/ Dr Maurice Perks]====
Dr Maurice Perks is a retired IBM Fellow, with more than 40 years' experience in IT systems across several sectors. He spends one day a week helping the DLBC at an IT walk-in centre.

====[https://www.linkedin.com/in/schottthomas/ Dr Thomas Schott]====
Tom is a recently retired program manager from IBM's cloud, cognitive and consulting organizations, specializing in software development and solutions with global and distributed teams.

====[https://www.linkedin.com/in/phil-smith-cbe-a2332220/ Phil Smith, CBE FREng]====
Phil Smith is a systems engineer. At Philips Electronics, he led complex IT security and large network developments. At IBM, he built the company's first-of-its-kind corporate network performance measurement system and designed its European TCP/IP network. At Cisco, he created some of the world's biggest business-oriented IP networks. While its UK and Ireland Chief Executive and Chairman, he used some of the $1.5B inward investment he secured to create Innovation Centres and Technology Incubators.

====[https://www.linkedin.com/in/nickvlastaras/ Nikolaos Vlastaras]====
Nikolaos serves as the Sales and Solution Office Lead of IBM's Client Innovation Center in the Netherlands, with experience as a project manager and design thinking advocate in numerous industries. He has a passion and research background in cognitive engineering for human-centered automation design and performance.
====[https://www.linkedin.com/in/chriscwinter/ Chris Winter]====
Independent IT consultant, retired IBM Fellow and Royal Academy of Engineering Visiting Professor at the University of Plymouth. His IT career began in 1969.

====[https://www.linkedin.com/in/maria-wolters-9649942/ Dr Maria Wolters]====
Maria's research focuses on supporting people with long-term conditions to live rich and meaningful lives. She has a background in computational linguistics and speech science (PhD, 2000, University of Bonn), human-computer interaction, assistive technology, and eHealth, and a long-term interest in statistics.
package com.amarin.mywaiter.fragment;

import android.app.Activity;
import android.app.AlertDialog;
import android.app.Fragment;
import android.content.Context;
import android.content.DialogInterface;
import android.os.AsyncTask;
import android.os.Bundle;
import android.support.annotation.Nullable;
import android.support.v7.widget.DefaultItemAnimator;
import android.support.v7.widget.LinearLayoutManager;
import android.support.v7.widget.RecyclerView;
import android.view.LayoutInflater;
import android.view.View;
import android.view.ViewGroup;
import android.widget.ViewSwitcher;

import com.amarin.mywaiter.R;
import com.amarin.mywaiter.adapter.MenuRecyclerViewAdapter;
import com.amarin.mywaiter.facade.OnDishSelectedListener;
import com.amarin.mywaiter.model.Allergen;
import com.amarin.mywaiter.model.Dish;
import com.amarin.mywaiter.model.Restaurant;
import com.amarin.mywaiter.utils.MyWaiterConstants;

import org.json.JSONArray;
import org.json.JSONObject;

import java.io.InputStream;
import java.math.BigDecimal;
import java.net.HttpURLConnection;
import java.net.URL;
import java.util.HashMap;
import java.util.LinkedList;
import java.util.Map;

public class MenuFragment extends Fragment {

    protected ViewSwitcher mViewSwitcher;
    protected RecyclerView mList;
    protected OnDishSelectedListener mOnDishSelectedListener;
    protected int mTablePosition;

    // Maps the icon identifiers returned by the API to local drawable resources.
    private static final Map<String, Integer> DISH_ICONS = new HashMap<>();
    private static final Map<String, Integer> ALLERGEN_ICONS = new HashMap<>();

    static {
        DISH_ICONS.put(MyWaiterConstants.CALDO_GALLEGO, R.drawable.caldo_gallego);
        DISH_ICONS.put(MyWaiterConstants.CALLOS, R.drawable.callos);
        DISH_ICONS.put(MyWaiterConstants.COCIDO_GALLEGO, R.drawable.cocido_gallego);
        DISH_ICONS.put(MyWaiterConstants.EMPANADA, R.drawable.empanada);
        DISH_ICONS.put(MyWaiterConstants.LACON_CON_GRELOS, R.drawable.lacon_con_grelos);
        DISH_ICONS.put(MyWaiterConstants.PAN_GALLEGO, R.drawable.pan_gallego);
        DISH_ICONS.put(MyWaiterConstants.PULPO_A_LA_GALLEGA, R.drawable.pulpo_a_la_gallega);

        ALLERGEN_ICONS.put(MyWaiterConstants.ALLERGEN_ALTRAMUZ, R.drawable.allergen_altramuz);
        ALLERGEN_ICONS.put(MyWaiterConstants.ALLERGEN_APIO, R.drawable.allergen_apio);
        ALLERGEN_ICONS.put(MyWaiterConstants.ALLERGEN_CACAHUETES, R.drawable.allergen_cacahuetes);
        ALLERGEN_ICONS.put(MyWaiterConstants.ALLERGEN_CEREAL_CON_GLUTEN, R.drawable.allergen_cereal_con_gluten);
        ALLERGEN_ICONS.put(MyWaiterConstants.ALLERGEN_CRUSTACEOS, R.drawable.allergen_crustaceos);
        ALLERGEN_ICONS.put(MyWaiterConstants.ALLERGEN_FRUTOS_SECOS, R.drawable.allergen_frutos_secos);
        ALLERGEN_ICONS.put(MyWaiterConstants.ALLERGEN_HUEVOS, R.drawable.allergen_huevos);
        ALLERGEN_ICONS.put(MyWaiterConstants.ALLERGEN_LACTEOS, R.drawable.allergen_lacteos);
        ALLERGEN_ICONS.put(MyWaiterConstants.ALLERGEN_MOLUSCOS, R.drawable.allergen_moluscos);
        ALLERGEN_ICONS.put(MyWaiterConstants.ALLERGEN_MOSTAZA, R.drawable.allergen_mostaza);
        ALLERGEN_ICONS.put(MyWaiterConstants.ALLERGEN_PESCADO, R.drawable.allergen_pescado);
        ALLERGEN_ICONS.put(MyWaiterConstants.ALLERGEN_SESAMO, R.drawable.allergen_sesamo);
        ALLERGEN_ICONS.put(MyWaiterConstants.ALLERGEN_SOJA, R.drawable.allergen_soja);
        ALLERGEN_ICONS.put(MyWaiterConstants.ALLERGEN_SULFITOS, R.drawable.allergen_sulfitos);
    }

    public static MenuFragment newInstance(int tablePosition) {
        MenuFragment fragment = new MenuFragment();
        Bundle arguments = new Bundle();
        // putInt matches the getInt() read in onCreate() (was putSerializable)
        arguments.putInt(MyWaiterConstants.ARG_TABLE_POSITION, tablePosition);
        fragment.setArguments(arguments);
        return fragment;
    }

    @Override
    public void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        if (getArguments() != null) {
            mTablePosition = getArguments().getInt(MyWaiterConstants.ARG_TABLE_POSITION);
        }
    }

    @Nullable
    @Override
    public View onCreateView(LayoutInflater inflater, ViewGroup container, Bundle savedInstanceState) {
        super.onCreateView(inflater, container, savedInstanceState);
        View root = inflater.inflate(R.layout.fragment_menu, container, false);
        mViewSwitcher = (ViewSwitcher) root.findViewById(R.id.view_switcher);
        mViewSwitcher.setInAnimation(getActivity(), android.R.anim.fade_in);
        mViewSwitcher.setOutAnimation(getActivity(), android.R.anim.fade_out);
        mList = (RecyclerView) root.findViewById(android.R.id.list);
        mList.setLayoutManager(new LinearLayoutManager(getActivity()));
        mList.setItemAnimator(new DefaultItemAnimator());
        mList.setAdapter(new MenuRecyclerViewAdapter(getActivity(), mOnDishSelectedListener));
        // Update the UI from the model
        updateViews();
        return root;
    }

    private void updateViews() {
        if (Restaurant.getInstance().getMenu() == null) {
            downloadMenu();
        } else {
            mViewSwitcher.setDisplayedChild(MyWaiterConstants.MENU_VIEW_INDEX);
            mList.setAdapter(new MenuRecyclerViewAdapter(getActivity(), mOnDishSelectedListener));
        }
    }

    private void downloadMenu() {
        AsyncTask<Void, Integer, LinkedList<Dish>> menuDownloader = new AsyncTask<Void, Integer, LinkedList<Dish>>() {

            @Override
            protected void onPreExecute() {
                super.onPreExecute();
                mViewSwitcher.setDisplayedChild(MyWaiterConstants.LOADING_VIEW_INDEX);
            }

            @Override
            protected LinkedList<Dish> doInBackground(Void... voids) {
                try {
                    URL url = new URL(MyWaiterConstants.URL_API_MENU);
                    HttpURLConnection con = (HttpURLConnection) url.openConnection();
                    con.connect();

                    // Read the whole response body
                    byte[] data = new byte[1024];
                    int downloadedBytes;
                    InputStream input = con.getInputStream();
                    StringBuilder sb = new StringBuilder();
                    while ((downloadedBytes = input.read(data)) != -1 && !isCancelled()) {
                        sb.append(new String(data, 0, downloadedBytes));
                    }
                    if (isCancelled()) {
                        return null;
                    }

                    JSONObject jsonRoot = new JSONObject(sb.toString());
                    JSONArray jsonDishes = jsonRoot.getJSONArray(MyWaiterConstants.JSON_ATT_DATA);
                    LinkedList<Dish> dishes = new LinkedList<>();
                    for (int i = 0; i < jsonDishes.length(); i++) {
                        JSONObject jsonDish = jsonDishes.getJSONObject(i);
                        String dishName = jsonDish.getString(MyWaiterConstants.JSON_ATT_NAME);
                        String dishDescription = jsonDish.getString(MyWaiterConstants.JSON_ATT_DESCRIPTION);
                        BigDecimal dishPrice = new BigDecimal(jsonDish.getString(MyWaiterConstants.JSON_ATT_PRICE));
                        String dishIcon = jsonDish.getString(MyWaiterConstants.JSON_ATT_ICON);
                        Integer dishRes = DISH_ICONS.get(dishIcon);
                        int dishIconResource = dishRes != null ? dishRes : -1;

                        LinkedList<Allergen> allergens = new LinkedList<>();
                        JSONArray jsonAllergens = jsonDish.getJSONArray(MyWaiterConstants.JSON_ATT_ALLERGENS);
                        for (int j = 0; j < jsonAllergens.length(); j++) {
                            JSONObject jsonAllergen = jsonAllergens.getJSONObject(j);
                            String allergenName = jsonAllergen.getString(MyWaiterConstants.JSON_ATT_ALLERGEN_NAME);
                            String allergenIcon = jsonAllergen.getString(MyWaiterConstants.JSON_ATT_ALLERGEN_ICON);
                            Integer allergenRes = ALLERGEN_ICONS.get(allergenIcon);
                            int allergenIconResource = allergenRes != null ? allergenRes : -1;
                            allergens.add(new Allergen(allergenName, allergenIconResource));
                        }

                        dishes.add(new Dish(dishName, dishDescription, dishIconResource, dishPrice, allergens));
                    }
                    return dishes;
                } catch (Exception ex) {
                    ex.printStackTrace();
                    return null;
                }
            }

            @Override
            protected void onPostExecute(LinkedList<Dish> dishes) {
                super.onPostExecute(dishes);
                if (dishes != null) {
                    Restaurant.getInstance().setMenu(dishes);
                    // Update the UI
                    updateViews();
                } else {
                    // Something went wrong; notify the user
                    AlertDialog.Builder alertDialog = new AlertDialog.Builder(getActivity());
                    alertDialog.setTitle(R.string.error);
                    alertDialog.setMessage(R.string.menu_download_error_msg);
                    alertDialog.setPositiveButton(R.string.menu_retry_download_msg,
                            new DialogInterface.OnClickListener() {
                                @Override
                                public void onClick(DialogInterface dialogInterface, int i) {
                                    downloadMenu();
                                }
                            });
                    alertDialog.show();
                }
            }
        };
        menuDownloader.execute();
    }

    // onAttach(Context) runs on API 23+; the deprecated Activity overload keeps
    // the listener wired up on older platform versions.
    @Override
    public void onAttach(Context context) {
        super.onAttach(context);
        if (getActivity() instanceof OnDishSelectedListener) {
            mOnDishSelectedListener = (OnDishSelectedListener) getActivity();
        }
    }

    @Override
    public void onAttach(Activity activity) {
        super.onAttach(activity);
        if (getActivity() instanceof OnDishSelectedListener) {
            mOnDishSelectedListener = (OnDishSelectedListener) getActivity();
        }
    }

    @Override
    public void onDetach() {
        super.onDetach();
        mOnDishSelectedListener = null;
    }
}
1. ## Clocks - Time

Clocks A, B and C strike every hour. B slows down and takes 2 minutes longer than A per hour, while C becomes faster and takes a minute less than A per hour. If they strike together at 12 midnight, when will they strike together again?

Given answer: 11 am

2. Hello, MathLearner!

Answer: 11 am . . . I don't agree!

Clock A strikes every 60 minutes.
Clock B strikes every 62 minutes.
Clock C strikes every 59 minutes.

The LCM of 60, 62, and 59 is 109,740.

109,740 minutes = 1829 hours = 76 days, 5 hours

They will strike together at 5 am (of the 77th day).

3. I am not sure about this... In the paper I was solving, the right option given was for 11 am; there was no option of 5 am...
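The reply's arithmetic is easy to check with a few lines of Python (clock periods of 60, 62 and 59 minutes, as assumed in the reply):

```python
from math import gcd

def lcm(*nums):
    # lcm(a, b) = a * b // gcd(a, b), folded over all arguments
    result = 1
    for n in nums:
        result = result * n // gcd(result, n)
    return result

minutes = lcm(60, 62, 59)           # minutes until all three strike together
hours, rem_min = divmod(minutes, 60)
days, rem_hours = divmod(hours, 24)
print(minutes, days, rem_hours)     # 109740 76 5 -> 5 am of the 77th day
```

Python 3.9+ also ships `math.lcm`, which accepts multiple arguments directly.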
Q: Forward Nagios server state to a second server

Is there a way to forward the state of a Nagios server to a second Nagios server?

I want to install a Nagios server which does the usual collecting of information about the machines and services on the network. This one should run in the local network, with unlimited access to the other machines. For the times when I'm not in the office, I would like to look at the Nagios state with the web interface, but I don't want to allow connections from the outside into the local network.

My idea is to have a second Nagios server, located outside of the office network (maybe in a DMZ), and to have the main server send its check results to the outside server. This way there would only be an outgoing connection from the local network, and web access goes to the outside server.

Is this possible with Nagios, or is there another nice solution?

A: What you're asking about is basically the classic distributed or failover monitoring setup. (These docs are from 2.x, but the idea is the same. Unfortunately, the old "redundant and failover monitoring" docs appear to be gone, replaced instead with solutions for Nagios XI.)

The idea is to have one Nagios instance forward all of its check results to another server. You used to do this with an ocsp_command (and/or ochp_command) that would forward all check results to another server. The problem is that all of the hosts and services also have to be defined on the receiving end (as passive checks). This can be mitigated with config management tools.

There are several more-modern options available now, like DNX and MNTOS, which are detailed in the new distributed monitoring docs. I refer you to the classic docs because these new tools might not work for you if you absolutely cannot have inbound traffic (for job submission).
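For the classic ocsp_command route, the sending side looks roughly like this (the command name, send_nsca paths, and DMZ hostname are illustrative assumptions; the DMZ server must define the same hosts and services as passive checks):

```
# nagios.cfg on the internal (sending) server
obsess_over_services=1
ocsp_command=forward_service_check

# commands.cfg: push each result to the DMZ server via send_nsca
define command{
    command_name    forward_service_check
    command_line    /usr/bin/printf "%b" "$HOSTNAME$\t$SERVICEDESC$\t$SERVICESTATEID$\t$SERVICEOUTPUT$\n" | /usr/sbin/send_nsca -H nagios-dmz.example.com -c /etc/nagios/send_nsca.cfg
}
```

NSCA traffic flows outbound only (internal server to DMZ), which matches the "no inbound connections" requirement.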
\section{Introduction}
In recent years, computer-aided diagnosis systems based on brain imaging have shown merit in the diagnosis of Parkinson's Disease (PD), with the objective of detecting PD by automatically recognizing the patterns that characterize it. PD is the second most common neurodegenerative disorder after Alzheimer's disease, and the most common movement disorder affecting the elderly~\cite{mhyreboyd}. The root cause of PD is thought to be the loss of nerve cells (neurons) in the Substantia Nigra region of the Basal Ganglia, one of the major regions of the human brain, located deep within the cerebrum below the cerebral cortex. Dopamine, a neurotransmitter, is produced in this region and facilitates communication between neurons. This communication is essential for the coordination of body movement, and a shortage of dopamine hampers it, leading to PD. PD has been associated with neurological symptoms such as speech impediments, olfactory dysfunction, sleep disorders, autonomic dysfunction and fatigue, as well as motor issues such as tremors, bradykinesia, postural instability, rigidity of the limbs and impaired gait. It has been clinically studied and defined for a long time, but the exact mechanisms leading to PD are still not properly identified~\cite{unclear_Cause}. In the majority of cases, PD is diagnosed after the manifestation of motor symptoms, but these symptoms might not become apparent until 50 to 70\% of neurons have been damaged~\cite{5070neurons}, which is too late for any sort of preventive measure. Even though a guaranteed cure for PD has not been discovered yet, early detection might offer an opportunity for slowing or stopping the progression of the disease. There are also new forms of treatment such as Exenatide~\cite{exenatide}, which show promising results in cases where PD was detected in the initial stages.
One of the techniques that has been found successful in detecting neurodegenerative diseases with cognitive impairments~\cite{medicalimaging1,medicalimaging2} is the analysis of structural changes in the brain using medical imaging. Specifically, Magnetic Resonance (MR) images, which possess high contrast and resolution within soft tissue, have been found to provide better performance in brain structure analysis. In this work, we propose three ensemble architectures to identify PD from MR images of the brain, and analyze whether models pre-trained on the unrelated ImageNet~\cite{imagenet} dataset perform better at detecting PD than models without any prior training. We achieve over 90\% detection accuracy for all three of our architectures, with the highest being 95.15\%, which is better than existing models using similar data. We also found that using models pretrained on the ImageNet dataset to construct our ensemble architectures yields much better performance than using untrained models, even though the training data is unrelated to PD.
\section{Background and Related Works}
A multitude of Machine Learning (ML)~\cite{unclear_Cause,relatedwork1,paz,relatedwork2} and Deep Learning (DL)~\cite{choi,relatedwork3} based approaches have been introduced for the detection of Parkinson's Disease. Focke et al.~\cite{relatedwork1} extracted Gray Matter (GM) and White Matter (WM) from MR images and fed them to an SVM classifier for PD detection, achieving 39.53\% and 41.86\% classification accuracy for GM and WM respectively. A Radial Basis Function Neural Network (RBFNN) was used by Pazhanirajan et al.~\cite{paz} for PD classification. Babu et al.~\cite{babu} achieved an 87.21\% accuracy in classifying PD using GM with a Computer Aided Diagnosis (CAD) system. They identified the Superior Temporal Gyrus as a potential biomarker that plays a vital role in PD.
Choi et al.~\cite{choi} achieved an accuracy of 96\% using SPECT imaging with a Convolutional Neural Network (CNN). Although their accuracy was very high, SPECT imaging is invasive and not very popular, as it requires injecting a radioactive tracer into the patient. Around 100 times more MRI scans than SPECT scans were performed over a one-year period in the NHS in England, so the SPECT approach seems impractical for routine medical use due to limited sample size, despite its reported high accuracy. Their dataset is also class-imbalanced, since about 69\% of the data is from PD patients; class imbalance causes models to over-classify the majority class~\cite{classimbalance}. Long et al.~\cite{long} used an ML-based approach to detect PD from resting-state functional MRI (rsf-MRI), which detects subtle changes in blood oxygenation level, whereas structural MRI (sMRI) only captures anatomical details and ignores all activity; they achieved 87\% classification accuracy, but the dataset they used was very small. Rana et al.~\cite{rana} used an SVM for classification with t-test feature selection on WM, GM and Cerebrospinal Fluid (CSF), achieving 86.67\% accuracy for GM and WM and 83.33\% accuracy for CSF. In another work~\cite{rana1}, the authors used the relation between tissues instead of considering the tissues separately and achieved an accuracy of 89.67\%. Among the various regions of the brain, the Substantia Nigra (SN) has a significant correlation with PD according to Braak's neuroanatomical model of Parkinson's Disease~\cite{braak}, and it is often used as a Region Of Interest (ROI) in PD identification. We will not be using this region for our analysis, however; instead we consider GM and WM only, since we found that detection accuracy increased drastically using GM and WM in our previous work~\cite{firstpaper}. ImageNet~\cite{imagenet} is one of the well-known image datasets for computer vision.
It is unrelated to PD detection. ImageNet is organized according to the WordNet~\cite{wordnet,wordnet1} hierarchy. WordNet is one of the largest lexical databases of English, with nouns, verbs, adjectives, etc. organized into "synonym sets" or "synsets", which are sets of cognitive synonyms. A "synset" describes a meaningful concept with multiple words or word phrases. WordNet contains more than 100,000 synsets, of which more than 80,000 are nouns. Currently, ImageNet labels images using only the nouns from WordNet. Each node of the WordNet hierarchy is represented by a thousand image samples in ImageNet on average. The ImageNet Large Scale Visual Recognition Challenge (ILSVRC)~\cite{imagenetcontest} evaluates the performance of various algorithms for object detection and image classification on the ImageNet dataset. The challenge has 1000 object categories, drawn from both internal and leaf nodes of ImageNet, but the categories do not overlap. Fig.~\ref{imagenetimages} shows two sample images from the ImageNet dataset and their positions in the WordNet hierarchy. Kornblith et al.~\cite{transfer} proposed that models performing well on the ILSVRC also perform better when applied to other datasets.
\begin{figure}[t]
\centering
\begin{tabular}{c}
\includegraphics[width=0.45\textwidth]{lion.jpg}\\
\small{ (a) Animal-Beast-Chordate-Vertebrate-Mammal-Placental-}\\\small{Carnivore-Feline-Big Cat-Lion} \\
\includegraphics[width=0.45\textwidth]{racecar.jpg}\\
\small{ (b) Artifact-Instrumentation-Container-Wheeled Vehicle-}\\\small{Self Propelled Vehicle-Motor Vehicle-Car/Automobile-Race Car}
\end{tabular}
\caption{Sample Images from ImageNet~\cite{imagenet} dataset and their position in the WordNet~\cite{wordnet,wordnet1} Hierarchy}\label{imagenetimages}
\end{figure}
In our previous work~\cite{firstpaper}, we tried a combination of multiple neural networks trained on PD data to create ensemble architectures for detecting PD, as shown in Fig.~\ref{oldarc}.
In this work, we built ensemble architectures composed of ILSVRC models pretrained on non-PD-related ImageNet images, and used MR images to validate the effectiveness of these architectures. We separated WM and GM from MR scans of the brain and passed them through our architectures. Experimental results showed that, when using GM and WM, ensemble architectures with models pretrained on ImageNet data achieve better performance in detecting PD.
\begin{figure}[htbp]
\centering
\includegraphics[width=0.5\textwidth]{oldarc.png}
\caption{Ensemble architecture from previous experiment~\cite{firstpaper}}
\label{oldarc}
\end{figure}
The objectives of this paper are twofold. (1) We want to know whether Kornblith's hypothesis and our observation from previous work hold true if we transfer models learnt on completely unrelated data to detect PD. (2) The detection accuracy of current sMRI-based methods is lacking, and none of the methods we examined used ensemble architectures for PD detection; we want to demonstrate that ensemble architectures combining the different characteristics of deep learning models have the potential to give better results. Our approach is described in the next section.
\section{Proposed Method}
\subsection{Data}
We used the Parkinson's Progression Markers Initiative (PPMI) dataset~\cite{ppmi} for our experiments, which consists of T1-weighted sMRI scans of 568 PD and Healthy Control (HC) subjects. We chose only 445 subjects and discarded the rest due to structural anomalies found during preprocessing. There was a class imbalance in the resulting data, with 299 PD and 146 HC subjects. To balance the data, we collected 153 HC T1-weighted sMRI scans from the publicly available IXI dataset~\cite{ixi}. The final dataset was class-balanced with 598 subjects. Demographics for the dataset are presented in Table~\ref{demotable}.
\begin{table}[htbp]
\centering
\setlength{\tabcolsep}{5pt}
\renewcommand{\arraystretch}{1}
\caption{Demographic Data}\label{demotable}
\begin{tabular}{l|c|c|c}
 & PD & HC & Average\\
\hline
Age (Years) & $62.0 \pm 9.54$ & $49.2 \pm 16.9$ & $55.6 \pm 15.1$\\\hline
Sex (Male / Female) & 189 / 110 & 172 / 127 & 361 / 237\\
\end{tabular}
\end{table}
\subsection{Preprocessing}
The scans come from different machines, so there were dimensional and morphological disparities in the data, and we had to standardize the data to a common format to make it comparable. All scans were resized to the same dimensions. For preprocessing, Statistical Parametric Mapping (SPM12)~\cite{spm,spm1} and the Computational Anatomy Toolbox (CAT12)~\cite{cat12} were used.
\begin{figure}[htbp]
\centering
\includegraphics[width=0.5\textwidth]{pipeline.png}
\caption{Preprocessing Pipeline}
\label{fig1}
\end{figure}
The structure of our preprocessing pipeline is presented in Fig.~\ref{fig1}. Since MRI intensity varies from subject to subject, we normalize the values to a [0,1] range to minimize discrepancies. After that, the images were aligned to the standard Montreal Neurological Institute (MNI) space, and general intensity non-uniformities were removed using bias field correction (FAST)~\cite{biasfield}. FNIRT / BET~\cite{brainextract} was used to extract the brain from the scans: the skull, fat and background regions, which do not contain useful information, were removed. The data was registered to the MNI152 format (FLIRT)~\cite{registration,registration1}. After that, artifact removal was performed, i.e. any voxel intensity values higher than 1 were clipped back to the range [0,1]. Then a deformation method was applied to extract Gray Matter (GM) and White Matter (WM) from the scan, and an 8\,mm isotropic Gaussian kernel was used to smooth the scans, increase the signal-to-noise ratio and remove unnecessary portions of the scan.
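The intensity normalization and clipping steps described above amount to the following (a sketch with illustrative names, shown on a flat list of intensities rather than a full 3-D volume):

```python
def normalize_scan(voxels):
    """Rescale intensities to [0, 1], then clip stray values back into range."""
    lo, hi = min(voxels), max(voxels)
    if hi == lo:                       # constant scan: nothing to rescale
        return [0.0] * len(voxels)
    scaled = [(v - lo) / (hi - lo) for v in voxels]
    # Artifact removal: any value outside [0, 1] is clipped back into range
    return [min(max(v, 0.0), 1.0) for v in scaled]

print(normalize_scan([10, 20, 30]))    # [0.0, 0.5, 1.0]
```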
Finally, we have three separate datasets: whole-brain scans, GM and WM extracted from the brain, and smoothed GM and WM. Examples of the extracted brain and the resultant WM and GM are given in Fig.~\ref{fig2}. For our experiments, we did not use the whole-brain scans, which produced less promising results in our earlier analysis~\cite{firstpaper}. Our models were designed to handle the extracted WM and GM scans and their smoothed versions.
\begin{figure}[!t]
\centering
\begin{tabular}{r}
\includegraphics[width=0.45\textwidth]{whole_brain_Control.jpg}\\
\multicolumn{1}{c}{\small (a) Whole Brain} \\
\includegraphics[width=0.45\textwidth]{wm_Control.jpg}\\
\multicolumn{1}{c}{\small (b) Extracted White Matter}\\
\includegraphics[width=0.45\textwidth]{gm_Control.jpg}\\
\multicolumn{1}{c}{\small (c) Extracted Gray Matter}
\end{tabular}
\caption{Sample MRI scans for a Healthy Control Patient and the extracted GM and WM}\label{fig2}
\end{figure}
\subsection{Model Architecture}
We created three separate ensemble architectures combining existing ILSVRC~\cite{imagenetcontest} model architectures implemented in PyTorch~\cite{pytorch}. We selected six models to construct our ensemble architectures and chose the three architectures with the best performance:
\begin{itemize}
\item ResNet 101~\cite{resnet}
\item SqueezeNet 1.1~\cite{squeezenet}
\item DenseNet 201~\cite{densenet}
\item VGG 19~\cite{vgg}
\item MobileNet V2~\cite{mobilenet}
\item ShuffleNet V2~\cite{shufflenet}
\end{itemize}
These models are available from Torchvision~\cite{torchvision} in two versions: without any training (untrained) and trained on the ImageNet dataset. We used both untrained and pretrained models to construct our ensemble networks and compared the performance of the resulting architectures to examine whether training on the unrelated ImageNet dataset makes the models perform better in PD identification.
Since the models were designed with the ImageNet dataset in mind, we had to modify them to accommodate the format of MRI data. The input layers of all models were changed to accommodate the format of our input, and the output layers were changed to predict between 2 classes (PD and HC) instead of the 1000 ImageNet classes.
\subsection{Core Architecture}
For our main architecture, we take the extracted GM and WM scans of dimension $121 \times 145 \times 121$ and pass them in parallel through two blocks composed of multiple ILSVRC models as described above. We refer to such a combination of models as a Model Block. We then concatenate the outputs of both blocks, pass them through a ReLU activation layer, and finally through a linear layer which predicts between the two output classes. Fig.~\ref{core} shows a visual representation of this architecture.
\begin{figure}[htbp]
\centering
\includegraphics[width=.5\textwidth]{core.png}
\medbreak
\caption{Core Architecture}\label{core}
\end{figure}
We used multiple combinations of models to create different versions of the Model Block, substituted them into the core architecture, and tested their performance with both pretrained and untrained models.
\subsubsection{Architecture 1}
The model block was composed of DenseNet, ShuffleNet and SqueezeNet in parallel. The input was passed through all three models simultaneously, as shown in Fig.~\ref{block1}.
\begin{figure}[htbp]
\centering
\includegraphics[width=.5\textwidth]{block1.png}
\medbreak
\caption{Architecture 1 Model Block}\label{block1}
\includegraphics[width=.5\textwidth]{block2.png}
\medbreak
\caption{Architecture 2 Model Block}\label{block2}
\includegraphics[width=.5\textwidth]{block3.png}
\medbreak
\caption{Architecture 3 Model Block}\label{block3}
\end{figure}
\subsubsection{Architecture 2}
The model block was created by adding MobileNet to Block 1, so it was composed of DenseNet, ShuffleNet, SqueezeNet and MobileNet in parallel.
The input was passed through all four models simultaneously, as shown in Fig.~\ref{block2}.
\subsubsection{Architecture 3}
The model block was created with ShuffleNet, VGG and MobileNet in parallel. The input was passed through all three models simultaneously, as shown in Fig.~\ref{block3}.
\section{Experimental Results}
We created two separate versions of each of our ensemble architectures: one with all untrained constituent models and another with all pretrained constituent models. The dataset was randomly split, with 80\% selected for training and 20\% for testing. Each model was trained for 25 epochs with an Adam optimizer, using the cross-entropy loss function. At each epoch, the training set was further split randomly, with 20\% selected for validation. We used learning rates of \textbf{.001} and \textbf{.0001}. We repeated the process multiple times; the average results are presented in Table~\ref{resultstable}, along with the results of some other approaches on similar data.
\begin{table*}[htbp] \centering \setlength{\tabcolsep}{12pt} \renewcommand{\arraystretch}{1.5} \caption{Results}\label{resultstable} \begin{tabular}{@{\extracolsep{\fill}} c | c | c | c | c } \thead{Model} & \thead{Use \\Smoothed \\ Scan} & \thead{Pre Trained} & \thead{Learning Rate} & \thead{Classification \\Accuracy\\(On a scale of 0-1)}\\ \hline \multirow{8}{*}{\makecell{Architecture 1}}& \multirow{4}{*}{False} &\multirow{2}{*}{False} & .001 & $0.6045$\\\cline{4-5} & & &.0001 & $0.8097$\\\cline{3-5} & &\multirow{2}{*}{True} &.001 & $\textbf{0.9291}$\\\cline{4-5} & & &.0001 & $\textbf{0.8955}$\\ \cline{2-5} & \multirow{4}{*}{True} &\multirow{2}{*}{False} & .001 & $0.7303$\\\cline{4-5} & & &.0001 & $0.6592$\\\cline{3-5} & &\multirow{2}{*}{True} &.001 & $\textbf{0.8315}$\\\cline{4-5} & & &.0001 & $\textbf{0.7528}$\\\hline \multirow{8}{*}{\makecell{Architecture 2}}& \multirow{4}{*}{False} &\multirow{2}{*}{False} & .001 & $0.5485$\\\cline{4-5} & & &.0001 & $0.7276$\\\cline{3-5} &
&\multirow{2}{*}{True} &.001 & $\textbf{0.9515}$\\\cline{4-5} & & &.0001 & $\textbf{0.9440}$\\ \cline{2-5} & \multirow{4}{*}{True} &\multirow{2}{*}{False} & .001 & $0.5768$\\\cline{4-5} & & &.0001 & $0.7416$\\\cline{3-5} & &\multirow{2}{*}{True} &.001 & $\textbf{0.8614}$\\\cline{4-5} & & &.0001 & $\textbf{0.8352}$\\\hline \multirow{8}{*}{\makecell{Architecture 3}}& \multirow{4}{*}{False} &\multirow{2}{*}{False} & .001 & $0.5187$\\\cline{4-5} & & &.0001 & $0.5522$\\\cline{3-5} & &\multirow{2}{*}{True} &.001 & $\textbf{0.9029}$\\\cline{4-5} & & &.0001 & $\textbf{0.9254}$\\ \cline{2-5} & \multirow{4}{*}{True} &\multirow{2}{*}{False} & .001 & $0.5805$\\\cline{4-5} & & &.0001 & $0.6105$\\\cline{3-5} & &\multirow{2}{*}{True} &.001 & $\textbf{0.8914}$\\\cline{4-5} & & &.0001 & $\textbf{0.8390}$\\\hline \multirow{8}{*}{\makecell{Our Previous Result ~\cite{firstpaper} \\ using extracted GM and WM \\Scans}}& \multirow{4}{*}{False} &\multirow{2}{*}{False} & .001 & $0.5487 \pm 0.0002$\\\cline{4-5} & & &.0001 & $0.6847 \pm 0.0093$\\\cline{3-5} & &\multirow{2}{*}{True} &.001 & $\textbf{0.9231} \pm \textbf{0.0258}$\\\cline{4-5} & & &.0001 & $\textbf{0.9366} \pm \textbf{0.0170}$\\ \cline{2-5} & \multirow{4}{*}{True} &\multirow{2}{*}{False} & .001 & $0.5410 \pm 0.0106$\\\cline{4-5} & & &.0001 & $0.7276 \pm 0.0476$\\\cline{3-5} & &\multirow{2}{*}{True} &.001 & $ \textbf{0.9291} \pm\textbf{ 0.0170}$\\\cline{4-5} & & &.0001 & $\textbf{0.9470} \pm \textbf{0.0083}$\\\hline Focke et al.\cite{relatedwork1} [GM]&N/A&N/A&N/A&0.3953\\\hline Focke et al.\cite{relatedwork1} [WM]&N/A&N/A&N/A&0.4186\\\hline Babu et al.\cite{babu} [GM]&N/A&N/A&N/A&0.8721\\\hline Rana et al.\cite{rana} [GM \& WM]&N/A&N/A&N/A&0.8667\\\hline Rana et al.\cite{rana1}&N/A&N/A&N/A&0.8967\\\hline \end{tabular} \end{table*} For all three different architectures, we can see that the accuracy increases significantly when we use pretrained models to construct the model blocks. 
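The core architecture (parallel model blocks whose outputs are concatenated, followed by ReLU and a final linear layer) can be sketched in PyTorch roughly as follows. This is a structural sketch only: the tiny stand-in backbones, their dimensions, and the pairing of inputs with the two blocks are illustrative assumptions, whereas the actual blocks are the torchvision models listed earlier with modified input and output layers:

```python
import torch
import torch.nn as nn

class ModelBlock(nn.Module):
    """Several backbones applied to the same input; outputs concatenated."""
    def __init__(self, backbones):
        super().__init__()
        self.backbones = nn.ModuleList(backbones)

    def forward(self, x):
        return torch.cat([m(x) for m in self.backbones], dim=1)

class CoreArchitecture(nn.Module):
    """Two model blocks in parallel -> concat -> ReLU -> linear (PD vs. HC)."""
    def __init__(self, block_a, block_b, concat_dim):
        super().__init__()
        self.block_a = block_a
        self.block_b = block_b
        self.classifier = nn.Sequential(nn.ReLU(), nn.Linear(concat_dim, 2))

    def forward(self, x_a, x_b):
        out = torch.cat([self.block_a(x_a), self.block_b(x_b)], dim=1)
        return self.classifier(out)

# Tiny stand-ins; a real block would combine e.g. DenseNet, ShuffleNet, SqueezeNet
block_a = ModelBlock([nn.Linear(16, 2), nn.Linear(16, 2)])
block_b = ModelBlock([nn.Linear(16, 2)])
model = CoreArchitecture(block_a, block_b, concat_dim=6)
logits = model(torch.zeros(4, 16), torch.zeros(4, 16))  # shape (4, 2)
```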
For Architecture 1, the accuracy we achieved was $0.9291$ with non-smoothed scans and a learning rate of $0.001$. For Architecture 2, we achieved a detection accuracy of $0.9515$ with non-smoothed scans and a $0.001$ learning rate, which was the best overall accuracy in our experiments. Architecture 3 achieved an accuracy of $0.9254$ with non-smoothed scans and a learning rate of $0.0001$. All three of our architectures perform better than existing models on similar data and achieve above $90\%$ accuracy. For comparison, we also present the results of our earlier work~\cite{firstpaper}, where pretrained models provided higher detection accuracy as well. We note that we achieved better accuracy than our previous work with an architecture constructed from pretrained models and non-smoothed scans. It is also notable that the smoothing process actually reduced detection accuracy in our experiments with these architectures.
\section{Conclusion and Future Works}
In this paper, we proposed three novel ensemble architectures for Parkinson's Disease detection, which outperform related works on similar data and achieve a detection accuracy of up to 95.15\%. We also find that using models pretrained on the unrelated ImageNet dataset to construct the ensemble architectures significantly enhances performance in detecting PD compared to untrained models. This finding suggests a promising direction, where unrelated training data can be considered when insufficient or no training data is available for a particular application. Our next step will be to analyse the decision-making process of our models, perform occlusion analysis to see which regions of the scans our models use to make their decisions, and use the Substantia Nigra as the target of significance to assess whether it produces better results than using the GM and WM regions.
\section*{Acknowledgment} We thank PPMI for supporting Parkinson's disease research by maintaining and updating their clinical dataset. Data used in the preparation of this article were obtained from the Parkinson's Progression Markers Initiative (PPMI) database (www.ppmi-info.org/data). For up-to-date information on the study, visit www.ppmi-info.org. PPMI is a public-private partnership funded by the Michael J. Fox Foundation for Parkinson's Research and other funding partners listed at www.ppmi-info.org/fundingpartners. We also thank Dr. Sara Soltaninejad (soltanin@ualberta.ca), a former PhD student at the Department of Computing Science, University of Alberta, for her advice. Financial support from the Natural Sciences and Engineering Research Council of Canada (NSERC) is gratefully acknowledged. \clearpage
{ "redpajama_set_name": "RedPajamaArXiv" }
9,351
Q: Error Showing when validating my website There is no attribute "height" padding="0" cellspacing="0" border="0" align="center" width="969" height="180"> I am getting this problem in the tpl files of my xcart version 4.4.2. How to resolve this problem? Please help me out. A: This specifically? Delete the height attribute. It doesn't exist in the language you are using. In general? Use CSS for presentation, not HTML.
{ "redpajama_set_name": "RedPajamaStackExchange" }
7,499
{"url":"https:\/\/tug.org\/pipermail\/tex-live\/2009-June\/021128.html","text":"# [tex-live] Three problems with milstd.tex\n\nKarl Berry karl at freefriends.org\nWed Jun 24 02:56:51 CEST 2009\n\n``` > In Tex Live 2008, milstd.tex appears under the \"doc\" tree:\n> texmf-dist\/doc\/latex\/logic\/milstd.tex\n\nMoved.\n\n-- tidied the line endings (we do that whenever we spot the problem)\n-- removed the ctrl-z lines\n\nThanks x 2.\n\nTL can't use symlinks except in the bin dirs.\n\nThe way to fix this is to create milstd.sty that does the \\input of\nmilstd.tex, or just rename the file since it uses \\newfont and hence is\na LaTeX file (for no good reason, so it'd be better to use \\font and\nmake it generic ... anyway ...). In principle this stuff should go back\nto the original author, I guess. I didn't look into whether Rick\nSimpson is contactable these days. Be good to explicitly state the","date":"2022-01-27 20:03:35","metadata":"{\"extraction_info\": {\"found_math\": false, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 0, \"mathjax_display_tex\": 0, \"mathjax_asciimath\": 0, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.9504221081733704, \"perplexity\": 6045.457873405223}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": 
\"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2022-05\/segments\/1642320305288.57\/warc\/CC-MAIN-20220127193303-20220127223303-00228.warc.gz\"}"}
null
null
{"url":"https:\/\/gmatclub.com\/forum\/if-j-and-k-are-even-integers-and-j-k-which-of-the-following-equals-265192.html","text":"GMAT Question of the Day - Daily to your Mailbox; hard ones only\n\n It is currently 15 Dec 2018, 16:06\n\n### GMAT Club Daily Prep\n\n#### Thank you for using the timer - this advanced tool can estimate your performance and suggest more practice questions. We have subscribed you to Daily Prep Questions via email.\n\nCustomized\nfor You\n\nwe will pick new questions that match your level based on your Timer History\n\nTrack\n\nevery week, we\u2019ll send you an estimated GMAT score based on your performance\n\nPractice\nPays\n\nwe will pick new questions that match your level based on your Timer History\n\n## Events & Promotions\n\n###### Events & Promotions in December\nPrevNext\nSuMoTuWeThFrSa\n2526272829301\n2345678\n9101112131415\n16171819202122\n23242526272829\n303112345\nOpen Detailed Calendar\n\u2022 ### FREE Quant Workshop by e-GMAT!\n\nDecember 16, 2018\n\nDecember 16, 2018\n\n07:00 AM PST\n\n09:00 AM PST\n\nGet personalized insights on how to achieve your Target Quant Score.\n\n# If j and k are even integers and j < k, which of the following equals\n\nAuthor Message\nTAGS:\n\n### Hide Tags\n\nMath Expert\nJoined: 02 Sep 2009\nPosts: 51218\nIf j and k are even integers and j < k, which of the following equals\u00a0 [#permalink]\n\n### Show Tags\n\n09 May 2018, 00:36\n00:00\n\nDifficulty:\n\n35% (medium)\n\nQuestion Stats:\n\n66% (01:11) correct 34% (01:07) wrong based on 203 sessions\n\n### HideShow timer Statistics\n\nIf j and k are even integers and j < k, which of the following equals the number of even integers that are greater than j and less than k ?\n\nA. $$\\frac{(k -j -2)}{2}$$\n\nB. $$\\frac{(k -j -1)}{2}$$\n\nC. $$\\frac{(k -j )}{2}$$\n\nD. $$k -j$$\n\nE. 
$$k -j -1$$\n\n_________________\nDirector\nJoined: 31 Oct 2013\nPosts: 883\nConcentration: Accounting, Finance\nGPA: 3.68\nWE: Analyst (Accounting)\nRe: If j and k are even integers and j < k, which of the following equals\u00a0 [#permalink]\n\n### Show Tags\n\n09 May 2018, 01:12\nBunuel wrote:\nIf j and k are even integers and j < k, which of the following equals the number of even integers that are greater than j and less than k ?\n\nA. $$\\frac{(k -j -2)}{2}$$\n\nB. $$\\frac{(k -j -1)}{2}$$\n\nC. $$\\frac{(k -j )}{2}$$\n\nD. $$k -j$$\n\nE. $$k -j -1$$\n\n1. k and j are both even integers.\n2. j<k\n\nlet's assume that j=2 and k=20\n\nso, the even integers that are greater than j and less than k are : 4 6 8 10 12 14 16 18 . In total we have 8 integers.\n\ncheck option A. this the option that meets our demand.\n\nA) (20-2-2)\/2 = 8\n\nsomeone can take other range of vales such as k= 10 , j=4. or k=16, j= 6.\n\nThus option A is the best answer.\nIntern\nJoined: 19 Apr 2017\nPosts: 35\nLocation: India\nWE: Management Consulting (Consulting)\nIf j and k are even integers and j < k, which of the following equals\u00a0 [#permalink]\n\n### Show Tags\n\n18 Aug 2018, 10:12\nGreater than J less than K means all even numbers in the range except j and k. Hence, minus 2, and even so divided by 2. I realised after clicking the wrong answer.\nSenior SC Moderator\nJoined: 22 May 2016\nPosts: 2211\nIf j and k are even integers and j < k, which of the following equals\u00a0 [#permalink]\n\n### Show Tags\n\n19 Aug 2018, 07:42\n1\nBunuel wrote:\nIf j and k are even integers and j < k, which of the following equals the number of even integers that are greater than j and less than k ?\n\nA. $$\\frac{(k -j -2)}{2}$$\n\nB. $$\\frac{(k -j -1)}{2}$$\n\nC. $$\\frac{(k -j )}{2}$$\n\nD. $$k -j$$\n\nE. $$k -j -1$$\n\nAssign values. 
Let $$j=4$$ and $$k=12$$\n\n4, 6, 8, 10, 12, so\nThe # of even integers between $$j$$ and $$k$$ = 3\n\nUsing $$k=12$$ and $$j=4$$, find the answer that yields $$3$$\n\nEliminate D and E immediately. Too great.\n\nA. $$\\frac{(k -j -2)}{2}=\\frac{(12-4-2)}{2}=\\frac{6}{2}=3$$ KEEP\n\nB. $$\\frac{(k -j -1)}{2}=\\frac{(12 -4 -1)}{2}=\\frac{7}{2}$$ REJECT\n\nC. $$\\frac{(k-j)}{2}=\\frac{(12 -4 )}{2}=\\frac{8}{2}=4$$ REJECT\n\nManager\nJoined: 20 Jul 2018\nPosts: 89\nGPA: 2.87\nRe: If j and k are even integers and j < k, which of the following equals\u00a0 [#permalink]\n\n### Show Tags\n\n19 Aug 2018, 07:56\n1\nlets assume j=2, k=10\nthere are 3 even integers between j=2 and k=10.\nnow test each option to get 3.\n\nA) 6\/3 = 3\nB) 7\/2 =not integer\nC) 8\/2 = 4\nD) 10-2 = 8\nE) 10-2-1 = 7\n\n_________________\n\nHasnain Afzal\n\n\"When you wanna succeed as bad as you wanna breathe, then you will be successful.\" -Eric Thomas\n\nIntern\nJoined: 25 Feb 2016\nPosts: 11\nRe: If j and k are even integers and j < k, which of the following equals\u00a0 [#permalink]\n\n### Show Tags\n\n23 Aug 2018, 01:10\nHey guys\n\nHow do I know which values to choose for substitution because I tried j=2 and k=4 and couldn't get the right answer.\nSenior SC Moderator\nJoined: 22 May 2016\nPosts: 2211\nIf j and k are even integers and j < k, which of the following equals\u00a0 [#permalink]\n\n### Show Tags\n\n23 Aug 2018, 09:05\n1\nBunuel wrote:\nIf j and k are even integers and j < k, which of the following equals the NUMBER OF [how many] even integers that are greater than j and less than k ?\n\nA. $$\\frac{(k -j -2)}{2}$$\n\nB. $$\\frac{(k -j -1)}{2}$$\n\nC. $$\\frac{(k -j )}{2}$$\n\nD. $$k -j$$\n\nE. $$k -j -1$$\n\nonyx12102 wrote:\nHey guys\n\nHow do I know which values to choose for substitution because I tried j=2 and k=4 and couldn't get the right answer.\n\nonyx12102 , whoops! I think you misread the question. Easy mistake. 
I can't tell which part of the prompt you misread.\n\nSee highlight. Meaning: How many other even integers are between one even integer (\"$$j$$\") and another even integer that is greater than j, i.e. \"$$k$$\"?\n\nI believe in whatever works. Maybe a number line?\n\nEven integers:\n<--(-2)---0---2---4---6---8---10--->\n\n$$j$$ is one of those integers. So too is $$k$$. And k > j\n\nYou picked j = 2, k = 4\n\n<-(-2)---0---2---4---6---8---10--->\n\nHow many EVEN integers are greater than $$j$$ and less than $$k$$ (between)? None.\n\nSo to choose values for j and k:\n1) choose something small for j (not 0, tho' it works) and something greater for k;\n2) Put some distance between j and k. You need the quantity of OTHER even integers before you get to the answer choices\n\nTry j = 2, k = 14\n<--0---2---{4---6---8---10---12}---14-->\n\nWHICH even integers are greater than j and smaller than k\n(Identify them): {4, 6, 8, 10, 12}\n\nHow many? (Count them.) There are 5\nNow plug in.\n-- Use k = 14, j = 2\nThe set {4, 6, 8, 10, 12} has 5 even integers\n\nHope that helps.\n\n*There are 5 even integers that are greater than J and smaller than k.\nIntern\nJoined: 25 Feb 2016\nPosts: 11\nRe: If j and k are even integers and j < k, which of the following equals\u00a0 [#permalink]\n\n### Show Tags\n\n23 Aug 2018, 22:38\nBunuel\n\nThank you so much that really helps\nTarget Test Prep Representative\nStatus: Founder & CEO\nAffiliations: Target Test Prep\nJoined: 14 Oct 2015\nPosts: 4295\nLocation: United States (CA)\nRe: If j and k are even integers and j < k, which of the following equals\u00a0 [#permalink]\n\n### Show Tags\n\n26 Aug 2018, 18:13\nBunuel wrote:\nIf j and k are even integers and j < k, which of the following equals the number of even integers that are greater than j and less than k ?\n\nA. $$\\frac{(k -j -2)}{2}$$\n\nB. $$\\frac{(k -j -1)}{2}$$\n\nC. $$\\frac{(k -j )}{2}$$\n\nD. $$k -j$$\n\nE. 
$$k -j -1$$\n\nWe can let j = 0 and k = 4, we see that there is 1 even integer, namely 2, that is greater than j and less than k.\n\nSince (4 - 0 - 2)\/2 = 1, answer A is correct.\n\n_________________\n\nScott Woodbury-Stewart\nFounder and CEO\n\nGMAT Quant Self-Study Course\n500+ lessons 3000+ practice problems 800+ HD solutions\n\nIntern\nJoined: 14 Feb 2018\nPosts: 15\nRe: If j and k are even integers and j < k, which of the following equals\u00a0 [#permalink]\n\n### Show Tags\n\n09 Dec 2018, 14:09\ngeneris wrote:\nBunuel wrote:\nIf j and k are even integers and j < k, which of the following equals the NUMBER OF [how many] even integers that are greater than j and less than k ?\n\nA. $$\\frac{(k -j -2)}{2}$$\n\nB. $$\\frac{(k -j -1)}{2}$$\n\nC. $$\\frac{(k -j )}{2}$$\n\nD. $$k -j$$\n\nE. $$k -j -1$$\n\nonyx12102 wrote:\nHey guys\n\nHow do I know which values to choose for substitution because I tried j=2 and k=4 and couldn't get the right answer.\n\nonyx12102 , whoops! I think you misread the question. Easy mistake. I can't tell which part of the prompt you misread.\n\nSee highlight. Meaning: How many other even integers are between one even integer (\"$$j$$\") and another even integer that is greater than j, i.e. \"$$k$$\"?\n\nI believe in whatever works. Maybe a number line?\n\nEven integers:\n<--(-2)---0---2---4---6---8---10--->\n\n$$j$$ is one of those integers. So too is $$k$$. And k > j\n\nYou picked j = 2, k = 4\n\n<-(-2)---0---2---4---6---8---10--->\n\nHow many EVEN integers are greater than $$j$$ and less than $$k$$ (between)? None.\n\nSo to choose values for j and k:\n1) choose something small for j (not 0, tho' it works) and something greater for k;\n2) Put some distance between j and k. You need the quantity of OTHER even integers before you get to the answer choices\n\nTry j = 2, k = 14\n<--0---2---{4---6---8---10---12}---14-->\n\nWHICH even integers are greater than j and smaller than k\n(Identify them): {4, 6, 8, 10, 12}\n\nHow many? (Count them.) 
There are 5\nNow plug in.\n-- Use k = 14, j = 2\nThe set {4, 6, 8, 10, 12} has 5 even integers\n\nHope that helps.\n\n*There are 5 even integers that are greater than J and smaller than k.\n\nI also misread the prompt and chose the wrong numbers to test with. Any suggestions on how to get better at picking smart numbers?\nRe: If j and k are even integers and j < k, which of the following equals &nbs [#permalink] 09 Dec 2018, 14:09\nDisplay posts from previous: Sort by","date":"2018-12-16 00:06:39","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 1, \"mathjax_display_tex\": 1, \"mathjax_asciimath\": 0, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.8489590287208557, \"perplexity\": 2046.1857927996946}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 20, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2018-51\/segments\/1544376827137.61\/warc\/CC-MAIN-20181215222234-20181216004234-00087.warc.gz\"}"}
null
null
Q: Has Chris Nolan commented officially on the theme of Dark Knight Rises? Chris Nolan has previously stated that the main theme of Batman Begins is Fear, and the main theme of The Dark Knight is Chaos. Has he officially stated what the theme of Dark Knight Rises is? iMDB.com states in the Trivia section for DKR that the theme of the movie is Pain, but I didn't see that in the actual movie as much as Fear was in BB or Chaos in TDK. Honestly, the main theme I noticed throughout the movie was Despair. Was the trivia on iMDB right, and if so, can someone please provide a link that verifies it? iMDB is usually fairly accurate, but it does suffer from wiki-rot, especially on newer movies. If iMDB was wrong, has Chris Nolan said what the theme really is? A: According to digitalspy.ca, "the theme of Dark Knight Rises is consequences". When asked if pain could be a motif, he said yes, but then added that, in the film, "There's a lot about the consequences of actions".
{ "redpajama_set_name": "RedPajamaStackExchange" }
1,084
#ifndef __CC_PU_PARTICLE_3D_VELOCITY_MATCHING_AFFECTOR_H__ #define __CC_PU_PARTICLE_3D_VELOCITY_MATCHING_AFFECTOR_H__ #include "CCPUAffector.h" namespace pola { namespace graphic { class PUVelocityMatchingAffector : public PUAffector { public: // Constants static const float DEFAULT_RADIUS; static PUVelocityMatchingAffector* create(); virtual void updatePUAffector(PUParticle3D *particle, float deltaTime) override; /** Todo */ float getRadius(void) const; /** Todo */ void setRadius(float radius); virtual void copyAttributesTo (PUAffector* affector) override; /** @copydoc ParticleAffector::_prepare */ //virtual void _prepare(ParticleTechnique* particleTechnique); /** @copydoc ParticleAffector::_unprepare */ //virtual void _unprepare(ParticleTechnique* particleTechnique); public: PUVelocityMatchingAffector(void); virtual ~PUVelocityMatchingAffector(void); protected: float _radius; }; } /* namespace graphic */ } /* namespace pola */ #endif
{ "redpajama_set_name": "RedPajamaGithub" }
5,180
Cough Drops - Oral Anesthetic Sore Throat - Oral Anesthetic Throat Drops Usage Occasions Cough and Sore Throat Relief From Herb to Drop Herb Mixture Our 10 Swiss Herbs Linden Flowers Wild Thyme World of Ricola Ricola Tours Ricola as Your Employer We Are Ricola Laufen, 24.05.2016 Ricola steadily growing for 85 years Laufen, May 24, 2016 – Swiss family-run company Ricola recorded net sales of CHF 294.7 million in 2015. As a result of the performance of the Swiss franc, this constituted a slight decline from the previous year, but is equivalent to a pleasing increase of 2.4% when adjusted for currency effects. The sudden appreciation of the Swiss franc against other currencies following the Swiss National Bank's decision at the start of the year to abandon the minimum euro exchange rate also affected Ricola. "As a heavily export-oriented company, Ricola generates over 90% of its total sales abroad. We were therefore significantly impacted by the shock appreciation of the franc, but we were not unprepared," sums up Felix Richterich, Chief Executive Officer and Chairman of the Board of Directors of Ricola. Group sales in 2015 amounted to CHF 294.7 million. Deducting the share of sales generated by Disch Ltd., which was sold in May 2015 for strategic reasons, this represented a decrease in Swiss francs of 2.1% from the previous year as a result of the adverse currency situation. In local currencies, however, net sales increased by 2.4%. The positive overall result was primarily attributable to the very good performance in North America and the expansion of the Asian market. As a consequence of the continuously positive development of business in the USA, the promising market expansion in Mexico and the founding of a subsidiary in Canada, the significance of the North America region for Ricola continued to grow. Also of increasing strategic importance to the company is the Asian market, where further encouraging progress was made in 2015, such as in China. 
Nevertheless, the European market remains the company's key region, and here, too, it was able to increase market share despite the challenging exchange rates. Proven products and innovative strength provide recipe for success 2015 also marked two anniversaries for the Swiss herb drop manufacturer: 85 years ago, Emil Richterich founded the family company in Laufen, and ten years later, he created the original herb drop. This traditional core product of Ricola continues to be its best-seller, and as part of its 75-year anniversary, it was celebrated in novel fashion. Alongside its tried-and-tested classics, Ricola also committed itself to sustainable innovation with new products in 2015, with the launch of the new Ricola Glacier Mint flavor in Western Europe and Switzerland proving very successful. In fact, no other new product on the Swiss candy market has performed nearly as strongly in recent years, and Glacier Mint is already one of the most popular Ricola flavors. Outlook for 2016: promising product innovations in the pipeline Ricola will continue to rely on internationalization and innovation as strategic pillars for success in the current year. There are some extremely promising product developments in the pipeline, which are expected to help strengthen the company's presence in international markets. At the same time, the site in Laufen and the sense of Swissness will also be reinforced. The Kräuterzentrum opened in Laufen in 2014 has proved highly valuable, while significant investments have also been made in production facilities over the past year. "We are proud that we have been able to strengthen the site in Laufen in spite of the difficult currency situation, and are confident that political and economic conditions will enable us to continue investing in Switzerland in the future," affirms Felix Richterich. About Ricola Ricola is one of the world's most modern and innovative manufacturers of herb drops. 
Ricola herb specialties are exported to more than 50 countries and are famous for their fine Swiss quality. Founded in 1930, with company headquarters in Laufen and subsidiaries in Europe, Asia and the USA, Ricola now produces around 60 different herb drops and tea specialties. Group sales amounted to CHF 294.7 million at the end of 2015 (excluding Disch Ltd. for the first time). In Switzerland, this family-owned company is a pioneer in herb cultivation and places great value on using carefully selected locations and controlled, environmentally friendly cultivation methods without the use of pesticides and herbicides. Ricola has concluded fixed long-term purchase agreements with more than 100 farmers in Swiss mountain regions. Ricola is a responsible employer of more than 400 employees and is committed to sustainable corporate management: economically, socially and ecologically. The traditional values of a family-run enterprise coupled with Swiss quality and a passion for innovation are crucial factors in the success of the Ricola global brand. Ricola Ltd. Nadja Lutz E-Mail: media@ricola.com Discover the world of Ricola Follow Ricola
{ "redpajama_set_name": "RedPajamaCommonCrawl" }
3,465
{"url":"https:\/\/zbmath.org\/?q=an%3A0771.34031","text":"## Periodic and homoclinic solutions to a class of Hamiltonian systems with the potentials changing sign.(English)Zbl\u00a00771.34031\n\nThis paper considers the Hamiltonian system $$(*)$$ $$\\ddot q=B(t)q+b(t)W'(q)=0$$ $$(q\\in R^ N)$$ with the following assumptions: $$(\\text{V}_ 1)$$ $$B$$ is a continuous, $$T$$-periodic, positive definite and symmetric matrix valued function; $$(\\text{V}_ 2)$$ $$b$$ is a continuous $$T$$-periodic real function, and there exist $$t_ 1$$, $$t_ 2$$ such that $$b(t_ 1)>0$$ and $$b(t_ 2)<0$$; $$(\\text{V}_ 3)$$ $$W\\in C'(R^ N,R)$$, $$W(x)>0$$ $$\\forall x\\neq 0$$ and there exists $$\\mu>2$$ such that $$W(\\lambda x)=\\lambda^ \\mu W(x)$$ $$\\forall\\lambda\\geq 0$$ and $$x\\in R^ N$$. The main results consist of the following theorems: 1) Assume $$(\\text{V}_ 1)-(\\text{V}_ 2)-(\\text{V}_ 3)$$. Then $$(*)$$ has at least one nonconstant $$T$$-periodic solution; 2) Assume $$(\\text{V}_ 1)- (\\text{V}_ 2)-(\\text{V}_ 3)$$, and moreover, $$W\\in C^ 2(R^ N,R)$$ is even. Then $$(*)$$ has infinitely many (pairs of) nonconstant $$T$$- periodic solutions; 3) Assume $$(\\text{V}_ 1)-(\\text{V}_ 2)- (\\text{V}_ 3)$$. 
Then $$(*)$$ possesses a homoclinic solution $$q(t)$$, emanating from zero such that $$q\\in W^{1,2}(R^ N,R)$$.\n\n### MSC:\n\n 34C25 Periodic solutions to ordinary differential equations 34C37 Homoclinic and heteroclinic solutions to ordinary differential equations\n\n### Keywords:\n\nperiodic solution; Hamiltonian system; homoclinic solution","date":"2022-09-25 01:38:34","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 0, \"mathjax_display_tex\": 1, \"mathjax_asciimath\": 0, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.9456549286842346, \"perplexity\": 277.25380719139025}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.3, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2022-40\/segments\/1664030334332.96\/warc\/CC-MAIN-20220925004536-20220925034536-00660.warc.gz\"}"}
null
null
Home » ResortsCasino Sportsbook Resorts Casino Sportsbook Resorts Casino went live with its online site in 2015, and it has quickly turned into one of the biggest and best operators in the state of New Jersey. Resorts Online Casino has over 550 games available to its customers, and it can be accessed both online and through a mobile app. Promo Code: NJOG Overall Ranking Code: NJOG Resorts Online Casino Offers Over 550 Games Loyalty Rewards Program That is Highly Rated Withdrawal Process Relatively Easy Seven Different Deposit Methods There are some other terrific online casinos in the state, but Resorts Casino continues to rank with the best of them. Take a closer look at our Resorts Casino review before deciding if this is the right online casino for all of your gambling needs. Resorts Casino Promo Code And Bonus Resorts Online Casino ran a promotion through February that matched your first real-money deposit 100 percent. All you had to do was make a deposit of at least $10, and it matched your deposit up to $1,000. Promo Code: NJOG gave you access to this great promotion, but there are certainly going to be more such promotions on the way in March. Resorts Casino also offers a loyalty rewards program that is highly rated, and it is easy to rack up rewards points. There are different loyalty levels depending on how many games you play, and your loyalty level determines what kind of rewards you are available for. As long as you are in the state of New Jersey, then you won't have any trouble logging on and playing games at the Resorts Online Casino. The website for the casino is ResortsCasino.com, and it can be accessed using any web browser of your choosing. Resorts Casino also offers an app that can be downloaded on both Android and iOS platforms. The casino app is highly rated and operates in the same way as the online casino site. Resorts Casino offers seven different deposit methods, including accepting both Visa and Mastercard payments. 
It also offers a Resorts Card, which can make the deposit and withdrawal process relatively easy. PayPal is another popular banking method that is accepted by Resorts Casino as a deposit and withdrawal method as well. Check out the Resorts Online Casino site for a complete list of banking options. Resorts Online Casino has a great customer service department, and you should be able to get an answer to any of your questions in a short amount of time. The online casino has an icon that will take you to a section of Frequently Asked Questions that is the best place to start. Resorts Online Casino also has a dedicated customer service 1-800 number that you can call, as well as a live chat feature on its site or mobile app. Resorts Online Casino Review Summary If you still aren't convinced that Resorts Online Casino is the right option for you, then we suggest opening up an account and giving it a shot. Resorts Casino is consistently ranked as one of the top options in New Jersey, and it's really pretty easy to see why. It offers the largest selection of games in the state, and it has plenty of banking options as well. If you need a question answered, then its customer service department will be there to help. There is a reason that so many people choose Resorts Online Casino for their online gaming needs. Jacksonville Jaguars at Baltimore Ravens Betting Preview Jan 19, 2022 8:31 AM ET | By: Field Level Media Sunday, December 20, 2020, M&T Bank Stadium, Baltimore, Maryland, 1 p.m. ET Jaguars at Ravens Betting Preview: Jaguars (+12/5/-110), Ravens (-12.5/-110) Jacksonville Jaguars Running back James Robinson became the fourth undrafted rookie in NFL history to total at least 1,000 rushing yards, and the fastest to do so, reaching the feat in the fourth quarter of last week's loss to the Titans. He is third in the league in rushing with 1,035 yards, to go along with seven touchdowns. 
He's also is well on pace to break the league mark set by the Colts' Dominic Rhodes, who ran for 1,104[...] Chicago Bears at Minnesota Vikings Betting Preview 8:31 AM ET Sunday, December 20, 2020, U.S. Bank Stadium, Minneapolis, Minnesota, 1 p.m. ET Bears at Vikings Betting Preview: Bears (+3.5/-110), Vikings (-3.5/-110) Chicago Bears Chicago snapped a[...] Cleveland Browns at New York Giants Betting Preview Sunday, December 20, 2020, MetLife Stadium, East Rutherford, New Jersey, 8:20 p.m. ET Browns at Giants Betting Preview: Browns (-5/-112), Giants (+5/-109) [...] San Francisco 49ers vs. Dallas Cowboys Betting Preview Sunday, December 20, 2020, AT&T Stadium, Arlington, Texas, 1 p.m. ET 49ers at Cowboys Betting Preview: 49ers (-3/-110), Cowboys (+3/-110) San Francisco[...] Detroit Lions at Tennessee Titans Betting Preview Sunday, December 20, 2020, Nissan Stadium, Nashville, Tennessee, 1 p.m. ET Lions at Titans Betting Preview: Lions (+11/-106), Titans (-11/-114) Detroit Lions Give interim coach Darrell[...] Houston Texans at Indianapolis Colts Betting Preview Sunday, December 20, 2020, Lucas Oil Stadium, Indianapolis, Indiana, 1 p.m. ET Texans at Colts Betting Preview: Texans (+7/-108), Colts (-7/-112) Houston Texans The heartbreak[...] Kansas City Chiefs at New Orleans Saints Betting Preview Sunday, December 20, 2020, Mercedes-Benz Superdome, New Orleans, Louisiana, 4:25 p.m. ET Chiefs at Saints Betting Preview: Chiefs (-3/-110), Saints (+3/-110) Kansas City Chiefs The[...] New York Jets at Los Angeles Rams Betting Preview Sunday, December 20, 2020, SoFi Stadium, Inglewood, California, 4:05 p.m. ET Jets at Rams Betting Preview: Jets (+17.5/-114), Rams (+17.5/-106) New York[...] New England Patriots at Miami Dolphins Betting Preview Sunday, December 20, 2020, Hard Rock Stadium, Miami Gardens, Florida, 1 p.m. ET Patriots at Dolphins Betting Preview: Patriots (+2/-112), Dolphins (-2/-109) [...] 
Seattle Seahawks at Washington Football Team Betting Preview Sunday, December 20, 2020, FedExField, Landover, Md., 1 p.m. ET Seahawks at Washington Football Team Betting Preview: Seahawks (-5.5/-110), Washington Football Team[...] Philadelphia Eagles at Arizona Cardinals Betting Preview Sunday, December 20, 2020, State Farm Stadium, Glendale, Arizona, 4:05 p.m. ET Eagles at Cardinals Betting Preview: Eagles (+6.5/-110), Cardinals (-6.5/-110) Philadelphia[...] Tampa Bay Buccaneers at Atlanta Falcons Betting Preview Sunday, December 20, 2020, Mercedes-Benz Stadium, Atlanta, Georgia, 1 p.m. ET Buccaneers at Falcons Betting Preview: Buccaneers (-5.5/-114), Falcons (+5.5/-106) Tampa Bay Buccaneers It's safe to[...] OddsUSA's NFL Best Picks for Week 15 Happy Thursday football fans and welcome back to our weekly column in which we attempt to predict the outcome of some of[...] Updated MVP Odds: Patrick Mahomes continues to lead the MVP race Greeting and salutations football fans. We're now 14 weeks into the NFL season, which means that we are officially in the home[...] Super Bowl 55 Odds Update: Week 14 Good afternoon football fans. We are just three weeks away from the end of the 2021 NFL season. Last week, the teams[...] NFL Week 15: Top Prop Bets For All 32 Teams With just three weeks of regular-season action remaining, let's take a look at a prop bet for each NFL team. Arizona Cardinals[...] Thursday Night Football: Chargers-Raiders Best Odds and Sportsbook Promos Can you believe it? It's Week 15 of the NFL season! This week's schedule even includes some Saturday NFL action, giving us[...] Los Angeles Chargers at Las Vegas Raiders Betting Preview Thursday, December 17, 2020, Allegiant Stadium, Las Vegas, Nevada, 8:20 p.m. ET Chargers at Raiders Betting Preview: Chargers (+3.5), Raiders (-3.5) Los[...] Baltimore Ravens at Cleveland Browns Betting Preview Monday, December 14, 2020, FirstEnergy Stadium, Cleveland, Ohio, 8:15 p.m. 
ET Ravens at Browns Betting Preview: Ravens (-2.5/-110), Browns (+2.5/-110) Baltimore Ravens[...] Green Bay Packers at Detroit Lions Betting Preview Sunday, December 13, 2020, Ford Field, Detroit, Michigan, 4:25 p.m. ET Packers at Lions Betting Preview: Packers (-7.5/-110), Lions (+7.5/-110) Green Bay[...]
{ "redpajama_set_name": "RedPajamaCommonCrawl" }
4,856
{"url":"https:\/\/www.forexcycle.com\/daily-forex-reports\/25304-usdcad-analysis-july-13-2011.html","text":"# USD\/CAD analysis (July 13, 2011)\n\nWe were considering a bullish resumption on this pair lately and if the market has made an attempt towards 0,98 as expected, it wasn\u2019t able to hold above 0,97.\nIn fact, the reversal was quite impressive yesterday with a rush back towards the 0,96 area.","date":"2020-10-23 05:33:14","metadata":"{\"extraction_info\": {\"found_math\": false, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 0, \"mathjax_display_tex\": 0, \"mathjax_asciimath\": 0, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.8978844285011292, \"perplexity\": 4723.965886263117}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.3, \"absolute_threshold\": 20, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2020-45\/segments\/1603107880656.25\/warc\/CC-MAIN-20201023043931-20201023073931-00184.warc.gz\"}"}
null
null
{"url":"https:\/\/rdrr.io\/cran\/DescTools\/man\/SomersDelta.html","text":"# SomersDelta: Somers' Delta In DescTools: Tools for Descriptive Statistics\n\n## Description\n\nCalculate Somers' Delta statistic, a measure of association for ordinal factors in a two-way table. The function has interfaces for a table (matrix) and for single vectors.\n\n## Usage\n\n 1 SomersDelta(x, y = NULL, direction = c(\"row\", \"column\"), conf.level = NA, ...) \n\n## Arguments\n\n x a numeric vector or a table. A matrix will be treated as table. y NULL (default) or a vector with compatible dimensions to x. If y is provided, table(x, y, ...) is calculated. direction direction of the calculation. Can be \"row\" (default) or \"column\", where \"row\" calculates Somers' D (R | C) (\"column dependent\"). conf.level confidence level of the interval. If set to NA (which is the default) no confidence interval will be calculated. ... further arguments are passed to the function table, allowing i.e. to set useNA. This refers only to the vector interface.\n\n## Details\n\nSomers' D(C|R) and Somers' D(R|C) are asymmetric modifications of \u03c4_b and Goodman-Kruskal's Gamma. C|R indicates that the row variable x is regarded as the independent variable and the column variable y is regarded as dependent. Similarly, R|C indicates that the column variable y is regarded as the independent variable and the row variable x is regarded as dependent. It is logically very similar to Gamma, but differs in that it uses a correction only for pairs that are tied on the dependent variable. As Gamma and the Taus, D is appropriate only when both variables lie on an ordinal scale.\nSomers' D is computed as\n\nD(C | R) = \\frac{P-Q}{n^2 - \u2211(n_i.^2)}\n\nwhere P equals twice the number of concordances and Q twice the number of discordances and n_i. rowSums(tab). Its range lies [-1, 1]. 
The interpretation of d is analogous to Gamma.\n\n## Value\n\na single numeric value if no confidence intervals are requested\nand otherwise a numeric vector with 3 elements for the estimate, the lower and the upper confidence interval\n\n## Author(s)\n\nAndri Signorell <[email\u00a0protected]>\n\n## References\n\nAgresti, A. (2002) Categorical Data Analysis. John Wiley & Sons, pp. 57\u201359.\n\nBrown, M.B., Benedetti, J.K.(1977) Sampling Behavior of Tests for Correlation in Two-Way Contingency Tables, Journal of the American Statistical Association, 72, 309-315.\n\nGoodman, L. A., & Kruskal, W. H. (1954) Measures of association for cross classifications. Journal of the American Statistical Association, 49, 732-764.\n\nSomers, R. H. (1962) A New Asymmetric Measure of Association for Ordinal Variables, American Sociological Review, 27, 799\u2013811.\n\nGoodman, L. A., & Kruskal, W. H. (1963) Measures of association for cross classifications III: Approximate sampling theory. Journal of the American Statistical Association, 58, 310\u2013364.\n\nThere's an implementation of Somers's D in Frank Harrell's Hmisc somers2, which is quite fast for large sample sizes. However it is restricted to computing Somers' Dxy rank correlation between a variable x and a binary (0-1) variable y.\nConDisPairs yields concordant and discordant pairs\n\nOther association measures:\nKendallTauA (tau-a), KendallTauB (tau-b), cor (method=\"kendall\") for tau-b, StuartTauC (tau-c), GoodmanKruskalGamma\nLambda, GoodmanKruskalTau, UncertCoef, MutInf\n\n## Examples\n\n 1 2 3 4 5 6 7 8 9 10 # example in: # http:\/\/support.sas.com\/documentation\/cdl\/en\/statugfreq\/63124\/PDF\/default\/statugfreq.pdf # pp. S. 
1821 tab <- as.table(rbind(c(26,26,23,18,9),c(6,7,9,14,23))) # Somers' D C|R SomersDelta(tab, direction=\"column\", conf.level=0.95) # Somers' D R|C SomersDelta(tab, direction=\"row\", conf.level=0.95) \n\n### Example output\n\n somers lwr.ci ups.ci\n0.4426720 0.2785571 0.6067869\nsomers lwr.ci ups.ci\n0.2569444 0.1592938 0.3545951\n\n\nDescTools documentation built on Jan. 18, 2020, 1:09 a.m.","date":"2020-02-22 06:17:15","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 0, \"mathjax_display_tex\": 0, \"mathjax_asciimath\": 1, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.49868154525756836, \"perplexity\": 5280.073012623298}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2020-10\/segments\/1581875145654.0\/warc\/CC-MAIN-20200222054424-20200222084424-00550.warc.gz\"}"}
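The D(C|R) formula quoted in the documentation above can be checked with a small standalone sketch (Python here for illustration; the function and variable names are my own and are not part of DescTools):

```python
def somers_delta(table, direction="column"):
    # table: nested list of counts; direction="column" gives D(C|R)
    # (row variable independent), direction="row" gives D(R|C).
    if direction == "row":
        table = [list(col) for col in zip(*table)]  # swap variable roles
    C = D = 0  # concordant / discordant pair products
    for i, row in enumerate(table):
        for j, nij in enumerate(row):
            for row2 in table[i + 1:]:
                C += nij * sum(row2[j + 1:])  # both variables larger
                D += nij * sum(row2[:j])      # one larger, one smaller
    n = sum(map(sum, table))
    row_sq = sum(sum(row) ** 2 for row in table)
    # P = 2C, Q = 2D; the denominator corrects only for ties
    # on the dependent (column) variable, as the docs describe
    return (2 * C - 2 * D) / (n * n - row_sq)

tab = [[26, 26, 23, 18, 9], [6, 7, 9, 14, 23]]
somers_delta(tab, "column")  # ≈ 0.4426720, matching the example output above
```

Running it on the SAS example table from the documentation reproduces both point estimates shown in the R output (0.4426720 for D C|R, 0.2569444 for D R|C); confidence intervals are out of scope for this sketch.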
null
null
Q: c# - Dictionary<string, List<string>> to DataGrid I have the following problem: I have a populated dictionary of lists, each list has a specified, known length, and contains only strings, an example element would be: d1[key] = [ "Text1", "Text2", "Text3", "Text4", "0", "0", "0", "0", "0" ] The datagrid will have predeclared columns that correspond to the key, and each of the 8 list elements, for a total of 9 columns. I have written this to try and populate the DataGrid, is there a more efficient way, basically writing every single line onto the datagrid. The Dictionary may have upwards of 1k keys though. public static void DictionaryToDataGrid(Dictionary<string, List<string>> inputdict1) { Dictionary<string, List<string>> d1 = inputdict1; foreach (KeyValuePair<string, List<string>> item in d1) { DatagridForm.grid.Rows.Add(item.Key, item.Value[0], item.Value[1], item.Value[2]); } } Is there a quicker, more efficient way to do this? Thank you. A: A shorter version would be: foreach (KeyValuePair<string, List<string>> item in inputdict1) DatagridForm.grid.Rows.Add(item.Key, item.Value[0], item.Value[1], item.Value[2]); It is also a bit more efficient since you're not creating a new variable d1, and referencing the contents of inputdict1 to it (thanks apocalypse). Hope this helps.
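The row-building step in the answer above generalizes: each grid row is just the key followed by all of its list values. A minimal sketch of that flattening idea, independent of WinForms (Python for illustration; the function name and sample data are mine, not from the question):

```python
def dict_to_rows(d):
    # flatten {key: [v0, v1, ...]} into rows of key + values,
    # one row per dictionary entry, ready to feed into a grid
    return [[key, *values] for key, values in d.items()]

rows = dict_to_rows({"k1": ["Text1", "Text2", "Text3", "Text4", "0", "0", "0", "0"]})
# each row carries 1 key column + 8 value columns = 9 columns
```

Unpacking the whole value list this way also avoids hard-coding `item.Value[0]`, `item.Value[1]`, ... for each column.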
{ "redpajama_set_name": "RedPajamaStackExchange" }
1,683
Henry "Hank" Wallman (1915–1992) was an American mathematician, known for his work in lattice theory, dimension theory, topology, and electronic circuit design. A native of Brooklyn and a 1933 graduate of Brooklyn College, Wallman received his Ph.D. in mathematics from Princeton University in 1937, under the supervision of Solomon Lefschetz, and became a faculty member at the Massachusetts Institute of Technology, where he was associated with the Radiation Laboratory. During World War II he did classified work at MIT, possibly involving radar. In 1948, he left MIT to become a professor of electrotechnics at the Chalmers University of Technology in Gothenburg, Sweden, which awarded him the Chalmers medal in 1980 and where he eventually retired. In 1950 he was elected as a foreign member to the Swedish Royal Academy. He was elected a member of the Royal Swedish Academy of Engineering Sciences in 1960 and of the Royal Swedish Academy of Sciences in 1970. The disjunction property of Wallman is named after Wallman, as is the Wallman compactification, and he co-authored an important monograph on dimension theory with Witold Hurewicz. Wallman was also a radio enthusiast, and in the postwar period co-authored a book comprehensively documenting what was known at the time about vacuum tube amplification technology, including new developments such as showing how the central limit theorem could be used to describe the rise time of cascaded circuits. At Chalmers, Wallman helped build the Electronic Differential Analyser, an early example of an analog computer, and performed pioneering research in biomedical engineering combining video displays with X-ray imaging. 
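The central-limit-theorem result mentioned above has a compact form: when many stages with roughly Gaussian step responses are cascaded, the overall rise time is approximately the quadrature sum of the individual stage rise times. A sketch under that assumption (this is the standard CLT-derived approximation, not a quotation from Wallman's book):

```python
import math

def cascade_rise_time(stage_rise_times):
    # CLT approximation: cascading many stages drives the overall
    # step response toward a Gaussian, so rise times add in quadrature
    return math.sqrt(sum(t * t for t in stage_rise_times))

cascade_rise_time([2.0, 3.0, 6.0])  # -> 7.0 (same units as the inputs)
```

A practical consequence: the slowest stage dominates, since sqrt(t1² + t2²) barely exceeds the larger of the two when they differ a lot.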
References 1915 births 1992 deaths Lattice theorists Topologists American biomedical engineers 20th-century American mathematicians Swedish mathematicians Princeton University alumni Massachusetts Institute of Technology faculty Academic staff of the Chalmers University of Technology Members of the Royal Swedish Academy of Sciences Members of the Royal Swedish Academy of Engineering Sciences Brooklyn College alumni
{ "redpajama_set_name": "RedPajamaWikipedia" }
8,035
Q: Why is the extra space coming at the end while trying to convert upper case letters to lowercase and vice versa in java I wrote code in java to convert upper case characters into lowercase and vice versa. I checked each edge case, for example for an empty string, invalid string or a string containing spaces. Ques: Why does the extra space come when I use the following input String s=new String(" ") OutputShown 3232 String s=new String("ab cd"); OutputShown 97 98 32 99 10032AB CD Below is my Code class Change { static void change(String p) { if (p == null) { System.out.print("Null String"); } int f = p.length(); if (f == 0) { System.out.print(" Empty String"); } else { int u = 0; int i = 0; byte d = 32; char c[] = p.toCharArray(); byte b[] = p.getBytes(); for (i = 0; i < b.length; i++) { System.out.print(" "); System.out.print(b[i]); } for (i = 0; i < b.length; i++) { if (b[i] >= 65 && b[i] <= 90) { b[i] = (byte) (b[i] + d); } else { if (b[i] >= 97 && b[i] <= 122) { b[i] = (byte) (b[i] - d); } else { if (b[i] == ' ') { System.out.print(b[i]); } else { System.out.print("Not a valid character"); break; } } } } String s2 = new String(b); System.out.print(s2); } } public static void main(String s[]) { String s2 = new String("ab cd"); change(s2); } } A: You are printing each space twice : First time here : for(i=0;i<b.length;i++) { System.out.print(" "); System.out.print(b[i]); // first time } Second time here : if(b[i]==' ') { System.out.print(b[i]); // second time } I'm assuming the latter should be removed. For example, for the "ab cd" input, the first loop prints : 97 98 32 99 100. Then the condition of if(b[i]==' ') prints another 32. 
A: @Jalaj Chawala-- use below code; i have made some changes in your existing code-- class Change { static void change(String p) { if (p == null) { System.out.print("Null String"); } int f = p.length(); if (f == 0) { System.out.print(" Empty String"); } else { int u = 0; int i = 0; byte d = 32; char c[] = p.toCharArray(); byte b[] = p.getBytes(); for (i = 0; i < b.length; i++) { if (b[i] >= 65 && b[i] <= 90) { b[i] = (byte)(b[i] + d); } else { if (b[i] >= 97 && b[i] <= 122) { b[i] = (byte)(b[i] - d); } else { if (b[i] == ' ') { // System.out.print(b[i]); } else { System.out.print("Not a valid character"); break; } } } } String s2 = new String(b); System.out.print(s2); } } public static void main(String s[]) { String s2 = new String("ab c"); change(s2); }
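The accepted fix simply drops the duplicate space print. The add-or-subtract-32 trick in the question can also be written as flipping a single bit, since 'a' (97) and 'A' (65) differ only in bit 0x20; a compact sketch of the same toggle logic (Python for illustration, function name mine):

```python
def toggle_case(s):
    # XOR with 0x20 swaps upper- and lowercase for ASCII letters,
    # equivalent to the +32 / -32 byte arithmetic in the Java code
    out = []
    for ch in s:
        if ch.isascii() and ch.isalpha():
            out.append(chr(ord(ch) ^ 0x20))
        elif ch == " ":
            out.append(ch)  # pass spaces through without printing extras
        else:
            return "Not a valid character"
    return "".join(out)

toggle_case("ab cd")  # -> "AB CD"
```

Note the space is appended to the output exactly once, which is the behavior the original Java code was aiming for.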
{ "redpajama_set_name": "RedPajamaStackExchange" }
3,299
Islamic Banks One Step Closer To Fruition In Morocco Sep 4, 2014 by admin Morocco's Economic, Social and Environmental Council (CESE) weighed in on the Islamic bank bill last Thursday (August 28th), proposing two changes. The president of the Chamber of Councillors, Mohamed Cheikh Biadillah, asked the body last month to render an opinion on the legitimacy of the draft's legal provisions. Two negative remarks were made by the CESE. The first related to a lack of consumer information necessary to avoid unfair marketing by Islamic banks. The second dealt with the need to clarify the roles of the National Council of Ulema and the central bank in the oversight of the sector. According to Eastern Regional Council of Ulema chairman and CESE member Mustapha Benhamze, the establishment of Islamic banks in Morocco was a necessity. However, civil society activist and fellow CESE member Hakima Naji opposed the intervention of the High Council of Ulema in the financial sector. She criticised the idea of religious management of finance and said that the central bank had the necessary ability to oversee both traditional and Islamic banks. CESE chief and former Economy Minister Nizar Baraka highlighted the importance of creating Islamic banks by explaining that alternative products already existed in Morocco and had accounted for almost a billion dirhams of savings since 2010. This amount, he said, will be boosted by the creation of participatory banks. This type of bank will also make it possible to increase the percentage of people who use banking services in Morocco, which currently stands at 57 percent, he said. "The goal is to increase this rate to two-thirds of the population," Baraka noted. CESE underlined in its opinion that more communication about participatory products was absolutely essential in order to explain the concept properly and to avoid distinctions being made between "halal" and "haram" products, sociologist Karim Chabli said. 
"We need to be careful that we don't fall into this trap and create pointless debates that can only harm society when the reason for licensing Islamic banks is to respond to the needs of a category of the Moroccan population," he said. Trader Mohamed Chadi is among the citizens pleased that the CESE adopted the draft law on participatory banks. "I've been waiting for years for Islamic banks to be created in Morocco so that I can manage my money in accordance with my religious beliefs. It's true that I'm banking with a normal bank at the moment, but that's because I have no other option," he told Magharebia, underlining that he has friends who take the risk of keeping their savings at home due to the lack of Islamic banks in the kingdom. Hamza Souieh, an employee, also welcomes the introduction of Islamic finance in Morocco. He has been aspiring to buy a home for years but refuses to take out a loan. "If Islamic banks are created, I'll soon be able to buy a flat," he told us optimistically. Originally published on www.zawya.com Previous Post: One Day Specialized Training Workshop On Halal Gelatin Organized By Halal Research Council Next Post:Islamic Economy Awards 2014 Gets Immense Response
{ "redpajama_set_name": "RedPajamaCommonCrawl" }
9,973
WisBusiness: State unemployment rate falls Wisconsin's unadjusted unemployment rate for September was 4.6 percent, down from the 4.9 percent in August, according to the Department of Workforce Development. State Department of Workforce Development Secretary Roberta Gassman said the national rate for September fell to 4.5 percent from 4.6 percent in August. She said the start of school and changing weather across the state were among the factors influencing Wisconsin's labor market last month. Total Wisconsin non-farm jobs increased 3,600 to 2,899,500 from August to September 2007. Private sector jobs fell over-the-month by 22,000, led by a drop in Leisure & Hospitality jobs, which were down 12,000. Most of the month's increase came from a rise in Government jobs, which were up 25,600. Most of that increase, 20,900, came from teachers and staff returning to school and the impact on local government. See release at http://wisbusiness.com/index.iml?Article=107925
{ "redpajama_set_name": "RedPajamaCommonCrawl" }
4,808
As they walked, Trevor noticed that they were being followed. He tried to lose them by taking a turn down a back alley, but instead walked straight into a German spy's gun. The man ordered Trevor to give the notebook back; he refused, instead headbutting the man. He told Diana to stay back as the man took aim and shot at them. Diana reached out her arm, blocking the bullet off her gauntlet and saving Trevor's life. Diana then fought the German spies single-handedly; her spectacles were crushed in the fight. Amazon Shield: Wonder Woman's Amazonian shield, which she uses to protect herself from other weapons and energy blasts (in tandem with her bracelets). Much like her bracelets, it is nigh-indestructible, capable of deflecting even Ares' lightning bolts and Doomsday's thermal attacks. It can also be used as an offensive weapon, with Wonder Woman smashing it hard into the legs of Doomsday, thus momentarily managing to knock him down. Magic (Formerly): When she was a child, Diana was marked by the goddess Hecate and bestowed with a fraction of her magical ability.[108] This power lay dormant until it was activated by the Upside-Down Man. Zatanna remarked that Wonder Woman's magical power was unlike anything she had ever seen or felt, and Diana possessed at least enough power to cast out the Upside-Down Man, an immensely powerful demon, from the world.[109] After the Justice League Dark defeated Hecate, the Witchmarked's power was taken from them and absorbed by Circe.[110] Voiced by Rosario Dawson, Wonder Woman makes an appearance in Justice League: Throne of Atlantis, a story based on Geoff Johns' Throne of Atlantis. The movie came out in January 2015. In this film, she first starts out in Athens, Greece, meeting Superman. They passionately kiss and are later seen eating at a cafe, in civilian guise. 
They bump into Lois Lane and, after a small conversation, are spotted by Shazam and Cyborg, who take them away from their date on the grounds that the League needs a meeting. Wonder Woman grossed $412.6 million in the United States and Canada and $409.3 million in other territories for a worldwide total of $821.8 million, against an estimated production budget of $120–150 million.[5] Estimates for the number the film needed to surpass internationally in order to cover its production and promotional costs and break even ranged from $300 million[174] to $460 million.[175] Deadline Hollywood calculated the net profit of the film to be $252.9 million, when factoring together all expenses and revenues, making it the 6th most profitable release of 2017.[176] After Crisis on Infinite Earths, George Pérez rebooted the character in 1987. She wore an outfit similar to her 1970s one, but now with a larger glowing golden belt.[194] This outfit continued until William Messner-Loebs' run, which had Diana pass on the role of Wonder Woman to Artemis.[194] No longer Wonder Woman, Diana sported a new black biker-girl outfit designed by artist Mike Deodato Jr.[194] After John Byrne took over writing and art duties, he redesigned the Wonder Woman outfit (Diana was reinstated as Wonder Woman at the end of Loebs' run) and joined the emblem and belt together.[194] Issue #600 introduced Wonder Woman to an alternate timeline created by the Gods in which Themyscira had been destroyed and the Amazons scattered around the world.[42] In this timeline, Diana is an orphan raised in New York who is learning to cope with her powers. 
The entire world has forgotten Wonder Woman's existence and the main story of this run was of Diana trying to restore reality even though she does not properly remember it herself.[126] Diana has no memories of her prior adventures as Wonder Woman, recollecting her memories in bits and pieces and receiving different abilities and resources (such as the power of flight and her lasso) during the progression of her adventure. A trio of Death Goddesses called The Morrigan acted as Wonder Woman's main enemies.[127] Diana ultimately defeats the evil goddesses and returns everything back to normal.[128] Wonder Woman figures were released for Justice League and Justice League Unlimited. Prior to that, there were plans for a show called Wonder Woman and the Star Riders, which similar to Sailor Moon, would have featured Wonder Woman leading a team of teen magical girls. Prototype dolls for the series were made. There was also a statue from the animated Wonder Woman movie and a Wonder Woman action figure for the Justice League War movie. Wonder Woman dolls and figures were also released for the DC Super Hero Girls line. After Crisis on Infinite Earths, the character's origin was slightly retold by Greg Potter and George Perez. In this version, the Amazons were reincarnations of the souls of abused and murdered women from ancient days. In 1200 B.C. a debate occurred on Mount Olympus on how mankind should be made to relate to the gods. Ares, the god of war and destruction, wanted to descend upon the world with his army and crush mankind submission. This was opposed by the others gods present including Artemis, who wanted peace and suggested creating a new race that would lead humans on the right path. Zeus rejected their arguments, and they decided to proceed without his blessing. With the aid of Charon the ferryman, the gods reached the Womb of Gaea, where the souls of women who were abused and murdered at the hands of men were preserved by Gaea herself. 
Artemis then sent the souls to Greece where they reincarnated into adult women. Aphrodite observed that one soul still remained in the Womb, to which Athena replied that the time had not yet come for that one. The new race in Greece were approached by the goddesses, who bestowed upon them several blessings, charging them with the purpose of leading humanity in the ways of Gaea. They then appointed Hippolyte and Antiope as co-rulers. The civilization is named the Amazons. Stories of this civilization spread throughout Greece and reached the ears of Heracles, who was being manipulated by Ares into attacking the Amazons. Heracles approached the Amazons but was defeated by Hippolyte, upon which he pretended friendship and declared the Amazons allies. When their guard was down, the Greeks drugged the Amazons, taking Hippolyte, Antiope and the other survivors captive. In her cell, Hippolyte was freed by Athena, who reminded her of her purpose and asked her to avoid revenge and pursue peaceful means. Hippolyte escaped and freed the rest of the Amazons. She shared Athena's message with the Amazons, but blinded by their thirst for revenge, they ruthlessly slaughtered the remaining men. Antiope gave Hippolyta her girdle and left to pursue revenge. The goddesses appeared and told them they had failed in their purpose and banished them to an island to guard the terrible evil within, as penance. They were granted immortality as long as they did not stray from their new purpose, which would eventually purify their souls. The Amazons built a nation and lived there for 4,000 years. It is during this time that Hippolyte, sole leader of the Amazons, felt an unexplained yearning. Menalippe, the Oracle, told her she was the only Amazon pregnant at the time of her previous incarnation's death, and thus the yearning she felt was the call of her unborn child. As per her advice, Hippolyte went to the shore at sunrise and made a clay figure of a baby. She then cried out to Artemis. 
The gods, recognizing it was time for the remaining soul in Gaea's womb to depart, infused it into the clay form, which then incarnated as a real child. Blessed with Gaea's greatest gift, life, the gods present bestowed their gifts upon the newborn: Demeter granted the baby great strength, Aphrodite granted her great beauty and a loving heart, Athena granted her great wisdom, Artemis granted her the eye of the hunter and unity with beasts, Hestia granted her sisterhood with fire, and Hermes gave her great speed and the power of flight. Hippolyte named her after a holy warrior, Diana, and she grew up knowing the love of a thousand mothers. Thus Diana of Themyscira was born. Wonder Woman was created by William Moulton Marston and Harry G. Peter, and has a lengthy publication history. This history has sometimes included a sidekick Wonder Girl and many villains. Since her debut she has become one of the most popular and recognizable DC Comics characters, along with Batman and Superman. She first appeared in All-Star Comics #8. (1941) 13 years after Slipknot's imprisonment, after examining the photo of Wonder Woman and the Wonder Men taken in 1918 Belgium, Lex Luthor uses facial recognition software to deduce that the great Amazon warrior is in fact still alive, under the alias of "Diana Prince," working at the Louvre Museum, and he obtains footage of Diana in Paris, France, which has her exiting a taxi and entering a shop, in civilian clothing.[5] The Silver Age format for comic books also did not generally favour a lot of story arcs, or at least, not memorable ones. In this period though the character did undergo some consistent changes as she battled a variety of common foes including Kobra, but the changed format gave her the ability to develop more as a character. 
The silver age stories of Wonder Woman can be broken into a few general arcs – the depowered stories (in the mod girl phase), undergoing tests to re-enter the Justice League of America, a golden age story about her work during the Second World War, her adventures as an astronaut for NASA, the hunt for Kobra, and eventually the return of Steve Trevor and the internal politics of working at the Pentagon. The most famous story which she was involved with at this time was "For the Man Who Has Everything", a story focused on Superman, but also involving herself and Batman. The first major story arc which she was part of was Crisis on Infinite Earths, which also ended her silver age appearances. Steve Trevor died at the end of Wonder Woman after sacrificing himself to ensure that a plane full of deadly gas couldn't harm anyone on the ground. Is this Steve Trevor the same Steve Trevor that we saw in Wonder Woman who was transported to 1984 because of something like time travel, a descendant of Steve Trevor's who is also named Steve Trevor (and looks exactly like Pine), a clone, or something else entirely? (It's a comic book movie, so anything is possible.) Following the events of Infinite Crisis, she disappeared for a year in order to rediscover herself, and took part briefly in the events of 52. In the span of One Year Later, she was re-imagined once again and was forgiven by Batman and Superman while given her third ongoing monthly title. Batman helped her establish a role at the Department of Metahuman Affairs under the name of Diana Prince (paying homage to her golden age alter ego.) She worked alongside Tom Tresser and eventually became romantically involved with him. A move among fans across the different companies occurred with characters reverting to their original numbering of series (this for instance happened to Iron Man at Marvel as well) and the third Wonder Woman series was relaunched with Wonder Woman #600. 
This was actually accurate at the time as it was indeed the 600th issue released (not including issues numbered otherwise such as with a zero or a million). Issue 600 was used as a chance to reinvent the character as she discovers herself with no memories and in a new costume. This was a short-lived experiment as the entire DC lineup was soon to be re-imagined into the new 52, though certain aspects of her redesigned costume remained. Wonder Woman appears as one of the lead characters in the Justice League title written by Geoff Johns and drawn by Jim Lee that was launched in 2011 as part of The New 52.[152] In August 2012, she and Superman shared a kiss in Justice League Vol 2 #12, which has since developed into a romantic relationship.[153][154][155] DC launched a Superman/Wonder Woman series that debuted in late 2013, which focuses both on the threats they face together and on their romance as a "Power Couple".[156][157] "Noted Psychologist Revealed as Author of Best-Selling 'Wonder Woman,'" read the astonishing headline. In the summer of 1942, a press release from the New York offices of All-American Comics turned up at newspapers, magazines and radio stations all over the United States. The identity of Wonder Woman's creator had been "at first kept secret," it said, but the time had come to make a shocking announcement: "the author of 'Wonder Woman' is Dr. William Moulton Marston, internationally famous psychologist." The truth about Wonder Woman had come out at last. Born among the legendary Amazons of Greek myth, Princess Diana has a fierce warrior's heart while being an emissary of peace. On a hidden island paradise she was trained in the arts of combat as well as justice and equality. Diana ventured into the 'world of men' armed with magical gifts from the Gods and a message for all men and women - that all the world can be united through compassion, strength and understanding. 
Hybrid Physiology: Due to her Amazonian and Old God heritage, Wonder Woman possesses the superhuman abilities typical of these two species, such as superhuman strength, durability, speed, reflexes, agility and stamina, as well as an accelerated regenerative healing factor and the ability to live for thousands of years without visibly aging. In addition, she possesses incredible supernatural powers that allow her to generate and manipulate divine energy in the form of powerful shock waves, as well as possess some mastery over divine electricity. Wonder Woman's amazing abilities far surpass those of any other Amazon, and they are enough to rival the power of an Ancient God, such as Ares, or a New God, such as Steppenwolf. Therefore, Wonder Woman is the second most powerful member of the Justice League, behind only Superman. Wonder Woman and the other heroes were finally released from the Firestorm Matrix when Batman used the Lasso of Truth on Firestorm. Superman was still infected with the Kryptonite shard inside his nervous system, but Lex Luthor was able to extract it, saving Superman's life. Luthor also assembled a group of villains that defeated the Crime Syndicate. Later, at the Batcave, Wonder Woman and the Justice League talked about the enemy that destroyed the Crime Syndicate's world and came to the conclusion that Darkseid would return.[74] Wonder Woman had a minor role in Young Justice. Initially, the character was going to be excluded from the show due to legal red tape, but was included at the last minute. However, as a result of only being cleared for use late in the production cycle, she had only a few speaking appearances. In the second season, she could be seen as the mentor of Wonder Girl. She was voiced by Maggie Q. Wonder Woman was originally intended to influence women of all ages, displaying physical and mental strength, values, and ethical attributes that are not exclusive to men. "Wonder Woman symbolizes many of the values of the women's culture that feminists are now trying to introduce into the mainstream: strength and self-reliance for women; sisterhood and mutual support among women; peacefulness and esteem for human life; a diminishment both of 'masculine' aggression and of the belief that violence is the only way of solving conflicts," Steinem wrote at the time.[223]
"Wonder Woman symbolizes many of the values of the women's culture that feminists are now trying to introduce into the mainstream: strength and self-reliance for women; sisterhood and mutual support among women; peacefulness and esteem for human life; a diminishment both of 'masculine' aggression and of the belief that violence is the only way of solving conflicts," Steinem wrote at the time.[223] As Wonder Woman joins the battle against Doomsday, she arrives just in time to save Batman from Doomsday's lethal thermal blast, deflecting the beams with her indestructible bracelets. She then jointly attacks Doomsday with Superman while Batman tries to expose the creature to Kryptonite, allowing its destruction. She relentlessly battles the monster, and despite Doomsday being stronger, Wonder Woman holds her own, parrying a tremendous punch with the Sword of Athena and then slicing off Doomsday's right arm with it. Eventually, Batman baits the monster into coming closer to her, allowing Wonder Woman to hurl the noose of her unbreakable Lasso of Hestia around its torso. ^ Sanderson, Peter (September–October 1981). "Thomas/Colan Premiere Wonder Woman's New Look". Comics Feature. New Media Publishing (12/13): 23. The hotly-debated new Wonder Woman uniform will be bestowed on the Amazon Princess in her first adventure written and drawn by her new creative team: Roy Thomas and Gene Colan...This story will appear as an insert in DC Comics Presents #41.
// Juul Joosten 2013

#include "Texture2D.h"

#include <IndyCore/CoreDefines.h>
#include "GL/glew.h"
#include "stb_image/stb_image.h"

#include <cstdio>
#include <cstring>
#include <iostream>

namespace Indy
{
	Texture2D::Texture2D( const char* const textureFile)
		: m_textureData(NULL),
		  m_textureID(0),
		  m_width(0),
		  m_height(0),
		  m_numComponents(0),
		  m_componentSizeInBytes(0),
		  m_isOnGPU(false),
		  m_isLocalDataAvailable(false),
		  m_hasMipMaps(false),
		  m_isLoadedFromImage(true)
	{
		// load texture data
		loadTextureData( textureFile);
	}

	Texture2D::Texture2D( const unsigned int width, const unsigned int height,
	                      const unsigned char* textureData,
	                      const unsigned char numComponents,
	                      const unsigned char componentSizeInBytes)
		: m_textureData( new unsigned char[width * height * numComponents * componentSizeInBytes]),
		  m_textureID(0),
		  m_width(width),
		  m_height(height),
		  m_numComponents(numComponents),
		  m_componentSizeInBytes(componentSizeInBytes),
		  m_isOnGPU(false),
		  m_isLocalDataAvailable(true),
		  m_hasMipMaps(false),
		  m_isLoadedFromImage(false)
	{
		if(textureData != NULL)
			memcpy(m_textureData, textureData, m_width * m_height * m_componentSizeInBytes * m_numComponents);
	}

	Texture2D::~Texture2D( void)
	{
		DeleteGPUData();
		DeleteLocalData();
	}

	void Texture2D::CreateGPUTexture( const bool generateMipMaps /*= true*/)
	{
		if( !m_isLocalDataAvailable)
			BREAKPOINT(Local texture data is not available!);

		// generate texture ID
		glGenTextures( 1, &m_textureID);

		// send texture data to openGL
		Bind();

		m_hasMipMaps = generateMipMaps;

		// request automatic mipmap generation (legacy GL_GENERATE_MIPMAP)
		if(generateMipMaps)
			glTexParameteri( GL_TEXTURE_2D, GL_GENERATE_MIPMAP, GL_TRUE);

		// TODO: make the next block better adjustable by external parameters to override auto select settings

		// select internal format
		GLenum type = GL_UNSIGNED_BYTE;
		GLint internalFormat = 0;
		switch(m_componentSizeInBytes)
		{
		case 1: // unsigned char buffer
			internalFormat = m_numComponents == 4 ? GL_RGBA8 : GL_RGB8;
			internalFormat = m_numComponents == 1 ? GL_R8 : internalFormat;
			type = GL_UNSIGNED_BYTE;
			break;
		case 2: // half-float buffer
			internalFormat = m_numComponents == 4 ? GL_RGBA16F : GL_RGB16F;
			internalFormat = m_numComponents == 1 ? GL_R16F : internalFormat;
			type = GL_FLOAT;
			break;
		case 4: // floating point buffer
			internalFormat = m_numComponents == 4 ? GL_RGBA32F : GL_RGB32F;
			internalFormat = m_numComponents == 1 ? GL_R32F : internalFormat;
			type = GL_FLOAT;
			break;
		}

		// select format from RGBA, RGB or RED
		GLenum format = m_numComponents == 4 ? GL_RGBA : GL_RGB;
		format = m_numComponents == 1 ? GL_RED : format;

		glTexImage2D( GL_TEXTURE_2D, 0, internalFormat, (GLsizei)m_width, (GLsizei)m_height, 0, format, type, (GLvoid*)m_textureData);

		UnBind();

		m_isOnGPU = true;
	}

	void Texture2D::DeleteGPUData( void)
	{
		glDeleteTextures(1, &m_textureID);
		m_isOnGPU = false;
	}

	void Texture2D::DeleteLocalData( void)
	{
		if( m_textureData != NULL)
		{
			if(m_isLocalDataAvailable)
			{
				if( m_isLoadedFromImage)
					stbi_image_free( m_textureData);
				else
					delete[] m_textureData;
			}
			m_textureData = NULL;
		}
		m_isLocalDataAvailable = false;
	}

	void Texture2D::Bind( void)
	{
		glBindTexture( GL_TEXTURE_2D, m_textureID);
	}

	void Texture2D::UnBind( void)
	{
		glBindTexture( GL_TEXTURE_2D, 0);
	}

	void Texture2D::SetTextureWrapMode( const GLint wrapModeS /* = GL_REPEAT */, const GLint wrapModeT /* = GL_REPEAT */)
	{
		glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, wrapModeS );
		glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, wrapModeT );
	}

	void Texture2D::SetSamplerFilter( const GLint minSampler /* = GL_NEAREST_MIPMAP_LINEAR */, const GLint magSampler /* = GL_LINEAR */)
	{
		/* TODO: let people set a global filter?
		if( GLEW_EXT_texture_filter_anisotropic)
		{
			Float32 maxAnisotropic = 0;
			glGetFloatv( GL_MAX_TEXTURE_MAX_ANISOTROPY_EXT, &maxAnisotropic);
			glTexParameterf( GL_TEXTURE_2D, GL_TEXTURE_MAX_ANISOTROPY_EXT, maxAnisotropic);
		}
		*/

		glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, minSampler );
		glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, magSampler );
	}

	void Texture2D::SetTextureBorderColor( const GLuint color /* = 0x0 */)
	{
		GLfloat colorf[4];
		colorf[0] = (GLfloat)(color >> 24) / 255.0f;
		colorf[1] = (GLfloat)((color >> 16) & 255) / 255.0f;
		colorf[2] = (GLfloat)((color >> 8) & 255) / 255.0f;
		colorf[3] = 0.0f;
		glTexParameterfv(GL_TEXTURE_2D, GL_TEXTURE_BORDER_COLOR, colorf);
	}

	void Texture2D::UpdateSubRegion( const unsigned int xOffset, const unsigned int yOffset, const unsigned int width, const unsigned int height, const char* const data)
	{
#ifdef DEBUG
		if( m_hasMipMaps )
			printf("Update can be done but mipmaps stay unaffected\n");
#endif

		if( !m_isOnGPU)
			BREAKPOINT(Texture is not on GPU);

		// select internal format (kept in sync with CreateGPUTexture)
		GLenum type = GL_UNSIGNED_BYTE;
		GLint internalFormat = 0;
		switch(m_componentSizeInBytes)
		{
		case 1: // unsigned char buffer
			internalFormat = m_numComponents == 4 ? GL_RGBA8 : GL_RGB8;
			internalFormat = m_numComponents == 1 ? GL_R8 : internalFormat;
			type = GL_UNSIGNED_BYTE;
			break;
		case 2: // half-float buffer
			internalFormat = m_numComponents == 4 ? GL_RGBA16F : GL_RGB16F;
			internalFormat = m_numComponents == 1 ? GL_R16F : internalFormat;
			type = GL_FLOAT;
			break;
		case 4: // floating point buffer
			internalFormat = m_numComponents == 4 ? GL_RGBA32F : GL_RGB32F;
			internalFormat = m_numComponents == 1 ? GL_R32F : internalFormat;
			type = GL_FLOAT;
			break;
		}

		// select format from RGBA, RGB or RED
		GLenum format = m_numComponents == 4 ? GL_RGBA : GL_RGB;
		format = m_numComponents == 1 ? GL_RED : format;

		Bind();
		glTexSubImage2D(GL_TEXTURE_2D, 0, (GLint)xOffset, (GLint)yOffset, (GLsizei)width, (GLsizei)height, format, type, (GLvoid*)data );
		UnBind();
	}

	/* --- private --- */

	void Texture2D::loadTextureData(const char* textureFile)
	{
		FILE* file = NULL;
		file = fopen( textureFile, "rb");
		if(file == NULL)
		{
			BREAKPOINT(Texture could not be loaded!);
			return;
		}

		int x;
		int y;
		int numComponents;
		m_textureData = stbi_load_from_file( file, &x, &y, &numComponents, 0);
		fclose(file);

		m_width = x;
		m_height = y;
		m_numComponents = numComponents;
		m_componentSizeInBytes = 1;
		m_isLocalDataAvailable = true;
	}
}
Sam Curran takes 18th hat-trick in IPL history, gives Kings XI Punjab unlikely win Sam Curran Took The 18th Hat-trick In The History Of The Indian Premier League And Became The Third Kings XI Punjab Player To Achieve This Feat. Reported By : News Nation Bureau | Edited By : Siddharth Vishwanathan | Updated: 02 April 2019, 11:52:13 AM Sam Curran became the first England player to take a hat-trick in the Indian Premier League. (Image credit: Cricket Australia Twitter) Sam Curran became the first England player to take a hat-trick in IPL 2019. Curran became the third Kings XI Punjab player to take a hat-trick in IPL. Curran is the sixth overseas player to take a hat-trick in the IPL. Delhi Capitals were cruising towards victory in the game against Kings XI Punjab at the IS Bindra stadium in Mohali. However, Mohammed Shami picked up two wickets to give Kings XI Punjab some hope. It was Sam Curran, though, who changed the entire complexion of the game. He picked up the wicket of Colin Ingram for 39 and snapped up Harshal Patel (0). When Shami sent back Hanuma Vihari for 2, the collapse was well and truly on for Delhi Capitals. With Curran picking up the wickets of Kagiso Rabada and Sandeep Lamichhane, the 20-year-old England all-rounder picked up the 18th hat-trick in the history of the IPL. Curran became the youngest to take a four-wicket haul in the IPL and his spell of 4/11 helped Kings XI Punjab secure an unlikely 14-run win against Delhi Capitals at Mohali. The defeat of Delhi Capitals led to many social media users trolling the team mercilessly. Some commented that the presence of many South African players in the side resulted in this epic choke. One user went to the extreme to say this was not the first time Delhi had choked because of Punjab, in reference to the stubble burning in Punjab which led to increased pollution levels in Delhi. 
However, all the attention was on Curran, who had impressed for England in the five-Test series against India in 2018 which England won 4-1. Lakshmipathy Balaji is the first bowler to take a hat-trick in the Indian Premier League, having achieved the feat against Kings XI Punjab for Chennai Super Kings in the 2008 edition of the tournament. Yuvraj Singh became the first bowler to take two hat-tricks in the same edition of the tournament. Playing for Kings XI Punjab in the 2009 IPL in South Africa, Yuvraj took a hat-trick against Royal Challengers Bangalore and Deccan Chargers. However, the record for the most hat-tricks in the IPL belongs to legspinner Amit Mishra. Playing for Delhi Daredevils in the 2008 edition, Mishra secured his first hat-trick against the Deccan Chargers. In 2011, playing for the Deccan Chargers, Mishra matched Yuvraj's feat and took a hat-trick against Kings XI Punjab. Mishra created history while playing for Sunrisers Hyderabad when he took a hat-trick against Pune Warriors in the 2013 edition of the IPL, making him the first player to take a hat-trick of hat-tricks. Curran became the third Kings XI Punjab player after Yuvraj and Axar Patel to take a hat-trick. The Gujarat left-arm spinner took three consecutive wickets against Gujarat Lions in the 2016 IPL. Curran is the sixth overseas player and the first England player to take a hat-trick in the IPL. Makhaya Ntini was the first foreign player to take a hat-trick, playing for Chennai Super Kings against Kolkata Knight Riders in the first IPL. Sunil Narine, playing for Kolkata Knight Riders, took a hat-trick against Kings XI Punjab in the 2013 IPL, while Shane Watson, playing for Rajasthan Royals, achieved the feat against Sunrisers Hyderabad in IPL 2014. Samuel Badree and Andrew Tye took hat-tricks for Royal Challengers Bangalore and Gujarat Lions respectively in the 2017 IPL; remarkably, both achieved the feat on the same day.
Curran's feat has only enhanced his reputation as one of the most promising youngsters in the game currently. First Published : 02 April 2019, 11:51:36 AM
# Question about late registration in eSims

#### SRex

I registered late on eSims for the spring 2008 semester, so I was charged an $18 late fee, which I paid. However, now I want to swap a class for one with a better teacher. Would that mean I'll be charged the late fee again?

#### Andy Nguyen

##### Member

It's only $18 to get a better teacher. Too cheap to even think about it.
You may not have to pay again. Only one way to find out.
from django.db import models


# Create your models here.
class Employee(models.Model):
    title = models.TextField()
    text = models.TextField()
/*
 * RTL Style Sheet
 */

.wk-slideshow-showcasebuttons .slides-container:hover .next {
	right: auto;
	left: 30px;
	background-position: 0 -50px;
}

.wk-slideshow-showcasebuttons .slides-container:hover .prev {
	left: auto;
	right: 30px;
	background-position: 0 0;
}

.wk-slideshow-showcasebuttons .wk-slideset > div .next {
	right: auto;
	left: 25px;
	background-position: 0 -90px;
}

.wk-slideshow-showcasebuttons .wk-slideset > div .prev {
	left: auto;
	right: 25px;
	background-position: 0 0;
}

.wk-slideshow-showcasebuttons .wk-slideset > div .next:hover {
	background-position: 0 -120px;
}

.wk-slideshow-showcasebuttons .wk-slideset > div .next:active {
	background-position: 0 -150px;
}

.wk-slideshow-showcasebuttons .wk-slideset > div .prev:hover {
	background-position: 0 -30px;
}

.wk-slideshow-showcasebuttons .wk-slideset > div .prev:active {
	background-position: 0 -60px;
}
Q: How to parse correctly in C# While looking into parsing, I found the following: Int32.Parse(string); Convert.ToInt32(string); My question is this: which is the better way to parse, which is more effective, and why? In which cases is one form used, and in which cases the other? A: I mentioned in a comment a while ago that "Both do the same thing, except that they throw different exceptions depending on the value they parse." That said (source), you have the following: string convertToInt = "12"; string nullString = null; string maxValue = "32222222222222222222222222222222222"; string formatException = "12.32"; int parseResult; // Parses correctly. parseResult = int.Parse(convertToInt); // Throws ArgumentNullException. parseResult = int.Parse(nullString); // Throws OverflowException. parseResult = int.Parse(maxValue); // Throws FormatException. parseResult = int.Parse(formatException); // The same, using Convert.ToInt32 // Works fine. parseResult = Convert.ToInt32(convertToInt); // Returns zero if the string is null. parseResult = Convert.ToInt32(nullString); // Throws OverflowException. parseResult = Convert.ToInt32(maxValue); // Throws FormatException. parseResult = Convert.ToInt32(formatException); You can try a Fiddle here. :) EDIT: Convert.ToInt32() has several overloads that let you cast practically anything to an int, whereas int.Parse() only accepts a string, throwing the respective exceptions mentioned above in each case; note that int.Parse() also has several overloads to specify the number format. EDIT 2: In the end, deep down, both implementations call a method named Number.Parse that is internal to the .NET Framework; see the source code of both functions below1: // Int32.Parse(String s) implementation. 
[Pure] public static int Parse(String s) { return Number.ParseInt32(s, NumberStyles.Integer, NumberFormatInfo.CurrentInfo); } // Convert.ToInt32(String value) implementation. public static int ToInt32(String value) { if (value == null) return 0; return Int32.Parse(value, CultureInfo.CurrentCulture); } Which means that, at the level of calls, yes, Int32.Parse(string) may be faster, since it saves an internal call, unlike Convert.ToInt32(object), which first evaluates the object to convert and then calls the Parse() method. 1: Source code of both implementations courtesy of Reference Source. See: Convert.ToInt32(string) and Int32.Parse(string). A: Both forms are valid. One difference I see is that Convert.ToInt32() allows several data types to be passed as a parameter (Convert.ToInt32), while Int32.Parse() can only be given a string (Int32.Parse (String)). Both are valid. There is also Int32.TryParse() in case you want to validate the conversion without receiving an Exception.
The Catholic League was an alliance of Catholic states in Europe founded in 1609 in Munich, as a response to the formation of the Protestant Union in 1608. The states that took part in the alliance were: the Holy Roman Empire of the German Nation (Austria, Bavaria), Spain, Portugal, the Polish–Lithuanian Commonwealth, and, from 1643 until 1645, Denmark–Norway. See also: Thirty Years' War, Anti-Protestantism.
T("*")
======
{arkr}{stereo} Multiply signals

## Description ##

en: `T("*")` is a signal multiplier operator that outputs a signal which is the product of all input signals.

ja: `T("*")` multiplies the input signals and outputs the result.

```timbre
var param = T("param").linTo(1, "10sec").on("ended", function() {
  synth.pause();
});
var osc = T("osc", {wave:"saw", mul:0.25});
var synth = T("*", osc, param).play();
```

## Source ##

https://github.com/mohayonao/timbre.js/blob/master/src/objects/mul.js
\section{Introduction} \label{Sec:Intro} Consider $\mathbb{C}^{n}$ with coordinates $z=(x_1+iy_1,\dots,x_n+iy_n)$, Liouville form $\theta=\sum_{j=1}^{n}y_jdx_j$, and symplectic form $\omega=-d\theta$. An immersion $f\colon K\to\mathbb{C}^{n}$ of an $n$-manifold $K$ is \emph{Lagrangian} if $f^{\ast}\omega=0$, and \emph{exact Lagrangian} if the closed form $f^{\ast}\theta$ is also exact, i.e.~$f^{\ast}\theta=dz$ for some function $z\colon K\to\mathbb{R}$. Gromov proved that there is an $h$-principle for exact Lagrangian immersions \cite{Gromov:h-principle}, which are therefore flexible geometric objects: every closed $n$-manifold $K$ for which $TK \oplus TK$ is trivial admits such an immersion in $\mathbb{C}^n$. By contrast, exact Lagrangian immersions with a fixed number of double points display interesting rigidity phenomena. For example, Gromov's classical result \cite{Gromov} that no closed manifold admits an exact Lagrangian embedding into $\mathbb{C}^{n}$ can be viewed as a statement about exact Lagrangian immersions with no double points. Our main result provides another striking illustration of such rigidity: \begin{Theorem}\label{Thm:Main} Let $K$ be a closed orientable $2k$-manifold, $k>2$, with Euler characteristic $\chi(K)\ne -2$. If $K$ admits an exact Lagrangian immersion $f\colon K\to\mathbb{C}^{2k}$ with one transverse double point and no other self intersections, then $K$ is diffeomorphic to the standard sphere $S^{2k}$. \end{Theorem} \begin{Example} The \emph{Whitney sphere} is the exact Lagrangian immersion $w\colon S^n \to\mathbb{C}^n$ where \begin{equation} \label{Eqn:Whitney} S^n = \left\{ (x,y) \in \mathbb{R}^n \times \mathbb{R} \ \big | \, \ |x|^2 + y^2 = 1\right\} \quad\text{and}\quad w(x,y)=(1+iy)x \in \mathbb{C}^n. \end{equation} It has exactly one transverse double point, $w(0,1)=w(0,-1)$. 
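Exactness of $w$ can also be checked directly: pulling back the Liouville form $\theta=\sum_{j}y_j\,dx_j$ along $w$ gives
\begin{equation*}
w^{\ast}\theta=\sum_{j=1}^{n}yx_{j}\,dx_{j}=\frac{y}{2}\,d|x|^{2}=\frac{y}{2}\,d\bigl(1-y^{2}\bigr)=-y^{2}\,dy=d\Bigl(-\frac{y^{3}}{3}\Bigr),
\end{equation*}
so $w^{\ast}\theta=dz$ with $z=-y^{3}/3$.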
\end{Example} Recall that twice the algebraic number of double points of an immersion of a closed orientable $2k$-manifold into $\mathbb{C}^{2k}$ equals the negative of the Euler number of its normal bundle. For Lagrangian immersions the normal and tangent bundles are isomorphic, so in the situation of Theorem \ref{Thm:Main} $\chi(K)=\pm 2$. The choice of primitive $f^{\ast}\theta = dz$ defines an embedding $f\times z\colon K \rightarrow \mathbb{C}^n \times \mathbb{R}$ which is Legendrian with respect to the contact form $dz-\theta$. The assumption $\chi(K)\ne -2$ ensures that the Legendrian homology of this lift can be linearized, which in turn implies that $f$ satisfies a version of the Morse inequalities for double points of exact Lagrangian immersions originally conjectured by Arnol'd, see \cite{Ekholm, EESori}. A proof of the Arnol'd conjecture for general exact Lagrangian immersions would remove this extra hypothesis in Theorem \ref{Thm:Main}. The Morse inequalities, in combination with results of Damian \cite{Damian}, show that $K$ is a homotopy sphere, $K\approx\Sigma$. The substance of Theorem \ref{Thm:Main} is the construction of a parallelizable manifold with boundary $\Sigma$, which in even dimensions is sufficient to exclude all the exotic spheres \cite{KM}. Any homotopy $n$-sphere $\Sigma$ has smooth (non-Lagrangian) immersions into $\mathbb{C}^{n}$ with exactly one transverse double point, and the $h$-principle implies that $\Sigma$ admits some exact Lagrangian immersion into $\mathbb{C}^{n}$. Moreover, \emph{inexact} Lagrangian immersions with a single double point are fairly common (take any Lagrangian immersion and remove all but one of the double points by surgery). Thus, the Euler characteristic assumption aside, Theorem \ref{Thm:Main} is ``sharp". It gives non-trivial information in any dimension where there are exotic $2k$-spheres. 
(The numbers of smooth homotopy spheres in the lowest relevant dimensions are: 1 in dimension 6, 2 in dimension 8, 6 in dimension 10, 1 in dimension 12, and 2 in dimension 14. For general dimension $2k$, the number is finite but can be arbitrarily large \cite{KM}.) Theorem \ref{Thm:Main} implies that any Lagrangian immersion of an exotic $2k$-sphere has at least three double points. We remark that any exotic sphere admits Morse functions with exactly two critical points \cite{SmalePoincare}, so Theorem \ref{Thm:Main} detects a phenomenon that goes beyond Morse theory, and presumably beyond its Floer homological counterparts. \subsection{About the proof of Theorem \ref{Thm:Main}} The construction of a parallelizable filling has two central steps. First, an argument going back to Gromov \cite{Gromov} and Oh \cite{Oh} yields a non-compact cobordism with boundary a given Lagrangian submanifold of $\mathbb{C}^n$; this cobordism is built from solutions to perturbed Cauchy-Riemann problems. Second, via techniques introduced by Fukaya, Oh, Ohta and Ono \cite{FO3}, such a cobordism can be compactified and capped off using fibered products of moduli spaces and abstract chains bounding such spaces. Such a strategy was spectacularly implemented in a similar context by Abouzaid \cite{Abouzaid}, following suggestions of Seidel. A more precise description of the argument is as follows. Given $f\colon K\to \mathbb{C}^{2k}$ as in Theorem \ref{Thm:Main}, we apply Lagrange surgery to create a monotone embedded smooth Lagrangian submanifold $L \subset \mathbb{C}^{2k}$ of minimal Maslov number $2k$. By \cite{Damian}, $L$ then fibers over $S^{1}$ with fiber a homotopy $(2k-1)$-sphere. We study a $1$-parameter family of Floer equations (perturbed Cauchy-Riemann equations) with Lagrangian boundary condition $L$. 
The corresponding moduli space $\mathcal{F}(0\beta)$ of Floer holomorphic disks in the trivial relative homotopy class is used to explicitly construct a bounding manifold $\mathcal{B}$ with $\partial \mathcal{B}=L$ and with stably trivial tangent bundle. This construction is somewhat involved (the subsequent notation is that which appears in the body of the paper): $\mathcal{F}(0\beta)$ is non-compact because of bubbling and has a compactification with boundary consisting of broken curves. In the case we study there is only one possible bubbling configuration, and the boundary $\mathcal{N}$ is a fibered product of two other moduli spaces, $\mathcal{N}=\mathcal{F}^*(-\beta) \times_L \mathcal{M}^*(\beta)$. Employing gluing analysis, we find a neighborhood of the boundary in $\mathcal{F}(0\beta)$, the complement of which is a compact $C^{1}$-smooth manifold $\mathcal{F}_{\rho_0}(0\beta)$ with $\partial\mathcal{F}_{\rho_0}(0\beta)=L\cup \mathcal{N}$, as well as an explicit collar neighborhood of $\mathcal{N}\subset\mathcal{F}_{\rho_0}(0\beta)$. Using the fibered product with one factor a manifold $\mathcal{D}$ filling $\mathcal{F}^{\ast}(-\beta)$ and the other $\mathcal{M}^{\ast}(\beta)$, we are able to fill the boundary $ \mathcal{N} \subset \partial \mathcal{F}_{\rho_0}(0\beta)$ and thereby create the cobordism $\mathcal{B}=\mathcal{F}_{\rho_0}(0\beta)\cup (\mathcal{D}\times_{L}\mathcal{M}^{\ast}(\beta))$. We next analyze the stable tangent bundle of $\mathcal{B}$. The stable tangent bundles of $\mathcal{F}(0\beta)$, $\mathcal{F}^{\ast}(-\beta)$, and $\mathcal{M}^{\ast}(\beta)$ are all restrictions of index bundles over the product of the space of smooth maps $(D,\partial D)\to(\mathbb{C}^{2k},L)$, where $D$ is the $2$-disk, and a half-line. Here the Fredholm problem at a point $(u,r)$ is a Cauchy-Riemann operator with Lagrangian boundary condition given by the Lagrangian tangent planes of $L$ along $u|_{\partial D}$. 
We view $L$ as a piecewise linear ({\sc pl}) embedding, which allows us to study these index bundles very explicitly. First, $L$ is {\sc pl}-homeomorphic to the manifold $W'$ obtained by Lagrange surgery on the Whitney sphere and, being embedded in double dimension, there is an ambient {\sc pl} isotopy taking $L$ to $W'$. Second, the isotopy can be covered by a homotopy of (stable) Lagrangian Gauss maps into the Lagrangian Grassmannian. Consequently, if an index bundle associated to $W'$ is stably trivial, then so is the corresponding bundle associated to $L$. Restriction to the boundary gives a homotopy equivalence from the space of maps $(D,\partial D)\to (\mathbb{C}^{2k},W')$ to the free loop space of $W'\approx S^{1}\times S^{2k-1}$. Using the energy functional of the standard metric on $S^{2k-1}$, we get a $(6k-7)$-skeleton for the free loop space, over which we give an explicit trivialization of the index bundles. This shows that the tangent bundles of $\mathcal{F}(0\beta)$, $\mathcal{F}^{\ast}(-\beta)$, and $\mathcal{M}^{\ast}(\beta)$ are all stably trivial. In order to relate trivializations of these three bundles near the boundary of $\mathcal{F}(0\beta)$ we introduce ``coherent trivializations'', generalizing the more familiar idea of coherent orientations, and prove existence in the case under study. This allows us to reduce the question of extension of the stable trivialization over $\mathcal{F}_{\rho_0}(0\beta)$ to the cap $\mathcal{D}\times_{L}\mathcal{M}^{\ast}(\beta)$ to a problem about spin structures. We prove the spin problem is not obstructed, and conclude that $T\mathcal{B}$ is stably trivial. Finally, adding a $2$-handle to $\mathcal{B}$, we produce a parallelizable filling for the homotopy sphere $K$. 
The filling argument just outlined has much in common with the argument used by Abouzaid in \cite{Abouzaid}, where he started from an embedding $T^*S^{4k+1} \to \mathbb{C}\mathbb{P}^{2k} \times \mathbb{C}^{2k+1}$, and a Hamiltonian isotopy displacing the image. The perturbed family of Cauchy-Riemann problems underlying Theorem \ref{Thm:Main} is analogously obtained from a Hamiltonian isotopy which displaces $L$ from itself. In Abouzaid's case, there were two bubble configurations, leading to additional complications. His filling was accordingly built from a smooth structure constructed on a CW-approximation of the actual moduli space of Floer disks. Our argument is more direct, and gives a $C^{1}$-structure on the compactified moduli space itself. \subsection{Lagrangian embeddings} Because most of the argument for Theorem \ref{Thm:Main} is carried out on the embedded Lagrange surgery, the method of proof also gives results for Lagrangian embeddings, provided a homotopy obstruction arising from the stable Lagrangian Gauss map vanishes. If the dimension is divisible by 8 then the homotopy group in which the obstruction lies is zero. \begin{Corollary} \label{Cor:First} If $k\ge 1$, then a monotone Lagrangian submanifold $L\subset \mathbb{C}^{8k}$ of minimal Maslov number $8k$ bounds a parallelizable manifold. \end{Corollary} Any such manifold is known \cite[Theorem 1.7]{Damian} to be a fiber bundle over $S^1$ with fiber a homotopy $(8k-1)$-sphere $Q$. For general homotopy spheres $Q$, the mapping class group $\pi_0 \textrm{Diff}(Q)$ is unknown. However, if the fiber is the standard sphere, $Q \approx S^{8k-1}$, then the mapping torus (being orientable) is diffeomorphic to a connected sum $(S^1 \times S^{8k-1}) \# \Sigma^{8k}$ for some homotopy $8k$-sphere $\Sigma$, cf.~Remark \ref{Rem:Cerf}, and such a manifold framed bounds only if $\Sigma$ is diffeomorphic to $S^{8k}$. 
Thus, Corollary \ref{Cor:First} implies that such a monotone $L$ is either the obvious product of standard spheres, or an exotic sphere bundle. To state a further result, let $P=S^{1}\times S^{8k-1}$, $k\geq 1$, and recall that \cite[Theorem 1(c)]{SchultzA} any smooth manifold homotopy equivalent to $P$ is diffeomorphic to $P\#\Sigma$ for some homotopy sphere $\Sigma$ and that $P\#\Sigma$ is diffeomorphic to $P$ if and only if $\Sigma$ is diffeomorphic to $S^{8k}$, \cite[Theorem A]{Schultz}. \begin{Corollary} \label{Cor:Main} If $k\geq 1$ and $\Sigma$ is a homotopy $8k$-sphere, then the cotangent bundles $T^{\ast}P$ and $T^{\ast}(P\#\Sigma)$ are symplectomorphic if and only if $\Sigma \approx S^{8k}$ is the sphere. \end{Corollary} We remark that $T^*(P\# \Sigma)$ and $T^*P$ \emph{are} diffeomorphic, cf.~Remark \ref{Rem:Diffeomorphic}. Corollary \ref{Cor:Main} is analogous to Abouzaid's Theorem \cite{Abouzaid} which says that if $\Sigma$ is a homotopy $(4k+1)$-sphere which does not bound a parallelizable manifold then $T^*\Sigma$ is not symplectomorphic to $T^*S^{4k+1}$. It contributes to recent progress on Arnold's ``nearby Lagrangian submanifold" conjecture \cite{Arnold:firststeps}, which asserts in particular that $T^*M$ and $T^*N$ are symplectomorphic only when $M$ and $N$ are diffeomorphic. \subsection{Gluing results and the odd-dimensional case} The second half of the paper contains the gluing analysis needed to obtain $C^1$-smooth structures (and hence tangent bundles) on the compactified moduli space and on $\mathcal{B}$. Constructions of such $C^1$-structures are complicated by the fact that holomorphic disks bubbling off have automorphisms. We overcome these complications by using intersections with ambient hypersurfaces to stabilize the domains of the bubbles. In order to control these intersection points as the disks vary in the moduli space, we are led to study moduli spaces with jet conditions. 
To facilitate that study we make the Lagrangian $L$ real analytic and take the almost complex structure standard in a neighborhood of $L$. This provides standard local solutions at boundary points with arbitrary $m$-jet for any $m\ge 0$, which allow us to deal with jet-conditions using essentially finite dimensional techniques. Because we work in $\mathbb{C}^n$, and the holomorphic bubble disks all realize primitive relative homotopy classes, our analytical arguments are rather explicit. In particular, geometric arguments suffice for transversality, there is no need for abstract perturbations, and there are no orbifold complications. The gluing analysis applies to a Lagrangian embedding $L \to \mathbb{C}^n$ of minimal Maslov number $n$ and with $\pi_1(L)=\mathbb{Z}$, with no assumption on the parity of $n$. If $n$ is odd, however, the topological side of the story is complicated at various places. The original Lagrange immersion $f\colon K \to \mathbb{C}^n$ could in principle have a unique double point of Legendrian homology grading (Maslov index) $1$, rather than $n$, in which case Lagrange surgery would not give $L$ with minimal Maslov number $n$ and the Legendrian homology would not be linearizable. Even if the Lagrangian $L$ obtained by surgery has Maslov index $n$, the bounding manifold obtained from spaces of holomorphic disks cannot be parallelizable since $L$ is not orientable; the boundary of the orientation double cover of the filling could contain two canceling copies of an exotic sphere with opposite orientations. In spite of this, it would be interesting to study the implications of the gluing result in the odd dimensional case, and we hope to address this elsewhere. \subsection{Organization of the paper} Section \ref{Sec:Top+FH} recalls the Floer homological arguments which imply, in the situation of Theorem \ref{Thm:Main}, that $K \approx \Sigma$ is a homotopy sphere. 
Section \ref{Sec:bounding} constructs a parallelizable bounding manifold for $\Sigma$, subject to establishing a suitable analytical framework for the relevant moduli spaces, and finishes with the proofs of Theorem \ref{Thm:Main} and its corollaries. The underlying analysis is deferred to Sections \ref{Sec:basicsetup} -- \ref{Sec:gluing2}. Whilst much of this is encompassed by standard pseudo-holomorphic curve theory, we include enough background to give an essentially complete proof of the notable exception, Theorem \ref{Thm:gluing}, which constructs a $C^1$-structure on a certain compactified moduli space of Floer holomorphic disks. Finally, Section \ref{Sec:IndexBundle} discusses index bundles over the loop space of $S^{1}\times S^{2k-1}$, and constructs coherent trivializations. \smallskip {\em Acknowledgements.} T.E.~is partially supported by the G\"oran Gustafsson Foundation for Research in Natural Sciences and Medicine. I.S.~is partially supported by European Research Council grant ERC-2007-StG-205349. The authors are indebted to Mohammed Abouzaid and Paul Seidel for helpful conversations. \section{Topological restrictions from Floer homology} \label{Sec:Top+FH} In this section we discuss topological constraints on exact Lagrangian immersions with a single double point that can be derived from Floer homological arguments. \subsection{Legendrian homology} Let $f\colon K\to \mathbb{C}^{n}$ be an exact Lagrangian immersion of an orientable $2k$-manifold with a single transverse double point. As explained in Section \ref{Sec:Intro} there is a Legendrian lift $\tilde f=f\times z\colon K\to\mathbb{C}^{n}\times\mathbb{R}$. In this case $\tilde f$ is necessarily (and not just generically) an embedding. To see this, note that if $\tilde{f}$ were not an embedding, there could be no non-constant holomorphic disk with boundary on $K$ (the area of such a disk is a positive multiple of the length of the Reeb chord of the Legendrian lift, and this length vanishes if the lift still has a double point).
The Legendrian homology of $K$, or of two parallel copies $K\sqcup K$ of $K$, could then be linearized; the linearized Legendrian homology of $K \sqcup K$ would equal $H_\ast(K)$ by \cite{EESa} but this is impossible ($K$ is displaceable so the linearized homology must vanish). Since $K$ is orientable the Maslov class of $K$ is even and we can define a $\mathbb{Z}_2$-grading on the Reeb chords of $\tilde f$, see \cite{EES1}. If $a$ is a Reeb chord then we write $|a|_2\in\mathbb{Z}_2=\{0,1\}$ for this grading. Furthermore, in case the Maslov class of $K$ vanishes then we get a well defined $\mathbb{Z}$-grading $|a|$ of Reeb chords $a$. Assume now that $n=2k$. Then we have the following relation between Reeb chord gradings and Euler characteristic. \begin{Lemma} Let $g\colon M\to\mathbb{C}^{2k}$ be a self transverse exact Lagrangian immersion of an orientable closed $2k$-manifold such that the Legendrian lift $\tilde g$ of $g$ is an embedding. Let $\mathcal{Q}$ denote the set of Reeb chords of $\tilde g$. Then the Euler characteristic $\chi(M)$ of $M$ satisfies \[ \chi(M)=2\sum_{c\in\mathcal{Q}}(-1)^{|c|_2}. \] In particular, if $f\colon K\to\mathbb{C}^{2k}$ is an exact Lagrangian immersion with exactly one double point then $\chi(K)=\pm 2$, where the sign is positive if the unique Reeb chord $a$ of $\tilde f$ satisfies $|a|_2=0$ and negative if $|a|_2=1$. \end{Lemma} \begin{proof} The Euler characteristic formula is straightforward, see \cite[Equation (3.2) and Proposition 3.2 (2)]{EES2}, and the last statement is an immediate consequence of it. \end{proof} Consider now a self transverse exact Lagrangian immersion $f\colon K\to\mathbb{C}^{2k}$ with a single double point and corresponding unique Reeb chord $a$ of the Legendrian lift $\tilde f$. If $|a|_2=1$ then, in principle, one could have the relation $\partial a =1$ in the Legendrian DGA. 
The Legendrian DGA would then not be linearizable, and could not be used to draw conclusions about the topology of $K$ (this situation corresponds to the Lagrangian Floer homology of the immersed Lagrangian being \emph{obstructed}). If on the other hand $|a|_2=0$ then we have the following. \begin{Lemma}\label{Lem:Maslov=0} Let $f\colon K\to\mathbb{C}^{2k}$ be an exact Lagrangian immersion such that the unique Reeb chord $a$ of the Legendrian lift $\tilde f$ satisfies $|a|_2=0$. Then $K$ is a $\mathbb{Z}$-homology sphere. Consequently, its Maslov class vanishes, and moreover $|a|=2k$. \end{Lemma} \begin{proof} Consider first $\mathbb{Z}_{2}$-coefficients; $|a|_2=0$ implies that the Legendrian DGA of $\tilde f$ is linearizable. It then follows from the $\mathbb{Z}_2$-graded Morse inequalities for linearized Legendrian homology over $\mathbb{Z}_{2}$, see \cite[Theorem 1.2]{EESa}, that $\dim(H_\ast(K;\mathbb{Z}_2))\le 2$. On the other hand $\dim(H_0(K;\mathbb{Z}_2))=\dim(H_{2k}(K;\mathbb{Z}_2))=1$ and we conclude that $H_{j}(K;\mathbb{Z}_2)=0$ for $j\ne 0,2k$. In particular, $H^{1}(K;\mathbb{Z}_2)=0=H^{2}(K;\mathbb{Z}_2)$, so $K$ is spin and we can define the Legendrian homology DGA with arbitrary field coefficients. Repeating the above argument with $\mathbb{Q}$-coefficients shows that $H^{1}(K;\mathbb{Q})=0$. Hence the Maslov class of $f$ vanishes and there is a $\mathbb{Z}$-grading on the linearized Legendrian homology. Repeating the argument again with $\mathbb{Z}_p$-coefficients for each prime $p$ shows that $H_j(K;\mathbb{Z})=0$, $j\ne 0,2k$, and then \cite[Theorem 5.5]{EESa} implies that $|a|=2k$. \end{proof} \subsection{Lagrange surgery} Let $f\colon K \to \mathbb{C}^{2k}$ be an exact Lagrangian immersion with a unique transverse double point and Legendrian lift $\tilde f=f\times z$, and suppose the conditions of Lemma \ref{Lem:Maslov=0} hold. We write $\{p_{+},p_{-}\}\subset K$ for the preimage of the double point, $f(p_+)=f(p_-)$, and choose notation so that $z(p_+)>z(p_-)$.
There is a surgery procedure which resolves the double point \cite{Polterovich}. It gives a Lagrangian embedding of $K$ with a $1$-handle attached, $K\# (S^{2k-1}\times S^{1})$. In fact, there are two ways of adding the handle, which lead to submanifolds that are not Lagrangian isotopic. To simplify our discussion of this surgery procedure we first note that after Hamiltonian isotopy we may assume that the double point is located at $0\in\mathbb{C}^{2k}$ and that near $0$ the two sheets of $f(K)$ agree with \begin{equation}\label{eq:rightangles} \Gamma \ = \ \mathbb{R}^{2k} \cup i\mathbb{R}^{2k} \ \subset \ \mathbb{C}^{2k}. \end{equation} Since the Maslov class of $f$ vanishes we can define a \emph{phase function} $\phi\colon K\to \mathbb{R}$ which measures local contributions to the Maslov index and is unique up to additive constant, see \cite{Seidel:graded} or \cite[p.6]{BEE}. In terms of this function we have \begin{equation}\label{Eq:phasegrading} |a|=\phi(p_-)-\phi(p_+)+k-1. \end{equation} Lagrange surgery is defined locally. Consider $\Gamma$ as above and let $x+iy$ be the standard coordinate on $\mathbb{C}$. Write \[ Q_{\pm}^{\ast}=\{x+iy\colon x\ge 0,\; \pm y\ge 0,\; x+iy\ne 0\}. \] Fix smooth embedded paths $\gamma_\pm\colon\mathbb{R}\to Q_{\pm}^{\ast}$ which satisfy \[ \gamma_\pm(t) = \begin{cases} - t &\text{for }t < -\epsilon,\\ \pm it &\text{for }t > \epsilon. \end{cases} \] Thinking of $S^{2k-1}$ as the unit sphere in $\mathbb{R}^{2k}$ we define the two \emph{Lagrange handles} $H_{\pm}$ as follows: \begin{equation}\label{eq:Lhandle} H_\pm = \bigcup_{t\in\mathbb{R}} \gamma_\pm(t)\cdot S^{2k-1} \subset \mathbb{C}^{2k}, \end{equation} where $\zeta\cdot$ denotes multiplication by the complex scalar $\zeta$. Then $H_{\pm}$ are embedded Lagrangian submanifolds diffeomorphic to $S^{2k-1} \times \mathbb{R}$ and coinciding with $\Gamma$ outside of a ball of radius $2\epsilon$ around the origin.
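\begin{Remark}
That $H_{\pm}$ is Lagrangian is a routine check, which we include for completeness. A tangent vector to $H_{\pm}$ at a point $\gamma_{\pm}(t)\cdot v$, $v\in S^{2k-1}$, is a linear combination of $\dot\gamma_{\pm}(t)\cdot v$ and vectors $\gamma_{\pm}(t)\cdot w$ with $w\in\mathbb{R}^{2k}$, $w\perp v$. The standard symplectic form $\omega_{0}$ on $\mathbb{C}^{2k}$ satisfies $\omega_{0}(a\cdot u,b\cdot w)=\operatorname{Im}(\bar a b)\,\langle u,w\rangle$ for $a,b\in\mathbb{C}$ and $u,w\in\mathbb{R}^{2k}$, so
\[
\omega_{0}\bigl(\gamma_{\pm}(t)\cdot w_{1},\,\gamma_{\pm}(t)\cdot w_{2}\bigr)
=\operatorname{Im}\bigl(|\gamma_{\pm}(t)|^{2}\bigr)\langle w_{1},w_{2}\rangle=0
\quad\text{and}\quad
\omega_{0}\bigl(\dot\gamma_{\pm}(t)\cdot v,\,\gamma_{\pm}(t)\cdot w\bigr)
=\operatorname{Im}\bigl(\overline{\dot\gamma_{\pm}(t)}\,\gamma_{\pm}(t)\bigr)\langle v,w\rangle=0,
\]
where the second expression vanishes since $\langle v,w\rangle=0$. As $\dim H_{\pm}=2k$, the submanifold $H_{\pm}$ is Lagrangian.
\end{Remark}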
As above there is a phase function $\phi\colon H_{\pm}\to\mathbb{R}$ which is unique up to additive constant. \begin{Lemma}\label{Lem:LhandleMaslov} Let $\phi\colon H_{\pm}\to\mathbb{R}$ be the phase function which equals $0$ on $H_{\pm}\cap\mathbb{R}^{2k}$. Then \begin{enumerate} \item on $H_+$, $\phi(\eta)=k-1$ for $\eta\in i\mathbb{R}^{2k}\cap H_+$, and \item on $H_-$, $\phi(\eta)=-(k-1)$ for $\eta\in i\mathbb{R}^{2k}\cap H_-$. \end{enumerate} \end{Lemma} \begin{proof} Consider the tangent planes along the path $\gamma_{\pm}(t)\cdot v$, for $v\in S^{2k-1}$. On the subspace of the tangent space perpendicular to $v$ this is just multiplication by $\gamma_{\pm}(t)$, which contributes $\pm\frac{2k-1}{2}$ to the phase function on $H_{\pm}$. The tangent vector $\dot\gamma_{\pm}(t)\cdot v$ makes a quarter rotation of sign $\mp$ on $H_{\pm}$, which contributes $\mp\frac{1}{2}$ to the phase function. \end{proof} We next discuss Lagrange surgery on $f\colon K\to\mathbb{C}^{2k}$. Choose coordinates so that $p_-$ lies in the $\mathbb{R}^{2k}$-sheet of $f$ and $p_+$ in the $i\mathbb{R}^{2k}$-sheet at the double point $0$ of $f$. Removing $3\epsilon$-disks in $\mathbb{R}^{2k}$ and $i\mathbb{R}^{2k}$ around $0$ and replacing them by the compact part of $H_{\pm}$ bounded by the spheres of radius $3\epsilon$ in $\mathbb{R}^{2k}$ and $i\mathbb{R}^{2k}$, we construct two embedded Lagrangian submanifolds $L_{\pm}$. Topologically, $L_{\pm}$ is $K$ with a $1$-handle attached. In particular, $H_1(L_{\pm};\mathbb{Z})\cong\mathbb{Z}$. If $L\subset \mathbb{C}^{n}$ is a Lagrangian submanifold, let $\mu_{L}\in\mathbb{Z}_{\ge 0}$ denote its minimal Maslov number. \begin{Lemma}\label{Lem:Maslovnumber} Let $f\colon K\to\mathbb{C}^{2k}$ be a Lagrangian immersion as in Lemma \ref{Lem:Maslov=0} and let $L_{\pm}$ be constructed by Lagrange surgery on $f$, as described above. Then \[ \mu_{L_+}=2k\quad\text{ and }\quad \mu_{L_-}=2.
\] \end{Lemma} \begin{Remark} Note that $L_{\pm}$ is orientable if and only if $\mu_{L_{\pm}}$ is even. Hence, $L_{\pm}$ is orientable. \end{Remark} \begin{proof} Observe that the Maslov index of the loop which starts in the $i\mathbb{R}^{2k}$-sheet of $K$ at $p'_+$ in the $3\epsilon$-sphere around $p_+$, follows $K$ to $p'_-$ in the $3\epsilon$-sphere around $p_-$ in the $\mathbb{R}^{2k}$-sheet, and then connects $p'_-$ to $p'_+$ across the added handle $H_{\pm}$ equals \[ \phi_{K}(p'_-)-\phi_{K}(p'_+)-\phi_{H_{\pm}}(p'_-)+\phi_{H_{\pm}}(p'_+), \] where $\phi_{L}$ denotes a phase function of the Lagrangian $L$. The lemma then follows from \eqref{Eq:phasegrading} and Lemma \ref{Lem:LhandleMaslov}. \end{proof} \begin{Example}\label{Ex:LefschetzWhitney} Let $\pi\colon \mathbb{C}^{2k} \to \mathbb{C}$ be the model Lefschetz fibration \[ \pi(z_1,\ldots, z_{2k})= \sum_{j=1}^{2k} z_j^2. \] Let $\gamma$ be a smooth embedded closed curve in $\mathbb{C}$ passing through the origin. The union of the vanishing cycles of $\pi$ along $\gamma$ defines an immersed Lagrangian sphere $W\subset\mathbb{C}^{2k}$ with a single transverse double point, which is one model for the Whitney immersion, see Section \ref{Sec:Intro}. For instance, if $\gamma \cap B_0(\delta) = (-\delta, \delta) \subset \mathbb{R}$, then the vanishing cycles $V_t = \sqrt{t}S^{2k-1} \subset \pi^{-1}(t)$ along $\gamma$ are locally given by \[ \bigcup_{t \in [0,\delta)} V_t \ = \ \mathbb{R}^{2k}; \quad \bigcup_{t\in (-\delta,0]} V_t = i\mathbb{R}^{2k} \] which meet transversely. The two surgeries of $W$ are given by perturbing $\gamma$ to an embedded curve in $\mathbb{C}^*$ which either does or does not enclose the origin. The Maslov number $2k$ surgery $W_+ = W'$ is Lagrangian isotopic to the Lagrangian $S^{1}\times S^{2k-1}$ in the model Lefschetz fibration, \[ W' \ = \ \bigcup_{t\in S^1} \sqrt{t} S^{2k-1}, \] which is also the mapping torus of the antipodal map on $S^{2k-1}$.
\end{Example} \subsection{Homotopy type} In \cite{Damian}, Damian defined the ``lifted Floer homology'' of monotone Lagrangian embeddings, which is essentially the Floer homology of the given embedding equipped with a local system defined by pushing forward the trivial local system from the universal cover. He used it to derive constraints on the possible fundamental groups of monotone Lagrangian submanifolds of $\mathbb{C}^n$ with large minimal Maslov number. For the case under consideration here, his work implies the following. \begin{Lemma}\label{Lem:Damian} With notation as in Lemma \ref{Lem:Maslovnumber}, if $k\ge 3$ then $L_+$ is a smooth manifold which fibers smoothly over the circle with fiber a homotopy $(2k-1)$-sphere. \end{Lemma} \begin{proof} Lemma \ref{Lem:Maslovnumber} implies that $L_+$ is a monotone Lagrangian submanifold of $\mathbb{C}^{2k}$ with minimal Maslov number $2k$. Since every compact subset of $\mathbb{C}^{2k}$ is displaceable by Hamiltonian isotopy, \cite[Theorem 7, last two lines]{Damian} gives the result. (Note that existence of a \emph{smooth} fibration is a consequence of the exactness of the Morse-Novikov inequalities, whereby vanishing of Novikov homology implies existence of a non-singular closed 1-form, see \cite{Pazhitnov}.) \end{proof} \begin{Corollary}\label{Cor:LandKdiffeo} The manifold $K$ is diffeomorphic to a homotopy sphere $\Sigma$ of dimension $2k$ and the manifold $L_+$ is diffeomorphic to $(S^{1}\times S^{2k-1})\#\Sigma$. \end{Corollary} \begin{proof} Smoothly, the manifold $L_+$ is obtained by adding a $1$-handle to $K$. To obtain $K$ from $L_+$ we add a $2$-handle that cancels the original $1$-handle. Since $L_+$ fibers over the circle with fiber a homotopy sphere it follows that $K$ is a homotopy sphere. Finally, it is clear that the manifold that results from adding a $1$-handle to $\Sigma$ is $(S^{1}\times S^{2k-1})\#\Sigma$.
\end{proof} \begin{Remark} \label{Rem:Cerf} Damian's result, Lemma \ref{Lem:Damian}, does not itself exclude exotic spheres from admitting Lagrangian immersions with a single double point. For $n\geq 6$, Cerf's pseudo-isotopy theorem \cite{Cerf} implies that the group $\Theta_n$ of $h$-cobordism classes of homotopy spheres corresponds bijectively to the (orientation-preserving) mapping class group $\pi_0 \textrm{Diff} (S^{n-1})$ of the $(n-1)$-sphere, via the construction of homotopy spheres from clutching functions. One can correspondingly re-interpret the connected sum $\Sigma' = \Sigma \# (S^{1}\times S^{2k-1})$ as the total space of a fiber bundle over the circle, with fiber $S^{2k-1}$ and monodromy the diffeomorphism corresponding to $\Sigma$. \end{Remark} \begin{Remark} \label{Rem:Diffeomorphic} Let $P$ be the manifold $S^1 \times S^{2k-1}$. For any exotic sphere $\Sigma$, the manifolds $T^*P$ and $T^*(P \#\Sigma)$ \emph{are} diffeomorphic. Indeed, both $P$ and $P \# \Sigma$ admit Morse functions with exactly 4 critical points. The difference in their differentiable structures comes from the attaching maps of their top-dimensional handles. A handle decomposition for a cotangent bundle is obtained from a handle decomposition of its base by thickening all handles. In particular, for the cotangent bundles considered here, the attaching $(2k-1)$-spheres for top-dimensional handles have codimension $2k$ in the boundary and are thus smoothly isotopic. \end{Remark} \section{Cobordisms from Cauchy-Riemann problems}\label{Sec:bounding} In this section we construct a parallelizable manifold bounding any homotopy $2k$-sphere $\Sigma$ which admits an exact Lagrangian immersion into $\mathbb{C}^{2k}$ with only one double point, thereby proving Theorem \ref{Thm:Main}. The proof uses moduli spaces of holomorphic disks.
In order not to obscure the steps of the construction, we defer the functional analytic arguments needed to establish transversality and gluing results for these moduli spaces to Sections \ref{Sec:basicsetup} -- \ref{Sec:gluing2}. We will use the following notation below. Consider standard coordinates $x+iy=(x_1+iy_1,\dots,x_n+iy_n)$ on $\mathbb{C}^{n}$. We write \begin{equation}\label{Eq:standardnotation} \omega_0=\sum_{j=1}^{n} dx_j\wedge dy_j\quad\text{ and }\quad J_0\colon T\mathbb{C}^{n}\to T\mathbb{C}^{n} \end{equation} for the standard symplectic structure and the standard complex structure (which corresponds to multiplication by the complex unit $i$) on $\mathbb{C}^{n}$, respectively. If $(M, \omega)$ is a symplectic manifold and $H\colon M\times[0,1]\to\mathbb{R}$ is a smooth time-dependent Hamiltonian function then we write $X_H\colon M\times[0,1]\to TM$ for the time-dependent Hamiltonian vector field of $H$, defined by the equation $\omega(X_H, \cdot) = dH_t$, where $H_t\colon M\to \mathbb{R}$ denotes the restriction $H|_{M\times\{t\}}$. \subsection{Cauchy-Riemann equations and Hamiltonian displacement}\label{ssec:ham} Let $L \subset \mathbb{C}^n$ be a Lagrangian submanifold with $\pi_1(L)=\mathbb{Z}$ and of minimal Maslov number $n$. We assume that $L$ is real analytic, see Lemma \ref{Lem:reanbdry}. Consider a compactly supported time-dependent Hamiltonian function $H\colon \mathbb{C}^n \times [0,1] \to \mathbb{R}$ and suppose that the following hold. \begin{itemize} \item There exists $\epsilon_0>0$ such that $H_t$ is constant for $t\in [0,\epsilon_0]\cup[1-\epsilon_0,1]$. \item The time $1$ flow $\phi^1$ of the Hamiltonian vector field $X_{H}$ of $H$ displaces $L$ from itself: $\phi^1 (L) \cap L = \varnothing$. \end{itemize} Consider the strip $\mathbb{R}\times[0,1]\subset\mathbb{C}$ with coordinates $s+it$.
Fix a smooth family of functions $\alpha_r\colon \mathbb{R}\to[0,1]$, where $r\in[0,\infty)$, with the following properties: \begin{itemize} \item $\alpha_{r}=1$ for $|s|\le r$ and $\alpha_{r}=0$ for $|s|\ge r+1$. \item $\frac{d\alpha_{r}}{ds}\ge 0$ for $s\le 0$ and $\frac{d\alpha_{r}}{ds}\le 0$ for $s\ge 0$. \end{itemize} We also fix a non-decreasing smooth function $\beta\colon [0,\infty)\to[0,1]$ such that $\beta(r)=0$ near $r=0$ and $\beta(r)=1$ for $r\ge 1$. If $D$ denotes the unit disk in $\mathbb{C}$, then there is a unique conformal map \begin{equation} \label{Eqn:xi} \xi\colon \mathbb{R}\times[0,1]\to D \quad \text{with} \quad \xi(\pm \infty)=\pm 1 \ \text{and} \ \xi(0)=-i, \end{equation} with inverse \begin{equation}\label{Eqn:xi-1} \xi^{-1}=s+it\colon D\to\mathbb{R}\times[0,1]. \end{equation} Write $\gamma_r$ for the $1$-form on $D$ such that \[ \xi^{\ast}\gamma_{r}=\beta(r)\alpha_{r}(s)\,dt. \] Fix a small $\delta>0$ and let $\mathcal{J}_L$ denote the space of almost complex structures on $\mathbb{C}^{n}$ which agree with $J_0$ in a $\delta$-neighborhood of $L$. Associated to $J\in \mathcal{J}_L$ and $r\in [0,\infty)$ is a Floer equation (which is a perturbed Cauchy-Riemann equation): \begin{equation} \label{Eqn:CR1} (du + \gamma_r \otimes X_{H})^{0,1} = 0 \quad \text{for} \ u\in C^{\infty}\left((D^2, \partial D),(\mathbb{C}^n, L)\right). \end{equation} Here the vector field $X_{H}$ depends on $z\in D$, and is given by $X_{H}(u(z),t(z))$, see \eqref{Eqn:xi-1}, and $A^{0,1}$ denotes the $(J, i)$ complex anti-linear part of the linear map $A\colon TD\to T\mathbb{C}^{n}$. Since $H_t$ is constant near $t=0,1$ and since $\gamma_r$ has compact support, it follows that for each $r_0$ there exists $\delta_0>0$ such that $\gamma_r\otimes X_H=0$ in a $\delta_0$-neighborhood of $\partial D$ for all $r\le r_0$. Since in addition $J=J_0$ near $L$, \eqref{Eqn:CR1} reduces to the ordinary $\bar\partial$-equation $\bar\partial_{J_0} u=du^{0,1}=0$ in this $\delta_0$-neighborhood.
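\begin{Remark}
For concreteness we record an explicit formula for the conformal map \eqref{Eqn:xi}; the formula is not needed in what follows. Composing the biholomorphism $z\mapsto e^{\pi z}$ from $\mathbb{R}\times[0,1]$ to the closed upper half plane with the Cayley transform $w\mapsto\frac{w-i}{w+i}$ gives
\[
\xi(z)=\frac{e^{\pi z}-i}{e^{\pi z}+i},
\]
and one checks directly that $\xi(\pm\infty)=\pm 1$ and $\xi(0)=-i$.
\end{Remark}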
Via the identification $\xi$, the equation in local coordinates $s+it\in\mathbb{R}\times[0,1]$ is \begin{equation} \label{Eqn:CR-2} \partial_s u + J \left( \partial_t u - \beta(r)\alpha_r(s) X_H(u(\xi(s,t)), t)\right) = 0, \end{equation} with boundary conditions $u(s,0)\in L$ and $u(s,1)\in L$. Removal of singularities implies that any solution of \eqref{Eqn:CR-2} with finite energy \[ \int_{\mathbb{R} \times [0,1]} (\left| \partial_s u \right|^2+\left| \partial_t u \right|^2)\; ds\wedge dt \ < \infty \] extends smoothly to the disk, hence gives a solution of the Floer equation \eqref{Eqn:CR1}. We call solutions of the Floer equation \emph{Floer holomorphic disks}. Recall that $\pi_1(L)=\mathbb{Z}$ and fix the generator $\beta\in\pi_1(L)$ of Maslov number $+n$. Restriction to the boundary gives an isomorphism $\pi_2(\mathbb{C}^{n},L)\to\pi_1(L)$ and thus homotopy classes of disks $u\colon (D,\partial D)\to(\mathbb{C}^{n},L)$ are indexed by the integers. In particular, we have moduli spaces of Floer holomorphic disks in classes $j\beta$ for any $j\in\mathbb{Z}$. Fix any $k\geq 2$. \begin{Definition} Let $\mathcal{F}(j\beta)$ denote the space of pairs $(u,r)\in C^{m}((D,\partial D),(\mathbb{C}^{n},L))\times[0,\infty)$, $m>0$, of solutions to \eqref{Eqn:CR1} in homotopy class $j\beta$. The fiber of the canonical map $\mathcal{F}(j\beta) \rightarrow [0,\infty)$, $(u,r) \mapsto r$, will be denoted $\mathcal{F}^r(j\beta)$. \end{Definition} Elliptic regularity implies that all solutions to \eqref{Eqn:CR1} are in fact smooth. Here we write $C^{m}$ rather than $C^{\infty}$ in order to indicate that $\mathcal{F}(j\beta)$ inherits its topology from a Banach space, see Remark \ref{Rem:normsame}. In Section \ref{Sec:basicsetup} we set up the functional analytic framework for studying moduli spaces of Floer holomorphic disks that we actually use to define $C^{1}$-structures, and we refer there for a more precise treatment.
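\begin{Remark}
We record the standard a priori energy estimate for solutions of \eqref{Eqn:CR-2}; the computation is routine and included only for orientation. Writing $\alpha(s)=\beta(r)\alpha_{r}(s)$ and using $\omega_0(X_H,\cdot)=dH_t$ together with \eqref{Eqn:CR-2}, one finds the pointwise identity $|\partial_s u|^{2}=u^{\ast}\omega_0(\partial_s,\partial_t)+\alpha(s)\,\partial_s\bigl(H_t(u)\bigr)$, where the norm is taken with respect to $\omega_0(\cdot,J\cdot)$ (assuming, as we may, that $J$ is $\omega_0$-compatible). Integrating by parts in $s$ and using the sign conditions on $\frac{d\alpha_r}{ds}$ then gives, for $u$ in the homotopy class $j\beta$,
\[
\int_{\mathbb{R}\times[0,1]}|\partial_s u|^{2}\,ds\wedge dt
\ \le\ \omega_0(j\beta)+\int_{0}^{1}\bigl(\max_{\mathbb{C}^{n}} H_t-\min_{\mathbb{C}^{n}} H_t\bigr)\,dt,
\]
where $\omega_0(j\beta)$ denotes the symplectic area of the class $j\beta\in\pi_2(\mathbb{C}^{n},L)$.
\end{Remark>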
In Lemma \ref{Lem:tvmdli} we recall the well-known proof of the following result: \begin{Proposition} \label{Prop:Fismfld} When $r=0$, every solution to \eqref{Eqn:CR1} is constant, $\mathcal{F}^{0}(0\beta)$ is transversally cut out and canonically $C^{1}$-diffeomorphic to $L$, and when $r\gg0$, the equation has no solutions. Furthermore, for generic $J\in \mathcal{J}_{L}$ and Hamiltonian function $H\colon \mathbb{C}^{n}\times[0,1]\to\mathbb{R}$: \begin{enumerate} \item $\mathcal{F}(0\beta)$ is a $C^{1}$-smooth $(n+1)$-manifold, with boundary $\partial \mathcal{F}(0\beta) = \mathcal{F}^0(0\beta)= L$. \item $\mathcal{F}(-\beta)$ is a $C^{1}$-smooth closed 1-manifold and $\mathcal{F}(j\beta)$ is empty for $j\le -2$. \end{enumerate} \end{Proposition} \begin{Remark}\label{Rem:dimhmtpy} The space of solutions to \eqref{Eqn:CR1} in the relative homotopy class $j\beta \in \pi_2(\mathbb{C}^n, L)$ for \emph{fixed} $r$ has formal dimension \begin{equation} \label{Eqn:Dimension} \dim\left(\mathcal{F}^r(j\beta)\right)= n + \mu(j\beta) = n(1+j), \end{equation} by the Riemann-Roch theorem, where $\mu$ denotes the Maslov index, and adding $+1$ for the parameter $r\in[0,\infty)$ gives the dimensions $\dim(\mathcal{F}(j\beta))=n(1+j)+1$ for the moduli spaces appearing in Proposition \ref{Prop:Fismfld}. \end{Remark} \begin{Remark}\label{Rem:normsame} The $C^{1}$-smooth structure on the moduli spaces, which is inherited from the ambient configuration space (a Banach manifold), is actually independent of the choice of that configuration space: by elliptic regularity, all Sobolev norms in question are equivalent to the $C^{0}$-norm on the space of solutions. \end{Remark} \subsection{Broken disks} The manifold $\mathcal{F}(0\beta)$ of Floer holomorphic disks is necessarily non-compact since the $\mathbb{Z}_{2}$-degree of the boundary evaluation map into $L$, restricted to $\partial \mathcal{F}(0\beta)=\mathcal{F}^{0}(0\beta)$, equals $1$. 
However, it admits a canonical compactification, due to Gromov \cite{Gromov} and Floer \cite{Floer:lagrangian}, which we shall use to obtain a bounding manifold for $L$. The Gromov-Floer compactification \[ \overline{\mathcal{F}}(0\beta) \ = \ \mathcal{F}(0\beta) \cup \mathcal{F}^{\mathrm{bd}}(0\beta) \] is obtained by adding broken disks, which arise as limits of solutions to \eqref{Eqn:CR1}. To describe a broken disk we first introduce the notion of a \emph{holomorphic tree} (sometimes called a bubble tree), which is a finite rooted tree $\Gamma$ such that: to each vertex $v\in\Gamma$ corresponds a $J_0$-holomorphic disk $u_{v}\colon (D,\partial D)\to(\mathbb{C}^{n},L)$; to the edges $e\in\Gamma$ adjacent to $v$ correspond pairwise distinct boundary marked points $\zeta_{e}\in\partial D$ in the domain of $u_v$; and there is one additional marked point on the boundary of the source disk $D$ of the root (distinct from all other marked points on $\partial D$), which we call the \emph{root point}. We require that each holomorphic disk is stable (i.e.~either non-constant or constant with at least three distinct boundary marked points), and, furthermore, that if $v$ and $w$ are vertices of $\Gamma$ connected by an edge $e$ then $u_{v}(\zeta_{e})=u_{w}(\zeta_{e})$, where the two marked points denoted $\zeta_{e}$ lie in the domains of $u_{v}$ and $u_{w}$, respectively. We define the homotopy class of a holomorphic tree as the sum of the homotopy classes of all its vertex disks. A broken disk in $\mathcal{F}^{\mathrm{bd}}(0\beta)$ then comprises \begin{itemize} \item a solution $u_0$ to equation \eqref{Eqn:CR1} in a homotopy class $\beta_0$ with pairwise distinct marked points $\zeta_1,\dots,\zeta_\nu$; and \item a collection of holomorphic trees $\Gamma_i$ in homotopy classes $\beta_i$, $1\leq i\leq \nu$, such that $u_0(\zeta_i)=\operatorname{ev}_{\mathrm{root}}(\Gamma_i)$, where $\operatorname{ev}_{\mathrm{root}}$ denotes evaluation at the root point.
\end{itemize} Here the homotopy classes are subject to the constraint that \begin{equation}\label{Eqn:BubbleSum} \beta_0 + \sum_{j=1}^{\nu} \beta_j \ = \ 0. \end{equation} Critically, in our situation there is a unique possible configuration of such broken solutions. To give a precise statement, we introduce some further notation. Note that when the Hamiltonian term vanishes (for instance if $H\equiv 0$), equation \eqref{Eqn:CR1} reduces to the usual Cauchy-Riemann equation \begin{equation} \label{Eqn:UsualCR} du^{0,1} = 0 \quad \text{for } \ u\colon (D,\partial D) \to (\mathbb{C}^n, L) \end{equation} for the given almost complex structure $J\in\mathcal{J}_L$. \begin{Definition} Let $\mathcal{M}(j\beta)$ denote the quotient space of solutions $u$ to \eqref{Eqn:UsualCR} in class $j\beta$, modulo the group $G=PSL_2(\mathbb{R})$ of conformal automorphisms of the disk. Furthermore, we define \begin{enumerate} \item $\mathcal{F}^*(j\beta)$ as the space of solutions to \eqref{Eqn:CR1} where the disk has a single boundary marked point; \item $\mathcal{M}^*(j\beta)$ as the space of solutions to \eqref{Eqn:UsualCR} where the disk has a single boundary marked point. \end{enumerate} \end{Definition} The space $\mathcal{F}^*(j\beta) \cong \mathcal{F}(j\beta) \times \partial D$ is diffeomorphic to a product (see Remark \ref{Rem:dimhmtpy} for the induced topology), whilst there is a natural forgetful map $\mathcal{M}^*(j\beta) \rightarrow \mathcal{M}(j\beta)$ which is a fiber bundle with fiber $G/G_1 \cong S^1$, where $G_{1}$ is the subgroup of automorphisms that fix the point $1\in\partial D$. Again, Section \ref{Sec:basicsetup} recalls the standard analytic framework for studying these moduli spaces.
\begin{Proposition} \label{Prop:Mismfld} After arbitrarily small perturbation of $J\in \mathcal{J}_{L}$ or arbitrarily small real analytic Hamiltonian isotopy of $L$, one has: \begin{enumerate} \item $\mathcal{M}(j\beta)$ is empty for $j<0$; \item $\mathcal{M}(0\beta)$ is the set of constant maps; \item $\mathcal{M}(\beta)$ is a transversely cut out closed manifold of dimension $2n-3$. \end{enumerate} \end{Proposition} That one can achieve transversality by perturbing $L$ but leaving $J=J_0$ fixed for disks in the primitive homotopy class $\beta$ is a theorem of Oh \cite{Oh:Perturb}. The other statements are standard, cf.~Lemma \ref{Lem:tvmdli}. The smooth structure on $\mathcal{M}(\beta)$ is inherited from the ambient Banach configuration space as follows. Fix a parameterization $u\colon D\to\mathbb{C}^{n}$ of an element in $\mathcal{M}(\beta)$. Since $u|_{\partial D}$ is non-constant the derivative is non-zero at some boundary point. Fixing three disjoint real codimension one hypersurfaces in $L \subset \mathbb{C}^n$ near this point gives three marked points on the boundary of any disk in a neighborhood of $u$, which we use to fix parameterizations and thereby construct a local $C^{1}$-chart on $\mathcal{M}(\beta)$ near $u$. Since $\mathcal{M}(\beta)$ is compact it has a finite cover by charts as just described. It is straightforward to check that transition functions between charts are smooth, thus giving a $C^{1}$-structure on $\mathcal{M}(\beta)$, and that any two finite covers give rise to the same $C^{1}$-structure; see Section \ref{Sec:BasicStructure} for details. The moduli spaces $\mathcal{M}^*(\beta)$ and $\mathcal{F}^*(j\beta)$ come with canonical evaluation maps to $L$, which we denote by $\operatorname{ev}$.
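\begin{Remark}
As a consistency check for Proposition \ref{Prop:FibreProduct} below, note that by Remark \ref{Rem:dimhmtpy} and Proposition \ref{Prop:Fismfld} we have $\dim\mathcal{F}^{\ast}(-\beta)=\dim\mathcal{F}(-\beta)+1=2$, while Proposition \ref{Prop:Mismfld} gives $\dim\mathcal{M}^{\ast}(\beta)=\dim\mathcal{M}(\beta)+1=2n-2$. The fiber product of the evaluation maps over the $n$-manifold $L$ therefore has dimension
\[
2+(2n-2)-n\ =\ n,
\]
as expected for the boundary of the $(n+1)$-dimensional manifold $\mathcal{F}(0\beta)$.
\end{Remark}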
Again, the following is standard, cf.~Section \ref{Sec:basicsetup} and Lemma \ref{Lem:fiberprod1}: \begin{Proposition} \label{Prop:FibreProduct} For generic $J\in\mathcal{J}_L$ and $H\in C^{\infty}(\mathbb{C}^{n}\times[0,1],\mathbb{R})$, the product of the evaluation maps \[ \operatorname{ev}\colon\mathcal{M}^*(\beta) \rightarrow L \quad \text{and} \quad \operatorname{ev}\colon \mathcal{F}^*(-\beta) \rightarrow L, \] $\operatorname{ev}\times\operatorname{ev}$, is transverse to the diagonal $\Delta_L\subset L\times L$. Hence, the Gromov-Floer boundary $\mathcal{F}^{\mathrm{bd}}(0\beta)$ is a closed $n$-manifold $C^1$-diffeomorphic to the fiber product of these maps: \[ \mathcal{F}^{\mathrm{bd}}(0\beta) \ = \ \mathcal{F}^*(-\beta) \times_{L} \mathcal{M}^*(\beta) \ = \ (\operatorname{ev}\times\operatorname{ev})^{-1}(\Delta_{L}). \] \end{Proposition} Most of Sections \ref{Sec:theboundary} and \ref{Sec:gluing2}, culminating in Theorem \ref{Thm:gluing}, is devoted to proving the following strengthening of the preceding result. \begin{Theorem} For generic $J\in\mathcal{J}_L$ and $H\in C^{\infty}(\mathbb{C}^{n}\times[0,1],\mathbb{R})$, there is a $C^{1}$-embedding \[ \Psi\colon \mathcal{F}^{\mathrm{bd}}(0\beta)\times[0,\infty)\to\mathcal{F}(0\beta) \] with image in the interior of $\mathcal{F}(0\beta)$ and such that $\mathcal{F}(0\beta)\backslash\Psi\left(\mathcal{F}^{\mathrm{bd}}(0\beta)\times(0,\infty)\right)$ is a compact manifold with boundary $L\sqcup \Psi(\mathcal{F}^{\mathrm{bd}}(0\beta)\times\{0\})$. Consequently, the compactified moduli space \[ \overline{\mathcal{F}}(0\beta) \ = \ \mathcal{F}(0\beta) \cup \mathcal{F}^{\mathrm{bd}}(0\beta) \] admits the structure of a $C^1$-smooth manifold with boundary, whose boundary is diffeomorphic to the disjoint union \[ \partial \, \overline{\mathcal{F}}(0\beta) \ = \ L \sqcup \mathcal{F}^{\mathrm{bd}}(0\beta). 
\] \end{Theorem} \subsection{Capping the outer boundary} \label{ssec:capping} To obtain a compact manifold with unique boundary component $L$, we will fill the outer boundary of $\overline{\mathcal{F}}(0\beta)$. We adapt an ingenious trick from \cite[Section 2c]{Abouzaid}. The outer boundary is diffeomorphic, via a gluing map, to a fiber product \[ \mathcal{N} = \mathcal{F}^{\mathrm{bd}}(0\beta) \, = \, \mathcal{F}^*(-\beta) \times_L \mathcal{M}^*(\beta). \] Whilst the second factor $\mathcal{M}^*(\beta)$ is an unknown orientable $(2n-2)$-manifold, the first factor $\mathcal{F}^*(-\beta)\cong\mathcal{F}(-\beta)\times\partial D$ is a finite union of $2$-tori. Let $\mathcal{D}$ be a finite union of solid tori with boundary $\partial\mathcal{D}=\mathcal{F}^{\ast}(-\beta)$. Since $H_1(L;\mathbb{Z})\approx \mathbb{Z}$ and $H_2(L;\mathbb{Z}) = 0$, the map $\operatorname{ev}\colon\partial \mathcal{D}\to L$ extends to a map \[ \overline{\operatorname{ev}}\colon\mathcal{D}\to L. \] \begin{Lemma} After arbitrarily small perturbation of $J\in\mathcal{J}_L$ and Hamiltonian $H$, the fiber product \[ \mathcal{T} \ = \ \mathcal{D} \times_L \mathcal{M}^*(\beta) \ = \ (\overline{\operatorname{ev}}\times\operatorname{ev})^{-1}(\Delta_L) \] is transverse, and $\mathcal{T}$ is a compact $C^1$-smooth manifold with boundary diffeomorphic to $\mathcal{N}$. \end{Lemma} A proof is given in Lemma \ref{Lem:fiberprod2}. Note that, since we perturb $J \in \mathcal{J}_L$, the boundary $\partial\mathcal{T}$ is not strictly equal to the initially given $\mathcal{N}$: since the moduli spaces are slightly deformed, the fiber product forming the boundary is deformed as well. However, the perturbation can be made arbitrarily small and hence there is a diffeomorphism, arbitrarily close to the identity, from the initial $\mathcal{N}$ to the version of $\mathcal{N}$ obtained after perturbation. This gives in particular the following.
\begin{Corollary} The space $\mathcal{B} = \overline{\mathcal{F}}(0\beta) \cup_{\mathcal{N}} \mathcal{T}$ is a compact $C^1$-smooth manifold with $\partial\mathcal{B}=L$. \end{Corollary} \subsection{Tangent and index bundles of moduli spaces}\label{ssec:tangentinfo} For spaces of (Floer) holomorphic disks which are transversely cut out, the tangent bundle is described by index theory. In Section \ref{sec:closeddisks} we introduce a configuration space $\mathcal{X}$ of disks with boundary in $L$ modeled on a Sobolev space of maps $D\to\mathbb{C}^{n}$ with two derivatives in $L^{2}$, and view the Floer operator as a map \[ \bar\partial_{\rm F}\colon\mathcal{X}\times[0,\infty)\to\mathcal{Y}, \] where $\mathcal{Y}$ is the subspace of the Sobolev space of complex anti-linear maps $TD\to\mathbb{C}^{n}$ with one derivative in $L^{2}$ comprising elements that vanish along the boundary, and where the coordinate in $[0,\infty)$ parameterizes the Hamiltonian term of the Floer operator. We write $\mathcal{X}(j\beta)$ for the component of $\mathcal{X}$ containing maps that represent the homotopy class $j\beta$ when restricted to the boundary. Let $D(\bar\partial_{\rm F})\colon T\mathcal{X}\to\mathcal{Y}$ denote the linearization of $\bar\partial_{\rm F}$ and write \[ \left[ T\mathcal{X}(j\beta) \xrightarrow{D(\bar\partial_{\rm F})}\mathcal{Y} \right] \] for the index bundle of $D(\bar\partial_{\rm F})$ over $\mathcal{X}(j\beta)$. Similarly, we write $\bar\partial$ for the Cauchy-Riemann operator without Hamiltonian term and $D(\bar\partial)$ for its linearization. 
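For the reader's convenience we recall the standard description of such index bundles over compact families, which is all that is used below. If $X$ is a compact subset of $\mathcal{X}(j\beta)$ and $\psi\colon\mathbb{R}^{N}\to\mathcal{Y}$ is chosen so that $D(\bar\partial_{\rm F})\oplus\psi$ is surjective at every point of $X$ (such $\psi$ exist by compactness), then
\[
\left.\left[ T\mathcal{X}(j\beta) \xrightarrow{D(\bar\partial_{\rm F})}\mathcal{Y} \right]\right|_{X}
\;=\;
\big[\ker\big(D(\bar\partial_{\rm F})\oplus\psi\big)\big]\;-\;\big[\underline{\mathbb{R}}^{N}\big]
\;\in\; KO(X),
\]
where $\underline{\mathbb{R}}^{N}$ denotes the trivial bundle of rank $N$, and the resulting class is independent of the choice of stabilization $\psi$. This is the mechanism behind the stabilized operators appearing in Section \ref{Sec:LinearGluing}.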
\begin{Lemma} \label{Lem:Index} For a generic $J\in\mathcal{J}_{L}$ and Hamiltonian $H$, there are the following identities: \begin{enumerate} \item in $KO(\mathcal{F}(0\beta))$, \[ T\mathcal{F}(0\beta)\ \simeq \ \left.\left[ T\mathcal{X}(0\beta) \xrightarrow{D(\bar\partial_{\rm F})}\mathcal{Y} \right] \right|_{\mathcal{F}(0\beta)}, \] \item in $KO(\mathcal{F}^{\ast}(-\beta))$, \[ T\mathcal{F}^{\ast}(-\beta)\ \simeq \ \pi^{\ast}\left.\left[ T\mathcal{X}(-\beta) \xrightarrow{D(\bar\partial_{\rm F})}\mathcal{Y} \right] \right|_{\mathcal{F}(-\beta)}, \] where $\pi\colon\mathcal{F}^{\ast}(-\beta)\to\mathcal{F}(-\beta)$ is the projection that forgets the marked point. \item in $KO(\mathcal{M}^{\ast}(\beta))$, \[ T\mathcal{M}^{\ast}(\beta)\ \simeq \ \left.\left[ T\mathcal{X}(\beta) \xrightarrow{D(\bar\partial)}\mathcal{Y} \right] \right|_{\mathcal{M}^{\ast}(\beta)}, \] where we embed $\mathcal{M}^{\ast}(\beta)\subset \mathcal{X}(\beta)$ by choosing a smooth section of the bundle over $\mathcal{M}^{\ast}(\beta)$ whose fiber is the (contractible) automorphism group $G_1$ of the given holomorphic disk. \end{enumerate} \end{Lemma} \begin{proof} The identification of the $K$-theory class of the tangent bundle with the restriction of the corresponding linearized operator from the ambient space of smooth maps is a general feature of elliptic problems, cf.~for instance McDuff \cite[Proposition 4.3]{McDuff:K-class} for the Cauchy-Riemann case. \end{proof} The moduli spaces we work with all consist of smooth maps, and we will therefore often consider the restriction of index bundles from the configuration space to the space $\mathcal{X}_{\rm sm}$ of $C^{m}$ maps for some large integer $m$. \subsection{Index bundles and a piecewise linear isotopy} Assume that the dimension $n=\dim(L)$ is even, $n=2k>4$.
As a first step toward proving that the tangent bundle of $\mathcal{B}$ is trivial, we show that the index bundles in Lemma \ref{Lem:Index} can be understood through the corresponding bundles for the Lagrangian $S^{1}\times S^{2k-1}$, $W'\subset\mathbb{C}^{2k}$, which results from Lagrange surgery on the Whitney sphere $W$. \begin{Lemma} \label{Lem:StablyTrivial} The tangent bundle $TL$ of $L$ is stably trivial. \end{Lemma} \begin{proof} The manifold $L$ is the connected sum of a homotopy $2k$-sphere and the product $S^{1}\times S^{2k-1}$. Since each of these summands is stably parallelizable, see \cite[Theorem 3.1]{KM}, so is their connected sum. \end{proof} Any Lagrangian immersion $f\colon L \rightarrow \mathbb{C}^n$ defines a Gauss map to the Lagrangian Grassmannian $U(n)/O(n)$ that takes $q\in L$ to $df(T_qL)\in U(n)/O(n)$. Stabilizing, one obtains a stable Gauss map \begin{equation} \label{Eqn:Gauss} G_f\colon L \rightarrow U/O. \end{equation} If $f$ is the inclusion map, we sometimes write $G_L$ instead of $G_f$. \begin{Lemma} \label{Lem:GaussInterpolate} There is a one-parameter family of $C^0$-homeomorphisms $\phi_t\colon \mathbb{C}^{2k} \rightarrow \mathbb{C}^{2k}$ and a one-parameter family of continuous maps $\psi_t\colon S^1 \times S^{2k-1} \rightarrow U/O$, with the properties that \begin{enumerate} \item $\phi_0$ is the identity; \item $\phi_1(L) = W'$; \item $\psi_0=G_L$ and $\psi_1 = G_{W'} \circ \phi_1$. \end{enumerate} \end{Lemma} \begin{proof} Any homotopy $n$-sphere $\Sigma$, $n> 4$, admits a Morse function with exactly two critical points \cite{KM}, and hence admits the structure of a {\sc pl}-manifold {\sc pl}-homeomorphic to the standard sphere $S^n$. Consider the Lagrangian immersion $f\colon K\to\mathbb{C}^{2k}$ and the Whitney sphere $w\colon S^{2k}\to\mathbb{C}^{2k}$ as {\sc pl}-immersions. After an initial Hamiltonian isotopy of $\mathbb{C}^{2k}$ we may assume that $f$ and $w$ agree near their unique double points.
Let $B\subset\mathbb{C}^{n}$ be a small ball around the double point and let $\tilde B\subset S^{2k}$ denote $f^{-1}(B)=w^{-1}(B)$. Since the codimension of the immersions is $2k>3$ and $\mathbb{C}^{2k}-B$ is $(4k-2)$-connected, it follows from \cite{Hudson} that the {\sc pl}-embeddings $f,w\colon (S^{2k}-\tilde B,\partial \tilde B)\to (\mathbb{C}^{2k}-B,\partial B)$ are isotopic rel boundary through {\sc pl}-embeddings. The combinatorial isotopy extension theorem of \cite{HudsonZeeman} then implies that there is a global continuous $1$-parameter family of {\sc pl}-homeomorphisms $\phi_t\colon\mathbb{C}^{2k} \rightarrow \mathbb{C}^{2k}$, $0\le t\le 1$, with $\phi_0=\mathrm{id}$ and $\phi_1\circ f = w$. Consider now the stable Gauss maps $G_{f}$ and $G_{\phi_1\circ f}$ mapping a $2k$-sphere into $U/O$ and the associated stable tangent map of the obvious product Lagrangian immersion $f\times \iota\colon K \times \mathbb{R} \rightarrow \mathbb{C}^{2k} \times \mathbb{C}$, where $\iota\colon\mathbb{R}\to\mathbb{C}$ is the inclusion. Since the tangent bundle of the domain restricts as a globally trivial bundle to $K \times \{0\}$, the stabilized Gauss map, initially defined as a map of $K$ into $U/O$, lifts to a map $K\to U$. The stable homotopy groups of $U$ vanish in all even dimensions, and we conclude that $G_{f}$ and $G_{\phi_1\circ f}$ are homotopic. Since $L$ and $W'$ are obtained by Lagrange surgery on $f$ and $w$, respectively, and Lagrange surgery is a local operation near the double point, the lemma follows from the above. \end{proof} Lemma \ref{Lem:GaussInterpolate} shows that to understand the index bundle over the space $\mathcal{X}_{\rm sm}(L)$ of smooth disks with boundary on $L$, it will be sufficient to understand the corresponding index bundle over $\mathcal{X}_{\rm sm}(W')$.
In Section \ref{Sec:IndexBundle} we study this bundle explicitly using the fact that $\mathcal{X}_{\rm sm}(W')$ is homotopy equivalent to the free loop space of $W'$, and working with a skeleton of the loop space obtained from the Morse-Bott energy functional of the round metric on $S^{2k-1}$. Our main results are summarized in the following lemma: \begin{Lemma}\label{Lem:summarytrivindex} The index bundle \[ \left[ T\mathcal{X}(j\beta) \xrightarrow{D(\bar\partial)}\mathcal{Y} \right] \] is trivial over the $(6k-7)$-skeleton of $\mathcal{X}_{\rm sm}(W')$. \end{Lemma} Lemmas \ref{Lem:Index}, \ref{Lem:GaussInterpolate}, and \ref{Lem:summarytrivindex} imply that a choice of a homotopy class of stable trivialization of the index bundle over the $(6k-7)$-skeleton of $\mathcal{X}_{\rm sm}(W')$ induces homotopy classes of stable trivializations of the tangent bundles to each of $\mathcal{F}(0\beta)$, $\mathcal{F}^{\ast}(-\beta)$, and $\mathcal{M}^{\ast}(\beta)$. In the next section we describe how these trivializations interact near the Floer-Gromov boundary of $\mathcal{F}(0\beta)$. \subsection{Coherent trivialization} \label{sec:cohertriv} Let $\zeta$ be a boundary puncture on $\partial D$. Consider the conformal map $\psi_{\zeta}\colon D-\zeta\to H$, where $H=\{w=u+iv\colon v\ge 0\}$ is the upper half plane, \[ \psi_{\zeta}(z)=-i\;\frac{\zeta^{-1}z+1}{\zeta^{-1}z-1}, \] which takes $\zeta$ to $\infty$, $i\zeta$ to $-1$, and $-i\zeta$ to $1$. For $R>0$, consider the conformal map \[ \kappa_R\colon [0,\infty)\times[0,1]\to H,\quad \kappa_R(\tau+it)= R e^{\pi(\tau+it)}. \] Let $E(R)\subset H$ be its image. Let $h$ be a metric on $H$ which agrees with the metric $du^{2}+dv^{2}$ in the disk of radius $2$, which agrees with the metric $d\tau^{2}+dt^{2}$ in $\tau+it$ coordinates on $E(3)$, and which interpolates smoothly between the two in the annular region $E(2)-E(3)$. 
Thus $h$ is a metric on $H\approx D-\zeta$ in which the boundary puncture has a neighborhood which is a strip-like end. For $\eta\in\mathbb{R}$, let $\mathbf{e}_{\eta}\colon H\to\mathbb{R}$ be the function \begin{equation} \mathbf{e}_{\eta}(w)= \begin{cases} 1 &\text{ for }w\in H-E(3),\\ e^{\eta|\tau|} &\text{for }w=\tau+it\in[0,\infty)\times[0,1]\approx E(3). \end{cases} \end{equation} In Section \ref{sec:punctures}, we construct configuration spaces for punctured (Floer) holomorphic disks using weight functions as above with $\eta=\delta\in(0,\pi)$. The configuration spaces are now modeled on the direct sum of a weighted Sobolev space with two derivatives in $L^{2}$ and a finite dimensional space of cut-off constant solutions supported near $\infty$. We denote the corresponding configuration space by $\mathcal{X}_{\delta}$, and we write $\mathcal{Y}_{\delta}$ for the corresponding Sobolev space of complex anti-linear maps, which is the target of the $\bar\partial$-operator. The $C^{1}$-structures induced on moduli spaces from $\mathcal{X}_{\delta}$ and from $\mathcal{X}$ agree up to canonical diffeomorphism. In this setting we introduce a linear gluing operation as follows. Recall that $L$ is assumed real analytic and that $J$ is standard near $L$. There is a natural map $\operatorname{ev}\colon \mathcal{X}_{\delta}\to L$ which is given by evaluation at the puncture. Let $\mathcal{X}_{\delta}(j\beta)$ denote the component of the space $\mathcal{X}_{\delta}$ with boundary in homotopy class $j\beta$. Consider two maps $u_1\in\mathcal{X}_{\delta}(j_1\beta)$ and $u_2\in\mathcal{X}_{\delta}(j_2\beta)$ with $\operatorname{ev}(u_{1})=\operatorname{ev}(u_{2})=q\in L$. Then there is $\rho_0>0$ such that each $u_j$ maps $[\rho_0,\infty)\times[0,1]$ into a standard coordinate neighborhood of $q$. Write $H_{j}\approx H$, $j=1,2$, for the domain of $u_j$. Here we think of the strip neighborhood of $\infty$ in $H_2$ as $[0,\infty)\times[0,1]$, as usual.
For simpler formulas we think of the neighborhood of $\infty$ in $H_1$ (where $\infty\in H_1$ corresponds to the puncture $\zeta\in\partial D$) as $(-\infty,0]\times[0,1]$. Note that for both domains the weight function takes the form $e^{\delta|\tau|}$: for $\tau+it\in[0,\infty)\times[0,1]$ on $H_2$ and for $\tau+it\in(-\infty,0]\times[0,1]$ on $H_1$. Let $\rho\ge 2$ and write \begin{align*} H_{1;\rho} &= H_1-((-\infty,-\rho)\times[0,1]),\\ H_{2;\rho} &= H_2-((\rho,\infty)\times[0,1]). \end{align*} Define \begin{equation}\label{Eq:H_1+H_2} H_{1}\#_{\rho} H_{2}= (H_{1;\rho}\cup H_{2;\rho})/\sim, \end{equation} where \[ \rho+it\;\sim\;-\rho+it,\quad\quad -\rho+it\in H_1\text{ and }\rho+it\in H_2. \] Then $H_{1}\#_{\rho} H_{2}$ comes equipped with a metric that agrees with the given metrics on $H_{1;\rho}$ and $H_{2;\rho}$ (which are the standard Euclidean metrics on the strip neighborhoods of $\infty$). Furthermore, $H_1\#_{\rho} H_2$ contains the finite strip region in the middle, \begin{equation}\label{Eq:midstrip} [-\rho,\rho]\times[0,1]\approx [0,\rho]\times[0,1]\cup[-\rho,0]\times[0,1]\subset H_1\#_\rho H_2. \end{equation} Here the weight functions of $H_{1;\rho}$ and $H_{2;\rho}$ fit together to a weight function on $H_1\#_{\rho} H_2$ given by $\tau+it\mapsto e^{\delta(\rho-|\tau|)}$ for $\tau+it\in[-\rho,\rho]\times[0,1]$, i.e.~in the middle strip. There is a preglued map $u_{1}\#_{\rho}u_{2}\colon H_{1}\#_{\rho} H_2 \rightarrow \mathbb{C}^n$, which agrees with $u_j$ on $H_{j;\rho-1}$ and interpolates between the two maps in the remaining square inside the standard neighborhood of $q$. Furthermore, both the Lagrangian boundary conditions for the linearized operators $D(\bar\partial_{j})$, $j=1,2$, of the two disks and the operators themselves match where they are glued, and hence induce an operator $D(\bar\partial_{\rho})$ and a Lagrangian boundary condition along $H_{1}\#_{\rho}H_{2}$.
We will view this linearized operator as acting on the direct sum of a Sobolev space of $\mathbb{C}^{n}$-valued functions with two derivatives in $L^{2}$ weighted by $\mathbf{e}_{\delta}$ and which vanish at the point $0\in[-\rho,\rho]\times[0,1]$, and a $2k$-dimensional space of cut-off constant solutions $V_{\rm sol}$ supported in $[-\rho,\rho]\times[0,1]$. It is straightforward to check that \[ \index(D(\bar\partial_{\rho}))=\index(D(\bar\partial_{1}))+\index(D(\bar\partial_{2}))-2k. \] Since we are interested in index bundles we consider stabilizations of the linearized operators. More precisely, let $D(\bar\partial)\oplus \mathbb{R}^{N}$ denote the operator $D(\bar\partial)$ augmented by a map $\psi\colon\mathbb{R}^{N}\to \mathcal{Y}_{\delta}$, where $\psi(v)$ has compact support in the interior of $H$ for any $v$. Let $u_1\in\mathcal{X}_{\delta}(-\beta)$ and $u_2\in\mathcal{X}_{\delta}(\beta)$ and consider stabilizations $D(\bar\partial_{1})\oplus\mathbb{R}^{N_1}$ and $D(\bar\partial_{2})\oplus\mathbb{R}^{N_2}$ that are surjective. Then for $\rho$ sufficiently large, since both $\psi_j\colon \mathbb{R}^{N_j}\to\mathcal{Y}_{\delta}$ map to elements with compact support in the interior of $H_j$, $j=1,2$, we get an induced stabilization $D(\bar\partial_{\rho})\oplus\mathbb{R}^{N_1}\oplus\mathbb{R}^{N_2}$. Consider the further stabilization $D(\bar\partial_{\rho})\oplus\mathbb{R}^{N_1}\oplus\mathbb{R}^{N_2}\oplus\mathbb{R}^{2k}$ obtained by adding an extra copy of $V_{\rm sol}\approx \mathbb{R}^{2k}$ to $D(\bar\partial_{\rho})\oplus\mathbb{R}^{N_1}\oplus\mathbb{R}^{N_2}$. In Section \ref{Sec:LinearGluing} we establish the following result. \begin{Lemma}\label{Lem:L2project} For all $\rho$ that are sufficiently large, $L^{2}$-projection gives an isomorphism \[ \ker(D(\bar\partial_{\rho})\oplus\mathbb{R}^{N_1}\oplus\mathbb{R}^{N_2}\oplus\mathbb{R}^{2k})= \ker(D(\bar\partial_{1})\oplus\mathbb{R}^{N_1})\oplus \ker(D(\bar\partial_{2})\oplus\mathbb{R}^{N_2}). 
\] \end{Lemma} Lemma \ref{Lem:L2project} implies that stable trivializations of index bundles on the two pieces of a family of broken disks induce a stable trivialization of the index bundle after pregluing. We formalize this with a notion of \emph{coherence} of trivializations. Write $\mathbf{I}_{j\beta}^{\ast}$ for the index bundle over $\mathcal{X}_{\delta}(j\beta)$ and $\mathbf{I}_{j\beta}$ for its restriction to the subspace $\mathcal{X}_{\delta}(1;j\beta)$ of disks with puncture at $1\in\partial D$. Assume that, over sufficiently large compact subsets of the configuration spaces $\mathcal{X}_{\delta}$, we have fixed stable trivializations $Z_{-1}^{\ast}$ of $\mathbf{I}^{\ast}_{-\beta}$, $Z_{1}$ of $\mathbf{I}_{\beta}$, and $Z_{0}$ of $\mathbf{I}_{0\beta}$, together with stabilizations of the corresponding operators as described before Lemma \ref{Lem:L2project}, so in particular the corresponding families of operators are everywhere surjective. Consider a compact CW-complex $Q$ with maps $p\colon Q\to \mathcal{X}_{\delta}(-\beta)$ and $q\colon Q\to\mathcal{X}_{\delta}(1;\beta)$ such that $\operatorname{ev}\circ \, p=\operatorname{ev}_1\circ \, q$. Assume furthermore that the map $p$ factors as follows: \begin{equation} \label{Eqn:Qfactors} \begin{CD} Q @>{p'}>> A\times S^{1} @>{a\times\mathrm{id}}>> \mathcal{X}_{\delta}(-\beta)=\mathcal{X}_{\delta}(1;-\beta)\times\partial D \end{CD}, \end{equation} where $A$ is a compact CW-complex. Write $\operatorname{PG}\colon Q\times[\rho_0,\infty)\to\mathcal{X}_{\delta}(0\beta)$ for the map that applies pregluing to the pair $(p,q)$ and consider the pull-back bundle $\operatorname{PG}^{\ast}\mathbf{I}_{0\beta}$ over $Q$. By Lemma \ref{Lem:L2project} we find that there are two stable trivializations of this bundle, namely those given by \[ p^{\ast}Z_{-1}^{\ast}\oplus q^{\ast}Z_{1} \quad \textrm{and} \quad \operatorname{PG}^{\ast}(Z_{0}\oplus Z_{TL}), \] where $Z_{TL}$ is a fixed trivialization of $TL$.
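That the two stable trivializations just described compare bundles of the same rank is a consequence of the index bookkeeping behind Lemma \ref{Lem:L2project}; we record the (immediate) computation, using the index formula for $D(\bar\partial_{\rho})$ above:
\begin{align*}
\index\big(D(\bar\partial_{\rho})\oplus\mathbb{R}^{N_1}\oplus\mathbb{R}^{N_2}\oplus\mathbb{R}^{2k}\big)
&=\big(\index(D(\bar\partial_{1}))+\index(D(\bar\partial_{2}))-2k\big)+N_1+N_2+2k\\
&=\index\big(D(\bar\partial_{1})\oplus\mathbb{R}^{N_1}\big)+\index\big(D(\bar\partial_{2})\oplus\mathbb{R}^{N_2}\big).
\end{align*}
Thus, for surjective stabilized operators, the kernels on the two sides of Lemma \ref{Lem:L2project} have equal dimension, which is the numerical counterpart of the $L^{2}$-projection isomorphism.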
\begin{Definition}\label{Def:CoherTriv} The triple of stable trivializations $Z_{-1},Z_{1}, Z_{0}$ is \emph{$(d',d)$-coherent} if the two stable trivializations $p^{\ast}Z_{-1}^{\ast}\oplus q^{\ast}Z_{1}$ and $\operatorname{PG}^{\ast}(Z_{0}\oplus Z_{TL})$ are homotopic for all $Q$ and $A$ as above with $\dim(A)\le d'$ and $\dim(Q)\le d$. \end{Definition} The rather roundabout formulation of Definition \ref{Def:CoherTriv} avoids the need to discuss transversality of fiber products. The key result, proved in Section \ref{Sec:IndexBundle}, is then the following. As always, by a stable trivialization of a given index bundle we mean one over some sufficiently large compact subset. \begin{Lemma} \label{Lem:CoherTrivExist} Let $d\leq 6k-7$. For any stable trivialization $Z_{-1}^{\ast}$ of $\mathbf{I}^{\ast}_{-\beta}$, there are stable trivializations $Z_{1}$ of $\mathbf{I}_{\beta}$ and $Z_{0}$ of $\mathbf{I}_{0\beta}$ such that $(Z_{-1}^{\ast},Z_{1},Z_{0})$ is $(1,d)$-coherent. \end{Lemma} \begin{Remark} Coherent trivializations refine the usual idea of ``coherent orientations'', but involve framings of the index bundles rather than of their associated determinant lines. The axioms satisfied by Givental's quantum $K$-theory, cf.~\cite[Proposition 11]{Lee}, also rely on a version of Lemma \ref{Lem:L2project}, to ensure that the virtual Euler characteristics of the 2-term perfect obstruction theories over moduli spaces of stable maps are compatible under degeneration. However, in the situations studied by Givental and Lee, the relevant index-type bundles are not stably trivial. \end{Remark} \subsection{Proofs of the main results} We prepare for the proofs by presenting two general lemmas. Let $\cong_{\mathrm{s}}$ denote stable isomorphism of vector bundles $E$ and $F$ over a space $X$, so $E \cong_{\mathrm{s}} F$ if and only if the bundles $E$ and $F$ define the same class in $\widetilde{KO}(X)$. Let $\Sigma X$ denote the suspension of $X$, so $\Sigma X = S^1 \wedge X$.
\begin{Lemma} \label{Lem:KOobs} Suppose $X = U \cup V$ is a decomposition of a cell complex into two subcomplexes. Suppose $E, F\rightarrow X$ are vector bundles with the property that $E|_U \cong_\mathrm{s} F|_U$ and $E|_V \cong_\mathrm{s} F|_V$. Then the obstruction to stable isomorphism of $E$ and $F$ over $X$ is a class in the $\widetilde{KO}$-theory of the suspension $\Sigma(U\cap V)$. \end{Lemma} \begin{proof} Immediate from the Mayer-Vietoris sequence in $\widetilde{KO}$. \end{proof} \begin{Lemma} \label{Lem:KO} $\widetilde{KO}(\Sigma T^2) = \mathbb{Z}_2 \oplus \mathbb{Z}_2$, with the summands detected by the $Spin$ structure on the circle factors of $T^2$. \end{Lemma} \begin{proof} The suspension $\Sigma T^2$ contains a wedge of 2-spheres, from suspending the 1-skeleton of the torus, and a 3-sphere, from suspending the 2-cell of $T^2$. The result then follows from the Mayer-Vietoris theorem in $\widetilde{KO}$-theory. \end{proof} \begin{proof}[Proof of Theorem \ref{Thm:Main}] Consider the (stable) tangent bundle of $\mathcal{B}$ restricted to the pieces of $\mathcal{B}$: \begin{enumerate} \item On $\overline{\mathcal{F}}(0\beta)$, $T\mathcal{B} = \mathbf{I}_{0\beta}$. \item In the collar on the boundary $\mathcal{N} \times [0,\infty)$ the tangent bundle is stably given by $\mathbf{I}_\beta + \mathbf{I}^*_{-\beta} - TL$. \item Over the filling $\mathcal{T}$, a fiber product over the union of solid tori $\mathcal{D}$, the tangent bundle is stably given by $T\mathcal{D} + \mathbf{I}_\beta - TL$. \end{enumerate} We compare the stably trivialized virtual bundles on the overlap of $(2)$ and $(3)$: \[ \mathbf{I}_\beta + \mathbf{I}^*_{-\beta} - TL \quad \textrm{and} \quad T\mathcal{D} + \mathbf{I}_\beta - TL \] near the boundary of the fiber product \[ \mathcal{D} \times_L \mathcal{M}^*(\beta). \] We fix trivializations of the summands $\mathbf{I}_\beta$ and $TL$, and regard them as being extended trivially over $\mathcal{D}$. 
The remaining summands $\mathbf{I}^*_{-\beta}$ and $T\mathcal{D}$ are both pulled back from a 2-dimensional torus inside $\mathcal{N}$, which brings us into the situation of Lemma \ref{Lem:KOobs} with $U \cap V \approx T^2$. More explicitly, the $1$-skeleton of $\mathcal{X}_{\rm sm}(-\beta)$ corresponds to the non-trivial loop in $L$, and the preimage of this 1-skeleton defines a torus $T^2_{-\beta}$ in $\mathcal{X}_{\rm sm}(-\beta)\times\partial D$. Under evaluation into $L$, and in a suitable homology basis, exactly one of the two circle factors of this torus bounds. We pick a trivialization of the bundle $\mathbf{I}^*_{-\beta}$ which corresponds to the bounding spin structure on the bounding loop of $T^2_{-\beta}$ and which is arbitrary on the non-bounding loop of $T^2_{-\beta}$. This trivialization gives a stable trivialization of $T\mathcal{D}$ near the boundary of $\mathcal{D}$, which we claim extends to all of $\mathcal{D}$. Indeed, Lemma \ref{Lem:KO} implies that the only condition for this is that the spin structure on any bounding loop is bounding. Since $k>2$, the space $\overline{\mathcal{F}}(0\beta)$ has dimension $2k+1 < 6k-7$. Lemma \ref{Lem:CoherTrivExist} then implies that there is a stable trivialization of $T\overline{\mathcal{F}}(0\beta)$ which is compatible with (meaning homotopic to) the stable trivialization induced by the trivializations of $\mathbf{I}_{\beta}$, $\mathbf{I}^{\ast}_{-\beta}$, and $TL$ near the outer boundary component $\mathcal{N}$. It follows that the stable trivialization over the collar region extends both to the Floer moduli space $\mathcal{F}_{\rho_0}(0\beta)$ and to the capping space $\mathcal{T}$, so we conclude that $T\mathcal{B}$ is stably trivial. Since $\partial\mathcal{B}=L\ne \varnothing$, stable triviality of $T\mathcal{B}$ implies that $\mathcal{B}$ is parallelizable, cf.~\cite[Lemma 3.5]{KM}. It remains to show that if $L$ bounds a parallelizable manifold then so does the original homotopy sphere $K$.
By construction, $L$ was obtained from $K$ by adding a $1$-handle, so $K$ may be obtained from $L$ by adding a canceling 2-handle. There are two possibilities: the attaching circle of this 2-handle is trivial in $H_1(\mathcal{B};\mathbb{Z}_2)$, meaning the circle mod 2 bounds, or it is non-trivial. In the former case the trivialization of $T\mathcal{B}$ along the attaching sphere necessarily corresponds to the bounding spin structure, and hence extends over the 2-handle. In the latter case, up to homotopy we may choose the trivialization of $T\mathcal{B}$ freely over the attaching circle, hence may choose it to correspond to the null-cobordant spin structure, and hence the trivialization extends over the 2-handle in this case as well. It follows that $K$ bounds a parallelizable manifold. \end{proof} The proofs of the corollaries use the following result. \begin{Lemma}\label{Lem:forCor} Let $M \subset \mathbb{C}^{8k}$ be a monotone Lagrangian submanifold {\sc pl}-homeomorphic to $S^{1}\times S^{8k-1}$ and of Maslov number $8k$. Then there is a {\sc pl}-isotopy $\phi_t\colon \mathbb{C}^{8k}\to\mathbb{C}^{8k}$ such that $\phi_0=\mathrm{id}$ and $\phi_1(M)=W'$, the surgery on the Whitney sphere, and a homotopy of stable Gauss maps $\psi_t\colon M\to U/O$ such that $\psi_0=G_M$ and $\psi_1=G_{W'}\circ\phi_1$. \end{Lemma} \begin{proof} The existence of $\phi_t$ follows from a repetition of the corresponding argument in Lemma \ref{Lem:GaussInterpolate}. The homotopy of stable Gauss maps can then be constructed as follows. Since $\pi_{8k-1}(U/O)=0$, we can find the desired homotopy over $\{pt\}\times S^{8k-1}$. The Maslov number then allows us to extend the homotopy further over $S^{1}\vee S^{8k-1}$. What remains is the extension over an $(8k+1)$-cell. Using the triviality of the tangent bundle we lift the map to $U$ and use $\pi_{8k}(U)=0$. 
\end{proof} \begin{proof}[Proof of Corollary \ref{Cor:First}] Lemma \ref{Lem:forCor} implies that the index bundles determined by $L$ are homotopically the same as those determined by $W'$. The proof of Theorem \ref{Thm:Main} then shows that $L$ bounds a parallelizable manifold. \end{proof} \begin{proof}[Proof of Corollary \ref{Cor:Main}] Recall that $P= S^1 \times S^{8k-1}$. Suppose that $\Sigma$ is an exotic $8k$-sphere with the property that \[ (T^*(\Sigma \# P), d\theta_{\rm can}) \ \cong \ (T^*P, d\theta_{\rm can}), \] i.e.~that the two cotangent bundles are symplectomorphic when equipped with their canonical symplectic structures. Composing such a symplectomorphism with a suitable translation in the $T^*S^1$-factor of $T^*P = T^*S^1 \times T^*S^{8k-1}$, we obtain a symplectomorphism $T^*(\Sigma \# P) \rightarrow T^*P$ of vanishing flux, which is therefore Hamiltonian isotopic to an exact symplectomorphism. It then follows that $\Sigma \# P$ admits an exact Lagrangian embedding \[ \sigma\colon \ \Sigma \# P \ \hookrightarrow \ T^*P \] in the homology class of the zero-section. Results of Abouzaid and Kragh imply that any closed exact Lagrangian in the cotangent bundle has vanishing Maslov class \cite[Proof of Theorem E.2]{Kragh}. We now view $W' \approx P\hookrightarrow \mathbb{C}^{8k}$ as an embedded Lagrangian submanifold, via surgery on the Whitney sphere. By composing $\sigma$ with a conformal symplectomorphism which radially shrinks the fibers, if necessary, we can assume that it has image inside the disk bundle $D_{\varepsilon}(T^*P)$ for any given $\varepsilon>0$ (measured with respect to an arbitrary choice of metric). By Weinstein's theorem, such small disk bundles embed symplectically into $\mathbb{C}^{8k}$, so we obtain a Lagrangian embedding \[ \sigma'\colon \Sigma \# P \ \hookrightarrow \ \mathbb{C}^{8k}.
\] Since $\sigma$ had Maslov class zero, the Maslov class of $\sigma'$ is obtained by restriction from the embedding $D_{\varepsilon}(T^*P) \subset \mathbb{C}^{8k}$, hence $\Sigma \# P \subset \mathbb{C}^{8k}$ has Maslov number $8k$ (equal to that of $P=W'$). Corollary \ref{Cor:First} therefore implies that $\Sigma$ bounds a parallelizable manifold, which (via Kervaire and Milnor) gives Corollary \ref{Cor:Main}. \end{proof} \section{Moduli spaces -- set up and basic properties} \label{Sec:basicsetup} In this section we present the basic functional-analytic setup for moduli spaces. Much of the material is standard and appears (with small variations) in many places; see e.g.~\cite{EES1}. We include it in order to make the $C^{1}$-structures on moduli spaces, which are of central importance to our main results, maximally explicit. Proofs, however, will sometimes be sketched. Throughout this section we will write $L\subset\mathbb{C}^{n}$ for a Lagrangian submanifold of minimal Maslov number $n$ diffeomorphic to $(S^{1}\,\widetilde{\times}\,S^{n-1})\#\Sigma$, where $S^{1}\,\widetilde{\times}\,S^{n-1}$ denotes the mapping torus of the antipodal map of $S^{n-1}$ and $\Sigma$ is a homotopy $n$-sphere. We write $\beta\in\pi_1(L)=H_1(L)\cong \mathbb{Z}$ for the generator of Maslov index $n$. Note that the Lagrangian submanifold $L_+$ in Lemma \ref{Lem:Maslovnumber} constructed by Lagrange surgery from an immersion $f\colon K\to\mathbb{C}^{2k}$ with exactly one transverse double point has the properties of $L$, with homotopy sphere $\Sigma$ diffeomorphic to $K$, see Corollary \ref{Cor:LandKdiffeo}, and that in even dimensions the mapping torus is a product. \subsection{Geometric preliminaries} \label{sec:Geomprel} We discuss two constructions in the underlying geometry of $L\subset\mathbb{C}^{n}$ that will simplify our functional-analytic treatment of (Floer-)holomorphic disks. Choose a real analytic structure on $L$.
\begin{Lemma}\label{Lem:reanbdry} Real analytic Lagrangian embeddings of $L$ into $\mathbb{C}^{n}$ are dense in the space of all Lagrangian embeddings. \end{Lemma} \begin{proof} Consider a Lagrangian embedding $\phi\colon L\to\mathbb{C}^{n}$. Let $\mathfrak{a}=\int_{\beta} \phi^{\ast}(y\cdot dx)$. Then $L$ admits a Legendrian lift into $\mathbb{C}^{n}\times (\mathbb{R}/\mathfrak{a}\mathbb{Z})$ with contact form $dz-y\cdot dx$. Furthermore, the lift is unique up to translation in the added circle direction. After an arbitrarily small perturbation of $\phi$, the projection of the Legendrian lift of $\phi(L)$ to $\mathbb{R}^{n}\times(\mathbb{R}/\mathfrak{a}\mathbb{Z})$ is a front with front-generic singularities. Such fronts determine the corresponding Lagrangian embeddings uniquely (using $y_j=\frac{\partial z}{\partial x_j}$). The fronts arising from approximations of the projection by real analytic maps $L\to\mathbb{R}^{n}\times(\mathbb{R}/\mathfrak{a}\mathbb{Z})$ are still generic, hence give real analytic Lagrangian embeddings arbitrarily near $\phi$. \end{proof} Let $B_{\mathbb{C}^{n}}(r_q)$ denote the ball of radius $r_q>0$ around $0\in\mathbb{C}^{n}$, and let $B_{\mathbb{R}^{n}}(r_q)=B_{\mathbb{C}^{n}}(r_q)\cap \mathbb{R}^{n}$. If $L\subset \mathbb{C}^{n}$ is a real analytic Lagrangian submanifold then any point $q\in L$ has a neighborhood $(W_q, L \cap W_q) \subset (\mathbb{C}^{n},L)$ that is biholomorphic via $\psi_q$ to $(B_{\mathbb{C}^{n}}(r_q), B_{\mathbb{R}^n}(r_q))$. Furthermore, by compactness of $L$, we may take $r_q=r'>0$, where $r'$ is independent of $q$. We will use the following notation below. Take $0<r<\frac{r'}{\sqrt{2}}$ and define a product neighborhood of $q\in L$: \begin{equation}\label{Eq:disk^2nbhd} U_q=\psi_q\left(B_{\mathbb{R}^{n}}(r)\times iB_{\mathbb{R}^{n}}(r)\right). \end{equation} We next review a construction in \cite[Section 5.2]{EES1}. Consider the restriction $T_L(TL)$ of the tangent bundle of $TL$ to the $0$-section $L\subset TL$.
Let $J_{L}\colon T_L(TL)\to T_L(TL)$ denote the natural complex structure which maps a horizontal vector tangent to $L\subset TL$ at $q\in L$ to the corresponding vector tangent to the fiber $T_qL\subset TL$ at $0\in T_qL$. Using Taylor expansion in the fiber directions, it is straightforward to check that the inclusion $\iota\colon L\to \mathbb{C}^{n}$ admits an extension $P\colon U\to \mathbb{C}^{n}$, where $U\subset TL$ is a neighborhood of the $0$-section, such that $P$ is an immersion with $J_0\circ dP = dP\circ J_L$ along the $0$-section $L\subset U$. (Recall that $J_0$ denotes the standard complex structure on $\mathbb{C}^n$.) We next construct a metric $\hat g$ on a neighborhood of the $0$-section in $TL$. Fix a Riemannian metric $g$ on $L$. Let $v\in TL$ with $\pi(v)=q$. Let $X$ be a tangent vector to $TL$ at $v$. The Levi-Civita connection of $g$ gives the decomposition $X=X_{H}+X_{V}$, where $X_{V}$ is a vertical tangent vector, tangent to the fiber, and where $X_{H}$ lies in the horizontal subspace at $v$ determined by the connection. Since $X_V$ is a vector in $T_qL$ with its endpoint at $v\in T_qL$ we can translate it linearly to the origin $0\in T_qL$; we also use $X_V$ to denote this translated vector. Write $\pi X\in T_qL$ for the image of $X$ under the differential of the projection $\pi\colon TL\to L$. Let $R$ denote the curvature tensor of $g$ and define the field $\hat g$ of quadratic forms along $TL$ as follows: \begin{equation} \hat{g}(v)(X,Y)=g(q)(\pi X,\pi Y)+g(q)(X_{V},Y_{V})+g(q)(R(\pi X,v)\pi Y,v), \end{equation} where $v\in TL$, $\pi(v)=q$, and $X,Y\in T_v(TL)$. \begin{Lemma} There exists $\delta>0$ such that $\hat g$ is a Riemannian metric on $\{v\in TL\colon g(v,v)< \delta\}$. In this metric the $0$-section $L$ is totally geodesic and the geodesics of $\hat g$ in $L$ are exactly those of the original metric $g$.
Moreover, if $\gamma$ is a geodesic in $L$ and $X$ is a vector field in $T(TL)$ along $\gamma$ then $X$ is a Jacobi field if and only if $J_L X$ is. \end{Lemma} \begin{proof} This is \cite[Proposition 5.3]{EES1}. \end{proof} Consider the immersion $P\colon U\to \mathbb{C}^{n}$, where $U$ is a neighborhood of the $0$-section in $TL$, with $J_0\circ dP=dP\circ J_L$. The push forward under $dP$ of the metric $\hat g$ gives a metric on a neighborhood of $L$ in $\mathbb{C}^{n}$. Extend the metric $\hat g$ to a metric, still denoted $\hat g$, on all of $\mathbb{C}^{n}$ which we take to agree with the standard flat metric on $\mathbb{C}^{n}$ outside a (slightly larger) neighborhood of $L$. We write $\exp\colon T\mathbb{C}^{n}\to\mathbb{C}^{n}$ for the exponential map in the metric $\hat g$. Since $L$ is totally geodesic for $\hat g$, $\exp$ takes tangent vectors to $L$ to points on $L$. \subsection{Configuration spaces for closed disks} \label{sec:closeddisks} In this section we describe a Banach manifold set-up for the study of \eqref{Eqn:CR1}, and derive basic properties of the solution spaces. We point out that if $X_H=0$, the operator in the right hand side of \eqref{Eqn:CR1} reduces to the standard $\bar\partial$-operator, $du^{0,1}=\bar\partial u$, whose solutions are $J$-holomorphic disks. Our framework covers the cases $X_H = 0$ and $X_H \neq 0$ simultaneously. Consider the unit disk $D\subset \mathbb{C}$ with the standard metric and let $\gamma_r$, $r\in[0,\infty)$ and $H$ be as in Section \ref{ssec:ham}. Let $\mathcal{H}^{s}(D,\mathbb{C}^{n})$ denote the Sobolev space of $\mathbb{C}^{n}$-valued functions with $s$ derivatives in $L^{2}$ with the Sobolev $s$-norm (viewed as a vector space over $\mathbb{R}$). More specifically, we consider the closure in the $s$-norm on $D$ of functions that extend smoothly to the double of $D$. 
We write $\mathcal{H}^{s}(\partial D,\mathbb{C}^{n})$ for the Sobolev space of functions on the boundary $\partial D$ with respect to the induced metric. If $s>\frac12$ then there is a continuous restriction (or trace) map \[ \mathcal{H}^{s}(D,\mathbb{C}^{n})\to\mathcal{H}^{s-\frac12}(\partial D, \mathbb{C}^{n}),\quad u\mapsto u|_{\partial D}. \] Recall that elements in $\mathcal{H}^{s}(D,\mathbb{C}^{n})$ (respectively $\mathcal{H}^{s}(\partial D,\mathbb{C}^{n})$) are uniquely represented by $C^{l}$-functions, where $l$ is the largest integer such that $l<s-1$ (respectively such that $l<s-\frac{1}{2}$). The metrics on $D$ and on $\mathbb{C}^{n}$ give metrics on all bundles of $\mathbb{C}^{n}$-valued tensors on $D$. Let $\mathrm{Hom}(TD,\mathbb{C}^{n})$ and $\mathrm{Hom}^{0,1}(TD,\mathbb{C}^{n})$ denote the bundles over $D$ with fiber at $z\in D$ equal to the space of linear maps $T_{z}D\to\mathbb{C}^{n}$ and $(i,J)$-complex anti-linear maps $T_{z}D\to\mathbb{C}^{n}$, respectively. We write $\mathcal{H}^{s}(D,\mathrm{Hom}(TD,\mathbb{C}^{n}))$ and $\mathcal{H}^{s}(D,\mathrm{Hom}^{0,1}(TD,\mathbb{C}^{n}))$ for the Sobolev spaces of sections with $s$ derivatives in $L^{2}$ of the bundles indicated. We will also use the subspace $\dot\mathcal{H}^{1}(D,\mathrm{Hom}^{0,1}(TD,\mathbb{C}^{n}))$ of sections that vanish on the boundary: \begin{equation}\label{Eq:sblvdbar=0onbdry} \dot\mathcal{H}^{1}(D,\mathrm{Hom}^{0,1}(TD,\mathbb{C}^{n}))= \left\{ A\in \mathcal{H}^{1}(D,\mathrm{Hom}^{0,1}(TD,\mathbb{C}^{n}))\colon A|_{\partial D}=0 \right\}. \end{equation} The configuration space for \eqref{Eqn:CR1} is the subset $\mathcal{X}\subset\mathcal{H}^{2}(D,\mathbb{C}^{n})$ of all $u\in\mathcal{H}^{2}(D,\mathbb{C}^{n})$ that satisfy the following conditions: \begin{enumerate} \item $u|_{\partial D}(z)\in L\subset\mathbb{C}^{n}$ for all $z\in\partial D$. \item $(du)^{0,1}|_{\partial D}=0\in\mathcal{H}^{\frac{1}{2}}(\partial D,\mathrm{Hom}^{0,1}(TD,\mathbb{C}^{n}))$. 
\end{enumerate} \begin{Lemma}\label{Lem:expmapclosed} The subset $\mathcal{X}\subset\mathcal{H}^{2}(D,\mathbb{C}^{n})$ is closed and is a Banach submanifold. The tangent space $T_{u}\mathcal{X}$ at $u$ is the subspace of vector fields $v\in\mathcal{H}^{2}(D,\mathbb{C}^{n})$ that satisfy \begin{enumerate} \item $v(z)\in T_{u(z)}L$ for all $z\in\partial D$ and \item $(\nabla v)^{0,1}|_{\partial D}=0\in\mathcal{H}^{\frac{1}{2}}(\partial D,\mathrm{Hom}^{0,1}(TD,\mathbb{C}^{n}))$, \end{enumerate} where $\nabla$ denotes the Levi-Civita connection of the metric $\hat g$, see Section \ref{sec:Geomprel}. Furthermore, the map $\operatorname{Exp}\colon T_{u}\mathcal{X}\to\mathcal{X}$, \[ [\operatorname{Exp}(v)](z)=\exp_{u(z)}(v(z)),\quad z\in D, \] gives local $C^{1}$-coordinates near $u$ when restricted to $v$ in a sufficiently small ball around $0 \in T_{u}\mathcal{X}$. \end{Lemma} \begin{proof} This is a simpler version of \cite[Lemma 3.2]{EES} (see \cite[Proposition 5.9]{EES1} for the details of the standard argument alluded to there). \end{proof} The connected components of the space $\mathcal{X}$ are in $1-1$ correspondence with $\pi_2(\mathbb{C}^{n},L)\cong H_1(L)$. For $j\in\mathbb{Z}$, we write $\mathcal{X}(j\beta)$ for the connected component of all $u\in\mathcal{X}$ such that the continuous loop $u|_{\partial D}$ represents the class $j\cdot\beta$. The target space for the operator in \eqref{Eqn:CR1} corresponding to $\mathcal{X}$ is $\mathcal{Y} = \dot\mathcal{H}^{1}(D, \mathrm{Hom}^{0,1}(TD,\mathbb{C}^{n}))$. Consider the operator corresponding to the Floer equation, $\bar\partial_{{\,\mathrm{F}}}\colon \mathcal{X}\to\mathcal{Y}$, \[ \bar\partial_{{\,\mathrm{F}}}(u)=(du + \gamma_r\otimes X_{H})^{0,1}, \] where $r\in[0,\infty)$ and write $\bar\partial_{{\,\mathrm{F}};j\beta}=\bar\partial_{{\,\mathrm{F}}}|_{\mathcal{X}(j\beta)}$. 
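For concreteness, we recall the standard fact behind this notation: the $(0,1)$-part of a $\mathbb{C}^{n}$-valued one-form $A$ on $D$ is $A^{0,1}=\tfrac{1}{2}\left(A+J\circ A\circ i\right)$, so that $\bar\partial_{{\,\mathrm{F}}}(u)=0$ reads
\[
\tfrac{1}{2}\Big((du + \gamma_r\otimes X_{H}) + J\circ (du + \gamma_r\otimes X_{H})\circ i\Big)=0,
\]
which for $X_{H}=0$ reduces to $\bar\partial u=\tfrac{1}{2}(du+J\circ du\circ i)=0$.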
\begin{Lemma}\label{Lem:Floermapindexclosed} The map $\bar\partial_{{\,\mathrm{F}};j\beta}$ is a $C^{1}$ Fredholm map of index \[ \mathrm{ind}(\bar\partial_{{\,\mathrm{F}};j\beta})=n(1+j), \] which depends smoothly on the parameter $r\in[0,\infty)$. \end{Lemma} \begin{proof} The linearization of $\bar\partial_{{\,\mathrm{F}}}$ is \[ D(\bar\partial_{{\,\mathrm{F}}})v=\bar\partial v + K v, \] where $v\in T_u\mathcal{X}\subset \mathcal{H}^{2}(D,\mathbb{C}^{n})$ and where the operator $K$ is compact. The Lagrangian boundary condition is the loop $T_{u(z)}L$, $z\in\partial D$, which has Maslov index $jn$ if $u\in\mathcal{X}(j\beta)$. The lemma follows from the well-known index formula for the $\bar\partial_{\Lambda}$-operator on the disk with $n$-dimensional Lagrangian boundary condition $\Lambda(z)$, $z\in\partial D$, of Maslov index $\mu(\Lambda)$: $\mathrm{ind}(\bar\partial_{\Lambda})=n+\mu(\Lambda)$. \end{proof} Let $\mathcal{F}^{r}(j\beta)={\bar\partial_{{\,\mathrm{F}};j\beta}}^{-1}(0)$, with $r$ the parameter of the Hamiltonian term in the operator. Let \[ \mathcal{F}(j\beta) \ =\bigcup_{r\in[0,\infty)}\mathcal{F}^{r}(j\beta)\times\{r\} \ \ \ \subset \ \ \ \mathcal{X}\times[0,\infty). \] In the case of trivial Hamiltonian term, i.e.~when $X_{H}=0$, we write $\bar\partial$ instead of $\bar\partial_{{\,\mathrm{F}}}$; the solution space $\bar\partial^{-1}(0)\subset\mathcal{X}$ consists of $J$-holomorphic disks, and carries an action of the group $G$ of conformal automorphisms of the disk. We write \begin{equation} \mathcal{M}(j\beta)= \mathcal{F}^0({j\beta})/G. \end{equation} \subsection{Basic structure of moduli spaces} \label{Sec:BasicStructure} In this section we collect standard properties of moduli spaces of Floer disks that follow from Floer-Gromov compactness and transversality. Since the moduli spaces $\mathcal{M}(j\beta)$ of holomorphic disks are defined as quotients, in order to get a $C^{1}$-structure we need to fix gauge.
To this end we start with a discussion of marked points. Consider an element $\hat u$ in $\mathcal{M}(\beta)$ represented by a map $u\colon (D,\partial D)\to (\mathbb{C}^{n},L)$. Since $u$ is non-constant, $u|_{\partial D}$ is non-constant and there exists an arc $A\subset \partial D$ where $u|_{\partial D}$ is an immersion. Fix a codimension $1$ hypersurface $T\subset L$ such that, for each $\zeta$ in some finite subset $\boldsymbol{\zeta}\subset \partial D$, we have $u(\zeta)\in T$, $du(\zeta)\ne 0$, and $T$ transverse to $u$ at $u(\zeta)$. Explicitly, we take $T= T_1\cup T_2\cup T_3$ to be three small disjoint $(n-1)$-disks intersecting $u|_{\partial D}$ transversely at $u(\zeta_1')$, $u(\zeta_2')$, and $u(\zeta_3')$. Write $u_{T}=u\circ\phi$ where $\phi\colon D\to D$ is the unique conformal map such that $\phi(1)=\zeta_1'$, $\phi(i)=\zeta_2'$, and $\phi(-1)=\zeta_3'$. For simpler notation write $\zeta_1=1$, $\zeta_2=i$, and $\zeta_3=-1$. Then there is a neighborhood $W(u_{T})\subset\mathcal{X}(\beta)$ such that any $w\in \bar\partial^{-1}(0)\cap W(u_{T})$ intersects $T_j$ transversely at a point of $\partial D$ close to $\zeta_j$, $j=1,2,3$. To see this, we observe that the norm in $\mathcal{X}(\beta)$ controls the $C^{0}$-norm which in turn controls all other norms for holomorphic functions. In particular, by shrinking $W(u_T)$ we infer that $w$ is arbitrarily $C^{1}$-close to $u$, and hence a neighborhood as claimed exists. We obtain local coordinates on an open set of $\mathcal{M}(\beta)$ around $\hat u$ by identifying the neighborhood with \[ \bar\partial^{-1}(0)\cap \{w\in W(u_{T})\colon w(\zeta_j)\in T_j,\,j=1,2,3\}. \] Since all holomorphic disks in $W(u_{T})$ meet $T_j$ transversely, the action of the conformal group $G$ of $D$ is transverse to the second factor of the intersection, so provided $\bar\partial$ is itself transverse near $\hat u$ we get a $C^{1}$-smooth chart. We next consider coordinate changes.
Thus let $T$ and $T'$ be two collections of hypersurfaces as above. On the overlap of the corresponding open subsets, the coordinate change is given by the reparametrization $w_{T}=w_{T'}\circ \phi$, where $\phi\colon D\to D$ is the unique conformal automorphism of $D$ such that $\phi(\zeta_j)=\zeta'_j$, where $\zeta'_j$ are the points such that $w_{T'}(\zeta_j')\in T_j$. To see that this gives a $C^{1}$-map we simply note that the $C^{1}$-norm of the location of the marked points is controlled by the $C^{1}$-norm of $u_T$ which in turn is controlled by the norm in $\mathcal{X}(\beta)$ since the functions are holomorphic. More specifically, let $u\colon D\to\mathbb{C}^{n}$ be holomorphic with $u(\{1,i,-1\})\cap T'=\varnothing$ and suppose $u$ intersects $T'$ transversely at the points $\xi_1,\xi_2,\xi_3$. Then there exists $\phi\in G$ such that if $\hat u=u\circ \phi$ then $\hat u(\{1,i,-1\})\subset T'$. Note that the derivative of the $\xi_j$-component of the inverse map in direction of the gauge orbit has components \[ \frac{\partial \xi_j}{\partial \phi}=- \left\langle\nu,du_{\xi_j}\left(\frac{\partial \phi}{\partial v}\right)\right\rangle, \] where $\nu$ is the normal of $T'$ at $u(\xi_j)$ and where $v$ is the vector tangent to the boundary at $\xi_j$. In conclusion we find that transition functions are smooth; picking a finite cover of the compact Hausdorff space $\mathcal{M}(\beta)$ we get a $C^{1}$-smooth structure. Note that the argument used to show that transition functions are smooth, in combination with the existence of a common refinement of any two finite covers, shows that the $C^{1}$-structure is independent of the cover. \begin{Lemma}\label{Lem:tvholdisks} For a generic almost complex structure $J\in\mathcal{J}_{L}$ on $\mathbb{C}^{n}$, the moduli spaces $\mathcal{M}(j\beta)$ are empty for $j<0$, $\mathcal{M}(0\beta)$ consists of constant maps, and $\mathcal{M}(\beta)$ is a compact $C^{1}$-manifold of dimension $2n-3$.
\end{Lemma} \begin{proof} The first and second statements hold since non-constant $J$-holomorphic disks have positive symplectic area, while the symplectic area of a curve in class $j\beta$ is negative when $j<0$ and zero if $j=0$. For $C^{1}$-smoothness of $\mathcal{M}(\beta)$, given the definition of the $C^{1}$-structure above, we need only show that the linearization $D(\bar\partial)$ is generically surjective. For disks in primitive homology classes, this is due to Oh \cite{Oh:Perturb}. Finally, by Gromov-Floer compactness the space $\mathcal{M}(\beta)$ is compact modulo bubbling. The homology classes of the disks in a bubble tree must sum to $\beta$, but each disk lies in homology class $j\beta$, $j>0$. Therefore the bubble tree must be trivial and the space is compact. \end{proof} For the perturbed equation the smooth structure of moduli spaces is more straightforward, and we have the following result. \begin{Lemma}\label{Lem:tvmdli} For generic Hamiltonian and for an almost complex structure $J\in\mathcal{J}_L$ on $\mathbb{C}^{n}$ for which Lemma \ref{Lem:tvholdisks} holds: \begin{enumerate} \item The moduli space $\mathcal{F}(0\beta)$ is a transversely cut out $(n+1)$-manifold with boundary. Its boundary is canonically diffeomorphic to $L$, $\partial \mathcal{F}(0\beta)=\mathcal{F}^{0}(0\beta)=L$, viewed as the space of constant maps into $L$. \item The moduli space $\mathcal{F}(-\beta)$ is a transversely cut out closed manifold of dimension $1$. \item The moduli space $\mathcal{F}(j\beta)=\varnothing$ for $j<-1$. \end{enumerate} \end{Lemma} \begin{proof} The transversality properties follow from standard arguments perturbing the Hamiltonian. It remains to show the compactness statements. It follows from Gromov-Floer compactness that $\mathcal{F}(-\beta)$ is compact up to bubbling of holomorphic disks. By transversality, any bubble must lie in $\mathcal{M}(j\beta)$ for $j\ge 1$, which implies that the Floer disk in the limit lies in $\mathcal{F}(l\beta)$, $l<-1$; but these spaces are empty, so $\mathcal{F}(-\beta)$ is compact.
\end{proof} It is straightforward to include a boundary marked point on solutions. First we consider a version of the moduli space $\mathcal{F}(j\beta)$ with a marked point on the boundary. Write $\mathcal{F}^{\ast}(j\beta)$ for this space and note that it is a product: \[ \mathcal{F}^{\ast}(j\beta)=\mathcal{F}(j\beta)\times\partial D, \] where the first factor encodes the map and the second the location of the marked point. In particular, the $C^{1}$-smooth structure of $\mathcal{F}(j\beta)$ gives a $C^{1}$-smooth structure on $\mathcal{F}^{\ast}(j\beta)$. Also, there is a natural smooth evaluation map $\operatorname{ev}\colon\mathcal{F}^\ast(j\beta)\to L$, \[ \operatorname{ev}((u,\zeta))=u(\zeta). \] Second we consider the space $\mathcal{M}^{\ast}(\beta)$ which is a version of $\mathcal{M}(\beta)$ with a marked point on the boundary. More precisely, let $G_{1}\subset G$ denote the subgroup consisting of conformal automorphisms of $D$ that fix $1\in\partial D$, and define \[ \mathcal{M}^{\ast}(\beta)= \mathcal{F}^{0}(\beta)/G_{1}. \] Again there is an induced $C^{1}$-smooth structure and a natural evaluation map $\operatorname{ev}\colon\mathcal{M}^{\ast}(\beta)\to L$, \[ \operatorname{ev}(u)=u(1). \] The following result describes the broken disks which are added to compactify $\mathcal{F}(0\beta)$. \begin{Lemma}\label{Lem:bubbles} For a generic Hamiltonian and almost complex structure $J \in \mathcal{J}_L$ the boundary of $\mathcal{F}(0\beta)$ in the Gromov-Floer compactification consists of two-level broken curves, with top level $(u_1,\zeta)\in\mathcal{F}^{\ast}(-\beta)$, lower level $u_2\in \mathcal{M}^{\ast}(\beta)$, and with $\operatorname{ev}(u_1,\zeta)=\operatorname{ev}(u_2)$. \end{Lemma} \begin{proof} The non-compactness of $\mathcal{F}(0\beta)$ comes from holomorphic disks bubbling off at boundary points. By Lemma \ref{Lem:tvmdli} the only possible bubble is one holomorphic disk in $\mathcal{M}(\beta)$. 
This implies that the other component of the broken configuration is a disk $u_1\in\mathcal{F}(-\beta)$. The marked point $\zeta\in\partial D$ is the point where the holomorphic disk is attached, and by our definition of marked points for holomorphic disks, the domain $D$ of the holomorphic bubble disk is attached at $1\in\partial D$ to the domain of $u_1$ at $\zeta$. \end{proof} \begin{Remark} As we shall see later, the Gromov-Floer boundary is diffeomorphic to the (generically) transverse fibered product \[ \mathcal{F}^{\ast}(-\beta)\times_{L} \mathcal{M}^{\ast}(\beta)=(\operatorname{ev}\times\operatorname{ev})^{-1}(\Delta_L), \] where $\Delta_L\subset L\times L$ is the diagonal. This identification follows from a gluing result which uniquely constructs all unbroken Floer disks near any broken configuration. The identification can be studied with various degrees of accuracy: one can view it as a $C^{0}$-statement and obtain the compactification as a topological space, or one can carry out the gluing more carefully to identify the compactification as a $C^{1}$-manifold with boundary. Schemes for carrying $C^{1}$-data in gluing problems were worked out in \cite{FO3} and \cite{HWZ} in more complicated situations. In the terminology of \cite{HWZ}, the gluing problems studied in the present paper can be phrased in the language of M-polyfolds. We will present a $C^{1}$-identification in Section \ref{Sec:gluing2} below. \end{Remark} \subsection{Configuration spaces for disks with punctures and jet-conditions} \label{sec:punctures} In this section we present a functional analytic setting for describing disks with one boundary puncture, which we will use in our description of the boundary of the space of Floer disks $\mathcal{F}(0\beta)$. In fact, our description uses jet-conditions at the puncture and we present a set up for higher jet conditions that does not require imposing higher regularity conditions on the maps in the configuration space. 
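The basic mechanism, made precise below, is the following: in strip coordinates $z=\tau+it$ near the puncture, a solution with boundary on $L$ has a Fourier expansion $c_{0}+\sum_{k>0}c_{k}e^{-k\pi z}$ with $c_{k}\in\mathbb{R}^{n}$, and finiteness of the norm weighted by $e^{(m\pi+\delta)\tau}$, $0<\delta<\pi$, forces
\[
c_{1}=c_{2}=\dots=c_{m}=0,
\]
since $|e^{-k\pi z}|\,e^{(m\pi+\delta)\tau}=e^{(m\pi+\delta-k\pi)\tau}$ is unbounded for $k\le m$. Thus the $m$-jet at the puncture can be prescribed by adding a finite-dimensional jet-space factor to the configuration space, rather than by imposing extra regularity on the maps themselves.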
Many constructions are similar to those in Section \ref{sec:closeddisks} and some of the details from there will not be repeated here. Let $\zeta\in\partial D$. In Section \ref{sec:cohertriv} we identified $(D,\zeta)$ with the upper half plane $(H,\infty)$, constructed a half strip neighborhood of $\infty$ in $H$ and introduced a weight function $\mathbf{e}_{\eta}$ depending on a real parameter $\eta$ and given by $e^{\eta|\tau|}$ for $\tau+it$ in the half strip neighborhood $[0,\infty)\times[0,1]$ around $\infty$. For $s>0$, we write $\mathcal{H}^{s}_\eta(D-\zeta,\mathbb{C}^{n})$ for the weighted Sobolev space of $\mathbb{C}^{n}$-valued functions with $s$ derivatives in $L^{2}$ weighted by $\mathbf{e}_\eta$, with the weighted Sobolev $s$-norm. More specifically, we consider the closure, in the weighted $s$-norm on $D-\zeta\approx H$ with respect to the cylindrical end metric $h$ weighted by $\mathbf{e}_{\eta}$, of functions that extend smoothly to the double of $D-\zeta$. In parallel with Section \ref{sec:closeddisks}, we will also use the Sobolev spaces $\mathcal{H}^{s}_{\eta}(\partial D-\zeta,\mathbb{C}^{n})$ of functions on the boundary with respect to the metric induced by $h$ and weight induced by $\mathbf{e}_{\eta}$. Again, if $s>\frac12$, there is a continuous restriction map \[ \mathcal{H}^{s}_{\eta}(D-\zeta,\mathbb{C}^{n})\to\mathcal{H}^{s-\frac12}_{\eta}(\partial D-\zeta,\mathbb{C}^{n}),\quad u\mapsto u|_{\partial D-\zeta}. \] Furthermore, we use the spaces $\mathcal{H}^{s}_{\eta}(D-\zeta,\mathrm{Hom}(TD,\mathbb{C}^{n}))$, $\mathcal{H}^{s}_{\eta}\left(D-\zeta,\mathrm{Hom}^{0,1}(TD,\mathbb{C}^{n})\right)$, and $\dot\mathcal{H}^{1}_{\eta}\left(D-\zeta,\mathrm{Hom}^{0,1}(TD,\mathbb{C}^{n})\right)$, which are defined in analogy with the corresponding spaces in Section \ref{sec:closeddisks}, using the metric $h$ and weight $\mathbf{e}_{\eta}$. We next construct Banach manifolds that we use to build configuration spaces for punctured Floer disks. 
Fix $\eta>0$ and let $q\in\mathbb{C}^{n}$. Write $\mathcal{H}_{\eta}^{s}(D-\zeta,\mathbb{C}^{n};q)$ for the affine Sobolev space modeled on the vector space $\mathcal{H}_{\eta}^{s}(D-\zeta,\mathbb{C}^{n})$ with origin shifted to $q$. More formally, let $q$ denote the constant function on $D-\zeta$ with value $q\in\mathbb{C}^{n}$. Then elements $u\in\mathcal{H}_{\eta}^{s}(D-\zeta,\mathbb{C}^{n};q)$ are functions of the form $u=u'+q$ where $u'\in\mathcal{H}_{\eta}^{s}(D-\zeta,\mathbb{C}^{n})$, and the distance between $u_1$ and $u_2$ is $\|u_1-u_2\|_{s;\eta}$, where $\|\cdot\|_{s;\eta}$ is the norm in $\mathcal{H}^{s}_{\eta}(D-\zeta,\mathbb{C}^{n})$. Let $q\in L$ and consider the subspace $\mathcal{X}_{\eta}(\zeta,q)\subset\mathcal{H}^{2}_{\eta}(D-\zeta,\mathbb{C}^{n};q)$ of all $u$ which satisfy the following conditions: \begin{enumerate} \item $u|_{\partial D-\zeta}(z)\in L\subset\mathbb{C}^{n}$ for all $z\in\partial D-\zeta$. \item $(du)^{0,1}|_{\partial D-\zeta}=0\in\mathcal{H}^{\frac{1}{2}}_{\eta}(\partial D-\zeta,\mathrm{Hom}^{0,1}(TD,\mathbb{C}^{n}))$. \end{enumerate} \begin{Lemma} The subset $\mathcal{X}_{\eta}(\zeta,q)\subset\mathcal{H}^{2}_{\eta}(D-\zeta,\mathbb{C}^{n};q)$ is closed and is a Banach submanifold. The tangent space $T_{u}\mathcal{X}_{\eta}(\zeta,q)$ at $u$ is the subspace of vector fields $v\in\mathcal{H}^{2}_{\eta}(D-\zeta,\mathbb{C}^{n})$ such that \begin{enumerate} \item $v(z)\in T_{u(z)}L$ for all $z\in\partial D-\zeta$ and \item $(\nabla v)^{0,1}|_{\partial D-\zeta}=0\in\mathcal{H}^{\frac{1}{2}}(\partial D-\zeta,\mathrm{Hom}^{0,1}(TD,\mathbb{C}^{n}))$, \end{enumerate} where $\nabla$ denotes the Levi-Civita connection of the metric $\hat g$, see Section \ref{sec:Geomprel}. 
Furthermore, the map $\operatorname{Exp}\colon T_{u}\mathcal{X}_{\eta}(\zeta,q)\to\mathcal{X}_{\eta}(\zeta,q)$, \[ [\operatorname{Exp}(v)](z)=\exp_{u(z)}(v(z)),\quad z\in D-\zeta, \] gives local $C^{1}$-coordinates near $u$ when restricted to $v$ in a sufficiently small ball around $0\in T_{u}\mathcal{X}_{\eta}(\zeta,q)$. \end{Lemma} \begin{proof} As with Lemma \ref{Lem:expmapclosed}, this is a simpler version of \cite[Lemma 3.2]{EES}. \end{proof} The connected components of $\mathcal{X}_{\eta}(\zeta,q)$ are in $1-1$ correspondence with $\pi_2(\mathbb{C}^{n},L)\cong H_1(L)$, and for $j\in\mathbb{Z}$ we write $\mathcal{X}_{\eta}(j\beta;\zeta,q)$ for the connected component of all $u\in\mathcal{X}_{\eta}(\zeta,q)$ such that the continuous loop given by the arc $u|_{\partial D-\zeta}$ completed with the point $q$ represents the class $j\beta$. Our basic configuration spaces for punctured Floer disks are built by patching together functions in the Banach manifolds $\mathcal{X}_{\eta}(\zeta,q)$. To this end we first describe certain cut-off functions. Consider $q\in L$ and the neighborhood $U_q$ of $q$, see \eqref{Eq:disk^2nbhd} and use notation as there. Let $a\colon [0,r]\to[0,1]$ be a smooth decreasing function such that $a$ equals $1$ on $[0,\frac{r}{3}]$ and equals $0$ outside $[0,\frac{2r}{3}]$, and let $b\colon [-r,r]\to[-\frac{r^{2}}{100n},\frac{r^{2}}{100n}]$ be a smooth function which vanishes at $0$, has derivative equal to $1$ at $0$, and equals $0$ outside $[-\frac{r}{100},\frac{r}{100}]$. Use coordinates $z=x+iy$ on $\mathbb{C}^n$, $x,y\in\mathbb{R}^{n}$. Let $\hat a(x)=a(|x|)$, and define the cut-off function $\alpha_q\colon U_q\to\mathbb{C}$ as follows: \begin{equation}\label{Eq:cut-off} \alpha_{q}(x+iy)=\hat a(x)\hat a(y)+i\sum_{j=1}^{n}\frac{\partial \hat a}{\partial x_j}b(y_j).
\end{equation} \begin{Lemma} \label{Lem:alphaq} The function $\alpha_q$ has the following properties: \begin{enumerate} \item $\alpha_q$ equals $1$ on $B_{\mathbb{R}^{n}}(\frac{r}{3})\times iB_{\mathbb{R}^{n}}(\frac{r}{3})$. \item $\alpha_q$ equals $0$ outside $B_{\mathbb{R}^{n}}(\frac{2r}{3})\times iB_{\mathbb{R}^{n}}(\frac{2r}{3})$. \item $\alpha_q$ is real-valued on $U_q\cap L$. \item $\alpha_q$ is holomorphic, $d\alpha_q+i\circ d\alpha_q \circ J=0$, along $L\cap U_q$. \end{enumerate} \end{Lemma} \begin{proof} Straightforward. \end{proof} By $(2)$ we extend $\alpha_q$ smoothly by $0$ to all of $\mathbb{C}^{n}$. It then follows that it has properties $(3)$ and $(4)$ with $U_q$ replaced by $\mathbb{C}^{n}$. We next define a family of diffeomorphisms parameterized by $L\times B_{\mathbb{R}^{n}}(r_0)$. For $z\in U_q$ write $w=\psi_q^{-1}(z)$ and define $\Phi_{q}[c_0]\colon (\mathbb{C}^{n},L)\to(\mathbb{C}^{n},L)$ as follows: \begin{equation}\label{Eq:diffeoatp} \Phi_{q}[c_0](z)= \begin{cases} z &\text{for } z\notin U_q,\\ \psi_q\left( w+\alpha_q(w)\cdot c_0 \right)&\text{for } z\in U_q. \end{cases} \end{equation} Then $\Phi_q[c_0]$ is a diffeomorphism for $r_0>0$ sufficiently small and it follows from Lemma \ref{Lem:alphaq} that $\Phi_q[c_0]$ is holomorphic along $L$. Now fix $\eta>0$ and consider the bundle \[ \mathcal{X}_{\eta}(\partial D,L)\to \partial D\times L, \] the fiber of which at $(\zeta,q)\in \partial D\times L$ equals $\mathcal{X}_{\eta}(\zeta,q)$. As we shall see, this bundle is a $C^{1}$-smooth locally trivial bundle and in particular the total space of the bundle is a Banach manifold. We construct local trivializations using the diffeomorphisms of \eqref{Eq:diffeoatp} and use notation as there. Let $V_p=\psi_p(B_{\mathbb{R}^{n}}(r_0))\subset L$ and let $A$ be an arc around $\zeta\in\partial D$.
Define the trivialization \[ \phi\colon A\times V_p\times\mathcal{X}_{\eta}(\zeta,p)\to \mathcal{X}_{\eta}(\partial D,L) \] as follows: \begin{equation}\label{Eq:trivviadiffeo} \phi(\zeta',q', u)(z) = \Phi_{p}[\psi_p^{-1}(q')]\left(u\left(\tfrac{\zeta}{\zeta'} z\right)\right). \end{equation} It is straightforward to verify that coordinate changes are $C^{1}$-smooth. We will sometimes also consider $\mathcal{X}_{\eta}(\partial D, L)$ as a bundle over $\partial D$, composing the bundle projection to $\partial D\times L$ with the projection $\partial D \times L \rightarrow \partial D$, and we write \[ \mathcal{X}_{\eta}(L)=\mathcal{X}_{\eta}(\partial D, L)|_{1\in\partial D}. \] Furthermore, we will consider the sub-bundle $\mathcal{X}_{\eta}(j\beta;\partial D, L)$ with fiber over $(\zeta,q)$ equal to $\mathcal{X}_{\eta}(j\beta;\zeta,q)$ and the corresponding sub-bundle $\mathcal{X}_{\eta}(j\beta;L)$ of $\mathcal{X}_{\eta}(L)$. Similarly, we let \[ \mathcal{Y}_{\eta}(\zeta) = \dot\mathcal{H}^{1}_{\eta}(D-\zeta, \mathrm{Hom}^{0,1}(TD,\mathbb{C}^{n})), \] and define $\mathcal{Y}_{\eta}(\partial D)$ as the bundle over $\partial D$ with fiber over $\zeta$ equal to $\mathcal{Y}_{\eta}(\zeta)$ and write $\mathcal{Y}_{\eta}=\mathcal{Y}_{\eta}(1)$. We define $C^{1}$-smooth local trivializations using re-parametrization as above. We next construct configuration spaces for disks with jet conditions at the boundary. Let $J^{m}(\partial D,L)$ denote the space of $m$-jets of maps $\partial D\to L$. This space fibers over the $0$-jet space $\partial D\times L$ with fiber $J^{m}_{(\zeta,q)}(L)$ over $(\zeta,q)\in\partial D\times L$ equal to $(\mathbb{R}^{n})^{m}$, with components corresponding to the coefficients of the Taylor expansion. Thus, in local coordinates $\mathbb{R}$ on $\partial D$ around $\zeta$ and $\mathbb{R}^{n}$ on $L$ around $q$, the element $\boldsymbol{c}=(c_1,\dots,c_m)$ represents the $m$-germ of the map \[ t\mapsto c_1t+c_2 t^{2}+\dots+ c_mt^{m}.
\] We write $J^{m}(L)$ for the fiber of $J^{m}(\partial D,L)$ over $1\in\partial D$, so that $J^{m}(L)$ fibers over $L$ with fiber $(\mathbb{R}^{n})^{m}$. Let $\pi\colon J^{m}(\partial D,L)\to \partial D\times L$ denote the projection and fix $\delta$ with $0<\delta<\pi$. The configuration spaces for punctured disks that we will use are the following: \begin{equation} \mathcal{X}_{m\pi+\delta}(J^{m}(\partial D,L))=\pi^{\ast}\mathcal{X}_{m\pi+\delta}(\partial D, L) \end{equation} and \begin{equation} \mathcal{X}_{m\pi+\delta}(J^{m}(L))=\mathcal{X}_{m\pi+\delta}(J^{m}(\partial D,L))|_{1\in\partial D}. \end{equation} In order to use $\mathcal{X}_{m\pi+\delta}(J^{m}(\partial D,L))$ as configuration spaces for Floer type equations we will associate Sobolev functions to its elements. To this end we fix a smooth family of maps parameterized by $\partial D\times J^{m}(L)$. More precisely, for $(\zeta,q)\in \partial D\times L$ and $\boldsymbol{c}\in J^{m}_{(\zeta,q)}(L)$ we define a map $v_{q}[\zeta,\boldsymbol{c}]\colon D-\zeta\to U_q$ with the following properties: \begin{enumerate} \item $v_q[\zeta,\boldsymbol{c}](\partial D-\zeta)\subset L$, \item $v_q[\zeta,\boldsymbol{c}]$ is holomorphic in a neighborhood of $\partial D-\zeta$, and \item there is $\rho>0$ such that in the strip neighborhood $[\rho,\infty)\times[0,1]$ of the puncture we have (in $U_q$-coordinates) \begin{equation} v_{q}[\zeta,\boldsymbol{c}](z)=\sum_{j=1}^{m} c_j e^{-j\pi z}. \end{equation} \end{enumerate} It is clear that there is a smooth family of functions with these properties. We then define \begin{equation}\label{Eq:weight+Taylor} \Psi^{m}\colon \mathcal{X}_{m\pi+\delta}(J^{m}(\partial D,L))\to\mathcal{X}_{\delta}(\partial D,L) \end{equation} fiberwise as follows: \[ \Psi^{m}[u](z)= u(z) + \alpha_q(u(z))\cdot v_q[\zeta;\boldsymbol{c}](z), \] where the addition refers to standard addition in the $\mathbb{C}^{n}$-coordinates of $U_q$ (note that $\alpha_q$ is supported well inside $U_q$).
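Note the effect of $\Psi^{m}$ at the puncture: since elements of the fiber over $(\zeta,q)$ converge to $q$ along the strip-like end, $\alpha_q(u(z))=1$ for all sufficiently large $\tau$, and hence there
\[
\Psi^{m}[u](z)=u(z)+\sum_{j=1}^{m}c_{j}e^{-j\pi z}.
\]
In other words, $\Psi^{m}[u]$ differs from a map with fast exponential decay by exactly the expansion prescribed by $\boldsymbol{c}$, so the jet coordinates record the first $m$ Fourier coefficients of $\Psi^{m}[u]$ at the puncture.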
We consider the Floer equation for punctured disks in the setup described above. Fix $0<\delta<\pi$ and an integer $m\ge 0$. We start with the case of bundles over the $0$-jet space. Let $r\in[0,\infty)$ and consider the $\partial D$-bundle map $\bar\partial_{{\,\mathrm{F}}}\colon \mathcal{X}_{m\pi+\delta}(\partial D,L)\to\mathcal{Y}_{m\pi+\delta}(\partial D)$, \[ \bar\partial_{{\,\mathrm{F}}}(u)=(du + \gamma_r\otimes X_H)^{0,1}, \] and write $\bar\partial_{{\,\mathrm{F}};j\beta}=\bar\partial_{{\,\mathrm{F}}}|_{\mathcal{X}_{m\pi+\delta}(j\beta;\partial D,L)}$. As in Section \ref{sec:closeddisks}, in cases when the Hamiltonian term vanishes we write $\bar\partial$ instead of $\bar\partial_{{\,\mathrm{F}}}$ and we take the puncture to be fixed at $1\in\partial D$. \begin{Lemma}\label{Lem:Floermapindexpnctrs} The map $\bar\partial_{{\,\mathrm{F}};j\beta}\colon \mathcal{X}_{m\pi+\delta}(j\beta;\zeta,L)\to \mathcal{Y}_{m\pi+\delta}(\zeta)$ is a $C^{1}$ Fredholm map of index \[ \mathrm{ind}(\bar\partial_{{\,\mathrm{F}};j\beta})=n(1+j-m), \] which depends smoothly on $\zeta\in\partial D$ and $r\in[0,\infty)$. \end{Lemma} \begin{proof} The proof is a modification of the proof of Lemma \ref{Lem:Floermapindexclosed}. The index of the linearized operator equals the index of the ordinary $\bar\partial$-operator with positive exponential weight at the puncture $\zeta$ and with Lagrangian boundary condition the loop $T_{u(z)}L$, $z\in\partial D$, which has Maslov index $jn$ if $u\in\mathcal{X}_{m\pi+\delta}(j\beta;\partial D,L)$. The positive exponential weight lowers the index by $n\cdot (m+1)$ (in terms of spectral flow we pass the $m+1$ eigenvalues $0,\pi,\dots,m\pi$ of the asymptotic operator). There is an additional $n$-dimensional contribution to the index from the directions spanned by $L$ in the domain. We find that the total index equals $n+jn-(m+1)n+n=n(1+j-m)$. \end{proof} Consider next the bundles over jet-spaces.
Define \[ \bar\partial_{{\,\mathrm{F}}}^{(m)}\colon \mathcal{X}_{m\pi+\delta}(J^{m}(\partial D,L))\to \mathcal{Y}_{m\pi+\delta}(\partial D) \] as the composite \[ \bar\partial_{{\,\mathrm{F}}}^{(m)}=\bar\partial_{{\,\mathrm{F}}}\circ \Psi^{m}, \] where $\Psi^{m}\colon\mathcal{X}_{m\pi+\delta}(J^{m}(\partial D,L))\to\mathcal{X}_{\delta}(\partial D,L)$ is the map in \eqref{Eq:weight+Taylor}. Note that it follows from the definition of $\Psi^{m}$ that $\bar\partial_{{\,\mathrm{F}}}^{(m)}(u)$ lies in the target space with weights as claimed: the function added to $u$ in \eqref{Eq:weight+Taylor} is holomorphic in the strip-like end. We write $\bar\partial_{{\,\mathrm{F}};j\beta}^{(m)}$ for $\bar\partial_{{\,\mathrm{F}}}^{(m)}|_{\mathcal{X}_{m\pi+\delta}(J^{m}(j\beta;\partial D,L))}$. Following our usual practice, we write $\bar\partial^{(m)}$ and $\bar\partial^{(m)}_{j\beta}$, dropping the subscript ${\,\mathrm{F}}$ when the Hamiltonian term vanishes. \begin{Lemma}\label{Lem:Floermapjet} The map $\bar\partial^{(m)}_{{\,\mathrm{F}};j\beta}\colon \mathcal{X}_{m\pi+\delta}(J^{m}(\zeta,L))\to \mathcal{Y}_{m\pi+\delta}(\zeta)$ is a $C^{1}$ Fredholm map of index \[ \mathrm{ind}(\bar\partial^{(m)}_{{\,\mathrm{F}};j\beta})=n(1+j), \] which depends smoothly on $\zeta\in\partial D$ and $r\in[0,\infty)$. \end{Lemma} \begin{proof} This is a consequence of Lemma \ref{Lem:Floermapindexpnctrs}, with the $mn$ dimensions added in the domain corresponding to the fiber-dimension of $J^{m}(\partial D,L)\to\partial D\times L$. \end{proof} We next show that for any $m$, if $0$ denotes the $0$-section, the solution spaces $(\bar\partial^{(m)})^{-1}(0)$ are $C^{1}$-diffeomorphic to the moduli spaces of disks with marked points discussed in Section \ref{sec:closeddisks}. We first describe the smooth structure on the moduli space in the case of vanishing Hamiltonian. Recall that in this case we take the puncture fixed at $1\in\partial D$.
The definition of the $C^{1}$-structure on $(\bar\partial^{(m)})^{-1}(0)/G_{1}$ parallels the corresponding definition for $\mathcal{M}$ in Section \ref{sec:closeddisks} and we give only a brief sketch. We use hypersurfaces $T=T_1\cup T_2$ that intersect a representative of a class transversely in order to fix the gauge locally. (Note that only two marked points are now needed since the puncture is fixed by $G_1$.) The moduli space is covered by finitely many such charts. To check that transition functions are smooth in the topology induced from the ambient configuration space, we must control the effect of conformal automorphisms on punctured disks. We thus consider the effect of moving the location of the auxiliary marked points on a punctured disk. Identify the domain of a punctured holomorphic disk $u$ with the upper half plane $H$, with the puncture at $\infty$, which will be fixed throughout, and two marked points at $-1,1$ arising from intersections with hypersurfaces. If $\zeta_1,\zeta_2\in \partial H$ are intersection points with hypersurfaces arising from some other chart, the point corresponding to $u$ is $u\circ\phi$, where $\phi\colon H\to H$ is the unique automorphism with $\phi(1)=\zeta_1$ and $\phi(-1)=\zeta_2$. Any such $\phi$ is a composition of a translation, keeping the distance between the marked points fixed, and a scaling, keeping one of the marked points fixed and moving the other. A translation by a real constant $c$ in the upper half plane gives the coordinate change $w'=\phi_c(w)=w+c$ on $H$. In the cylindrical ends, setting $w=e^{\pi z}$ and $w'=e^{\pi z'}$, the corresponding change of coordinates is \begin{equation}\label{Eq:movemarked1} z'=z+\frac{1}{\pi}\sum_{n>0} (-1)^{n+1}n^{-1} c^{n}e^{-n\pi z}, \end{equation} which induces a $C^{1}$-diffeomorphism of the charts. To move only one of the marked points, we may assume that the other one is at $0\in\partial H$ via the above, and then scale by a real positive constant: $w'=\sigma_r(w)=rw$.
The corresponding coordinate change in the cylindrical end is then given by \begin{equation}\label{Eq:movemarked2} z'=z+\frac{1}{\pi}\log r, \end{equation} which again induces a $C^{1}$-diffeomorphism. After these preliminaries we state the result that relates disks with and without punctures. \begin{Lemma}\label{Lem:markedandpctrs} For a generic non-trivial family of Hamiltonians $H_R$ we have, for $j\in\mathbb{Z}$, \[ (\bar\partial_{{\,\mathrm{F}};j\beta}^{(m)})^{-1}(0) \approx_{C^{1}} \mathcal{F}^{\ast}(j\beta). \] For the trivial Hamiltonian we have, for $j\in\mathbb{Z}$: \[ (\bar\partial_{j\beta}^{(m)})^{-1}(0)/G_1\approx_{C^{1}}\mathcal{M}^{\ast}(j\beta). \] \end{Lemma} \begin{proof} The Floer operator agrees with the standard $\bar\partial$-operator in a neighborhood of the boundary. Using $U_{u(\zeta)}$-coordinates, $\zeta\in\partial D$, we have solutions near $\zeta$ in the closed disk model (which we think of as the half plane around $0$) given by their Taylor expansion \[ u(z)=\sum_{k\ge 0} c_k z^{k}. \] The corresponding Fourier expansion for a strip-like end, $z=e^{-\pi w}$, is \[ u(w)=c_0+\sum_{k>0} c_k e^{-\pi k w} , \] where $c_k\in\mathbb{R}^{n}$, $k\ge 0$. The Taylor coefficients of a holomorphic function control, and are controlled by, the $\|\cdot\|_2$-norm; similarly, the Fourier coefficients $c_k$, $k>0$, are controlled by the $\|\cdot\|_{2;\delta}$-norm, whilst $c_0$ is controlled by the topology of $L$. It follows that the natural map is $C^{1}$ with respect to the topologies induced from the ambient spaces. \end{proof} \section{The Gromov-Floer boundary of $\mathcal{F}(0\beta)$} \label{Sec:theboundary} It follows from Lemma \ref{Lem:bubbles} that the Gromov-Floer boundary of $\mathcal{F}(0\beta)$ is the fibered product \[ \mathcal{N}=\mathcal{F}^{\ast}(-\beta)\times_L \mathcal{M}^{\ast}(\beta).
\] In this section we show that $\mathcal{N}$ is an orientable $C^{1}$-manifold for generic data, and then we consider gauge fixing of the second factor in broken disks that arise as limits of sequences of smooth disks in $\mathcal{F}(0\beta)$. \subsection{Transversality for fibered products} \label{sec:FF(-b)} By Lemma \ref{Lem:tvmdli}, $\mathcal{F}(-\beta)$ is an orientable closed $1$-manifold and consequently \[ \mathcal{F}^{\ast}(-\beta)=\mathcal{F}(-\beta)\times\partial D \] is a union of tori. As mentioned above there is a natural evaluation map \[ \operatorname{ev}\colon\mathcal{F}^{\ast}(-\beta)\to L,\quad \operatorname{ev}(u,\zeta)=u(\zeta). \] As explained in Section \ref{ssec:capping} we will use a filling of $\mathcal{F}^{\ast}(-\beta)$. Thus let $\mathcal{D}$ be a collection of solid tori such that $\partial\mathcal{D}=\mathcal{F}^{\ast}(-\beta)$ and let $\operatorname{ev}\colon \mathcal{D}\to L$ be an extension of $\operatorname{ev}\colon\mathcal{F}^{\ast}(-\beta)\to L$ that is constant in a small collar neighborhood of the boundary. \begin{Lemma}\label{Lem:fiberprod1} For a generic family of Hamiltonians and almost complex structure such that Lemma \ref{Lem:tvmdli} holds, the map \[ \operatorname{ev}\times\operatorname{ev}\colon\mathcal{F}^{\ast}(-\beta)\times\mathcal{M}^{\ast}(\beta)\to L\times L \] is transverse to $\Delta_L$ and consequently $\mathcal{N}$ is a $C^{1}$-smooth orientable manifold.
\end{Lemma} \begin{proof} Standard arguments show that for generic $H$ and $J$ the map \[ \mathbf{F}\colon\mathcal{X}_{\delta}(-\beta;\partial D, L)\times[0,\infty)\times\mathcal{X}_{\delta}(\beta; L)\to \mathcal{Y}_{\delta}(\partial D)\times \mathcal{Y}_{\delta}\times L\times L, \] given by \[ \mathbf{F}(u_1,u_2)=\left(\bar\partial_{{\,\mathrm{F}};-\beta}(u_1),\bar\partial_\beta(u_2),\operatorname{pr}_L(u_1),\operatorname{pr}_L(u_2)\right), \] where $\operatorname{pr}_L$ denotes the bundle projection to $L$, is a Fredholm map of index $2 + 2n-2n=2$ which is transverse to the product of the $0$-sections and the diagonal. The fibered product is then $\mathcal{N}=\mathbf{F}^{-1}(0\times 0\times\Delta_L)/G_1$, where $G_1$ acts on the second factor, and is an orientable $n$-manifold. \end{proof} \begin{Lemma}\label{Lem:fiberprod2} For a generic family of Hamiltonians and almost complex structure such that Lemmas \ref{Lem:tvmdli} and \ref{Lem:fiberprod1} hold, and for a generic extension $\operatorname{ev}\colon\mathcal{D}\to L$, the map \[ \operatorname{ev}\times\operatorname{ev}\colon\mathcal{D}\times\mathcal{M}^{\ast}(\beta)\to L\times L \] is transverse to $\Delta_L$ and consequently the fibered product $\mathcal{T}=\mathcal{D}\times_{L}\mathcal{M}^{\ast}(\beta)$ is a smooth orientable manifold with $\partial \mathcal{T}=\mathcal{N}$. \end{Lemma} \begin{proof} Standard arguments show that for generic $\mathcal{D}$, after small perturbation of $H$ and $J$, the map \[ \mathbf{F}\colon\mathcal{D}\times\mathcal{X}_{\delta}(\beta; L)\to \mathcal{Y}_{\delta}(\partial D)\times L\times L, \] given by \[ \mathbf{F}(u_1,u_2)=\left(\bar\partial_\beta(u_2),\operatorname{ev}(u_1),\operatorname{ev}(u_2)\right), \] is a Fredholm map of index $3+2n-2n=3$ which is transverse to the product of the $0$-section and the diagonal.
The fibered product is then $\mathcal{T}=\mathbf{F}^{-1}(0\times\Delta_L)/G_1$, where $G_1$ acts on the second factor, and is an orientable $(n+1)$-manifold with boundary equal to the fibered product over $\partial\mathcal{D}$, which is $\mathcal{N}$. \end{proof} \begin{Remark} The $C^{1}$-structures of $\mathcal{N}$ and $\mathcal{T}$ are defined through finite open covers, in analogy with the definition of the $C^{1}$-structure of $\mathcal{M}^{\ast}(\beta)$, to take account of the quotient in the second factor by the group $G_1$ of conformal automorphisms. \end{Remark} In order to find a $C^{1}$-collar neighborhood of the Gromov-Floer boundary of $\mathcal{F}(0\beta)$ we will require more detailed information on $\mathcal{N}$. We write elements $\xi\in\mathcal{N}$ as $\xi=((u_1,\zeta),u_2)$, where $(u_1,\zeta)\in\mathcal{F}^{\ast}(-\beta)=\mathcal{F}(-\beta)\times\partial D$ and $u_2\in\mathcal{M}^{\ast}(\beta)$ and where $\operatorname{ev}(u_1,\zeta)=\operatorname{ev}(u_2)$. Recall that $\operatorname{ev}(u_2)=u_2(1)$ and define \begin{equation}\label{Eq:1stder=0} \mathcal{N}_{0}=\{\xi\in\mathcal{N}\colon du_2(1)=0\}. \end{equation} \begin{Lemma}\label{Lem:singNN} For a generic Hamiltonian and almost complex structure such that Lemmas \ref{Lem:tvmdli} and \ref{Lem:fiberprod1} hold, $\mathcal{N}_0$ is a transversely cut out $0$-manifold. Furthermore, for any $\xi=((u_1,\zeta),u_2)\in\mathcal{N}_0$, the second derivative of $u_2$ at the marked point satisfies $d^{(2)}u_2(1)\ne 0$.
\end{Lemma} \begin{proof} Consider the map \[ \mathbf{G}\colon \mathcal{X}_{\delta}(-\beta;\partial D,L)\times[0,\infty)\times \mathcal{X}_{2\pi+\delta}(\beta;J^{2}(L)) \ \longrightarrow \ \mathcal{Y}_{\delta}(\partial D)\times\mathcal{Y}_{2\pi+\delta}\times L\times J^{2}(L), \] where \[ \mathbf{G}\left((u_1,\zeta),(u_2,\boldsymbol{c}_{2})\right)= \left(\bar\partial_{{\,\mathrm{F}};-\beta}(u_1),\bar\partial_{\beta}(u_2,\boldsymbol{c}_2),\operatorname{ev}(u_1,\zeta),\operatorname{pr}_{J^{2}(L)}(u_{2},\boldsymbol{c}_2)\right). \] This is a Fredholm map of index $2+2n-4n=2-2n$. Consider the preimage of the product of $0$-sections and the subset of $L\times J^{2}(L)$ over $\Delta_L$ with first derivative component in $J^{2}(L)$ equal to $0$. Generically the solution space has dimension $2-2n+2n=2$. The $2$-dimensional automorphism group $G_1$ acts on the holomorphic component and it follows that $\mathcal{N}_{0}$ is a transversely cut out $0$-manifold. For the statement on the second derivative we consider instead the inverse image of the product of $0$-sections and the subset of $L\times J^{2}(L)$ over $\Delta_L$ with both first and second derivative equal to $0$. Here the formal dimension equals $2-n$, which becomes $-n<0$ after dividing by $G_1$, and we conclude that the locus is generically empty, as claimed. \end{proof} Similarly, define \begin{equation}\label{Eq:1stder=0'} \mathcal{T}_{0}=\{\xi\in\mathcal{T}\colon du_2(1)=0\}. \end{equation} The corresponding result in this situation is the following: \begin{Lemma}\label{Lem:singTT} For a generic Hamiltonian and almost complex structure such that Lemmas \ref{Lem:tvmdli} and \ref{Lem:fiberprod2} hold, $\mathcal{T}_0$ is a transversely cut out $1$-manifold. Furthermore, for any $\xi=((u_1,\zeta),u_2)\in\mathcal{T}_0$, the second derivative of $u_2$ at the marked point satisfies $d^{(2)}u_2(1)\ne 0$. \end{Lemma} \begin{proof} This is a straightforward modification of the proof of Lemma \ref{Lem:singNN}.
\end{proof} \subsection{Gauge fixing} \label{sec:gauge} We describe normal forms for elements in $\mathcal{N}$ near the point of intersection of the two constituent disks. Consider a pair $((u_1,\zeta),u_2)\in\mathcal{F}^{\ast}(-\beta)\times\mathcal{M}^{\ast}(\beta)$ that contributes to $\mathcal{N}$, i.e.~such that $u_1(\zeta)=u_2(1)$. In holomorphic coordinates $(\mathbb{C}^{n},\mathbb{R}^{n})$ centered at $u_1(\zeta)\in L\subset\mathbb{C}^{n}$, the disk $u_2$ has a Fourier expansion of the following form, for $z\in[\rho,\infty)\times[0,1]$: \begin{equation}\label{Eq:normalform} u_{2}(z)=\sum_{k>0} c_k e^{-k\pi z}, \end{equation} where $c_k\in\mathbb{R}^{n}$. If $((u_1,\zeta),u_2)\in\mathcal{N}_0$ then $c_1=0$ and $c_2\ne 0$, and if $((u_1,\zeta),u_2)$ lies outside a fixed neighborhood of $\mathcal{N}_0$ then $|c_1|>\delta>0$ for some $\delta$. Consider small nested balls $B(u_1(\zeta);\epsilon)\subset B(u_1(\zeta);2\epsilon)$ around $u_1(\zeta)$ in $L$ and a holomorphic disk $u_2\in \mathcal{M}^{\ast}(\beta)$ represented by $\hat u_2\colon (H,\partial H)\to(\mathbb{C}^{n},L)$ with $\hat u_2(\infty)=u_1(\zeta)$. Then for any parameterization $\hat w_2\colon (H,\partial H)\to(\mathbb{C}^{n},L)$ of a holomorphic disk $w_2\in\mathcal{M}^{\ast}(\beta)$ in a small neighborhood of $u_2$, there is a unique smallest interval $I_{2\epsilon}\subset\partial H$ containing the puncture $\infty$ such that $\hat w_2(I_{2\epsilon})\subset B(u_1(\zeta);2\epsilon)$. We consider points in $I_{2\epsilon}$ which map to $\partial B(u_1(\zeta);\epsilon)$ and we call them $(\epsilon,(u_1,\zeta))$-points, where $(u_1,\zeta)\in\mathcal{F}^{\ast}(-\beta)$. Fix a metric on the closed manifold $\mathcal{M}^{\ast}(\beta)$ and write $V(u_2;\delta)$ for the $\delta$-neighborhood of $u_2\in\mathcal{M}^{\ast}(\beta)$. Lemma \ref{Lem:singNN} implies that $\mathcal{N}_0$ is a compact $0$-manifold and thus $\operatorname{pr}_2(\mathcal{N}_0)$ is a finite set.
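The geometric content of the leading coefficients in \eqref{Eq:normalform} can be made explicit: since the coefficients are real, writing $z=s+it$ we get, on the two boundary components of the strip-like end,
\[
u_{2}(s)=c_1e^{-\pi s}+\mathcal{O}(e^{-2\pi s}),\qquad
u_{2}(s+i)=-c_1e^{-\pi s}+\mathcal{O}(e^{-2\pi s}),\qquad s\to\infty.
\]
Thus for $c_1\ne 0$ the two boundary arcs approach $u_1(\zeta)$ from the opposite directions $\pm c_1$ and each crosses $\partial B(u_1(\zeta);\epsilon)$ transversely in one point near the puncture, whereas for $c_1=0$ and $c_2\ne 0$ both arcs approach along the common direction $c_2$; this is the computation behind the count of $(\epsilon,(u_1,\zeta))$-points in Lemma \ref{Lem:interswnbhd} below.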
We write \[ \operatorname{pr}_2(\mathcal{N}_0)=\{u_2^1,\dots, u_2^{m}\} \] and for $\delta>0$ we let \[ V(\delta)=\bigcup_{j=1}^{m} V(u_2^{j};\delta). \] \begin{Lemma}\label{Lem:interswnbhd} For $\epsilon>0$ sufficiently small, there exist $\delta_1>\delta_0>0$ such that $V(u_2^{j};\delta_1)$, $j=1,\dots,m$ are mutually disjoint and such that the following hold for all $(u_1,\zeta)\in\mathcal{F}^{\ast}(-\beta)$: \begin{enumerate} \item If $u_2\in \operatorname{pr}_2(\mathcal{N})-V(\delta_1)$ and $\operatorname{ev}(u_2)=\operatorname{ev}(u_1,\zeta)$ then $u_2$ has exactly 2 $(\epsilon,(u_1,\zeta))$-points in $I_{2\epsilon}$, one in each component of $I_{2\epsilon}-\{\infty\}$, and the corresponding intersections with $\partial B(u_1(\zeta);\epsilon)$ are transverse. \item If $u_2\in V(\delta_1)-V(\delta_0)$ and $\operatorname{ev}(u_2)=\operatorname{ev}(u_1,\zeta)$, then there are 4 $(\epsilon,(u_1,\zeta))$-points in $I_{2\epsilon}$, one in one of the components of $I_{2\epsilon}-\{\infty\}$ and three in the other, and the corresponding intersections with $\partial B(u_1(\zeta);\epsilon)$ are transverse. \item If $u_2\in V(u_2^{j};\delta_0)$ for some $j=1,\dots,m$ and $\operatorname{ev}(u_2)=\operatorname{ev}(u_1,\zeta)$, then there are 2, 3, or 4 $(\epsilon,(u_1,\zeta))$-points in $I_{2\epsilon}$, one in one of the components of $I_{2\epsilon}-\{\infty\}$ and all others in the other component, and the two intersection points with $\partial B(u_1(\zeta);\epsilon)$ most distant from $\infty$ are transverse. \end{enumerate} Note also that $\delta_1\to 0$ as $\epsilon\to 0$. \end{Lemma} \begin{proof} This follows immediately from \eqref{Eq:normalform}.
\end{proof} Fix $0<\delta_0<\delta_1$ such that Lemma \ref{Lem:interswnbhd} holds, and consider the following two open sets of $\mathcal{N}$: \begin{align} U^{\rm reg} &=\left\{\xi=((u_1,\zeta),u_2)\in\mathcal{N}\colon u_2\notin V(\delta_0)\right\},\\ U^{\rm sing} &=\left\{\xi=((u_1,\zeta),u_2)\in\mathcal{N}\colon u_2\in V(\delta_1)\right\}. \end{align} Using the exponential map $\exp\colon TL\to L$ of a Riemannian metric on $L$, we may identify the disks $B(u_1(\zeta);\epsilon)$ with $\epsilon$-disks in $T_{u_1(\zeta)}L$. In particular, we may consider the $(\epsilon,(u_1,\zeta))$-points at $\xi=((u_1,\zeta),u_2)\in\mathcal{N}$ as points in the fiber over $\xi$ in the pull-back bundle \[ \operatorname{ev}^{\ast} TL \ \longrightarrow \mathcal{N}, \] where $\operatorname{ev}\colon\mathcal{N}\to L$, $\operatorname{ev}((u_1,\zeta),u_2)=u_1(\zeta)=u_2(1)$. With this convention we get the following sections of this bundle: \begin{itemize} \item The $(\epsilon,(u_1,\zeta))$-point in the component of $I_{2\epsilon}-\{\infty\}$ to the left (right) of $\infty$ which is closest to $\infty$ gives a $C^{1}$-smooth section $q^{\rm reg}_{-}$ ($q^{\rm reg}_{+}$) of the bundle $\operatorname{ev}^{\ast} TL$ over $U^{\rm reg}$; \item The $(\epsilon,(u_1,\zeta))$-point in the component of $I_{2\epsilon}-\{\infty\}$ to the left (right) of $\infty$ which is farthest from $\infty$ gives a $C^{1}$-smooth section $q^{\rm sing}_{-}$ ($q^{\rm sing}_{+}$) of the bundle $\operatorname{ev}^{\ast} TL$ over $U^{\rm sing}$. \end{itemize} We use these families to fix parametrizations for the $\mathcal{M}^{\ast}(\beta)$-component of elements in $\mathcal{N}$. If $\xi\in U^{\rm reg}\subset\mathcal{N}$, $\xi=((u_1,\zeta),u_2)$, then we let $u_2^{\rm reg}\colon H\to\mathbb{C}^{n}$ denote the representative of $u_2$ such that $u_2^{\rm reg}(\pm 1)=q^{\rm reg}_{\mp}$.
Similarly, if $\xi\in U^{\rm sing}\subset\mathcal{N}$, $\xi=((u_1,\zeta),u_2)$, then we let $u_2^{\rm sing}\colon H\to\mathbb{C}^{n}$ denote the representative of $u_2$ such that $u_2^{\rm sing}(\pm 1)=q^{\rm sing}_{\mp}$. In order to define parametrizations uniquely over all of $\mathcal{N}$, we interpolate between $u_{2}^{\rm reg}$ and $u_{2}^{\rm sing}$ in $U^{\rm reg}\cap U^{\rm sing}$. To this end, note that if $((u_1,\zeta),u_2)\in U^{\rm reg}\cap U^{\rm sing}$ then $u_2$ lies in one of the disjoint annular regions $V(u_2^{j};\delta_1)- V(u_2^{j};\delta_0)$, $j=1,\dots,m$. Let $\delta(u_2)\in(\delta_0,\delta_1)$ denote the distance between $u_2$ and $u_2^{j}$; this defines a smooth function \[ \delta\colon V(\delta_1)-V(\delta_0)\to (\delta_0,\delta_1). \] By Lemma \ref{Lem:interswnbhd}, if $((u_1,\zeta),u_2)\in \mathcal{N}$ and $u_2\in V(\delta_1)-V(\delta_0)$ then there are exactly 4 $(\epsilon,(u_1,\zeta))$-points in $I_{2\epsilon}$, each corresponding to a transverse intersection with $\partial B(u_1(\zeta);\epsilon)$. Furthermore, one component of $I_{2\epsilon}-\{\infty\}$ contains 1 such point and the other component contains 3. Using the parameterization $u^{\rm reg}_{2}\colon H\to\mathbb{C}^{n}$ we think of $I_{2\epsilon}$ as an interval in $\partial H$ containing $\infty$. \begin{enumerate} \item In the case when the component with 1 intersection lies in the negative direction from $\infty$, we get the following intersection points in $I_{2\epsilon}$ (listed in the order induced by the orientation of $I_{2\epsilon}$), together with $\infty$: \[ 1 \ , \ \infty \ , \ -1 \ , \ \zeta_1 \ , \ \zeta_2. \] Here $1$ corresponds to $q^{\rm reg}_{-}=q^{\rm sing}_{-}$, $-1$ to $q^{\rm reg}_{+}$, and $\zeta_2$ to $q^{\rm sing}_{+}$.
\item In the case when the component with 1 intersection lies in the positive direction from $\infty$, we get the following intersection points in $I_{2\epsilon}$ (listed in the order induced by the orientation of $I_{2\epsilon}$), together with $\infty$: \[ \zeta_1 \ , \ \zeta_2 \ , \ 1 \ , \ \infty \ , \ -1. \] Here $1$ corresponds to $q^{\rm reg}_{-}$, $\zeta_1$ to $q^{\rm sing}_{-}$, and $-1$ to $q^{\rm reg}_{+}=q^{\rm sing}_{+}$. \end{enumerate} Let $\alpha\colon [\delta_0,\delta_1]\to [0,1]$ be a smooth increasing function equal to $0$ near $\delta_0$ and equal to $1$ near $\delta_1$. For $\xi=((u_1,\zeta),u_2)\in U^{\rm reg}\cap U^{\rm sing}$, define the automorphism \[ \psi[u_2]\colon H \to H \] as follows: \begin{enumerate} \item In case (1) above, with 1 intersection in the negative direction from $\infty$, $\psi[u_2]$ satisfies \begin{equation}\label{Eq:interpol-} \psi[u_2](1)=1 \ , \ \psi[u_2](\infty)=\infty \ , \ \psi[u_2](-1)= \alpha(\delta(u_2))(-1) +(1-\alpha(\delta(u_2)))\zeta_2. \end{equation} \item In case (2) above, with 1 intersection in the positive direction from $\infty$, $\psi[u_2]$ satisfies \begin{equation}\label{Eq:interpol+} \psi[u_2](1)=\alpha(\delta(u_2))1+(1-\alpha(\delta(u_2)))\zeta_1 \ , \ \psi[u_2](\infty)=\infty \ , \ \psi[u_2](-1)= -1. \end{equation} \end{enumerate} Furthermore, define \[ u_{2}^{\rm st}\colon H\to\mathbb{C}^{n},\quad u_{2}^{\rm st}= u_2^{\rm reg}\circ \psi[u_2]. \] Then $u_{2}^{\rm st}= u_{2}^{\rm reg}$ if $\delta(u_2)$ lies in a neighborhood of $\delta_1$ and $u_{2}^{\rm st}=u_{2}^{\rm sing}$ if $\delta(u_2)$ lies in a neighborhood of $\delta_0$; we extend the family of parametrizations to all of $\mathcal{N}$ by declaring that if $\xi=((u_1,\zeta),u_2)\in U^{\rm reg}- U^{\rm sing}$ then $u_2^{\rm st}= u_2^{\rm reg}$ and if $\xi=((u_1,\zeta),u_2)\in U^{\rm sing}- U^{\rm reg}$ then $u_2^{\rm st}= u_2^{\rm sing}$. 
It is straightforward to compare the various parametrizations: if $u_2\in V(\delta_1)- V(\delta_0)$ then we have three parametrizations $u_2^{\rm st}$, $u_{2}^{\rm reg}$, and $u_2^{\rm sing}$. In the compact part of $H$, $H-E(R_0)$ for some large but fixed $R_0$, any two of these parametrizations differ by a diffeomorphism, whilst it follows from \eqref{Eq:movemarked1} and \eqref{Eq:movemarked2} that there are functions $k^{\rm reg}$, $h^{\rm reg}$, $k^{\rm sing}$, and $h^{\rm sing}$ such that for $z\in[R_0,\infty)\times[0,1]\approx E(R_0)$ \begin{align*} u_2^{\rm reg}(z) &=u_2^{\rm st}\left(z+k^{\rm reg}[u_2]+h^{\rm reg}[u_2](z)\right),\\ u_2^{\rm sing}(z) &=u_2^{\rm st}\left(z+k^{\rm sing}[u_2]+h^{\rm sing}[u_2](z)\right), \end{align*} where the constants $k^{\rm reg}[u_2]$ and $k^{\rm sing}[u_2]$ are $\mathcal{O}(1)$, i.e.~uniformly bounded in $U^{\rm reg}\cap U^{\rm sing}$, and where similarly $h^{\rm reg}[u_2](z)=\mathcal{O}(e^{-\pi z})$ and $h^{\rm sing}[u_2](z)=\mathcal{O}(e^{-\pi z})$. In complete analogy with the above we find an open cover $W^{\rm reg}\cup W^{\rm sing}$ of $\mathcal{T}$ that restricts to $U^{\rm reg}\cup U^{\rm sing}$ over the boundary $\mathcal{N}$. Here $W^{\rm sing}$ is a small neighborhood of $\mathcal{T}_{0}$ and $W^{\rm reg}$ is the complement of a slightly smaller such neighborhood. Exactly as for $U^{\rm reg}\cup U^{\rm sing}$ we use $W^{\rm reg}\cup W^{\rm sing}$ to produce stable parametrizations $u_2^{\rm st}$ for the holomorphic disk components of elements in the fibered product \[ \mathcal{D}\times_L\mathcal{M}^{\ast}(\beta), \] that agree with the stable parametrizations defined above along the boundary $\mathcal{N}=\partial \mathcal{T}$. \subsection{Marked points on disks at the boundary of $\mathcal{F}(0\beta)$} In order to control marked points under bubbling we will use auxiliary hypersurfaces that are Poincar{\'e} dual to the generator $\beta$ of $H_{1}(L)$. Consider a map $u_1\in\mathcal{F}(-\beta)$.
Then the $C^{1}$-map $\operatorname{ev}(u_1,\cdot)\colon\partial D\to L$ represents the homology class $-\beta$. Let $P$ be a hypersurface Poincar{\'e} dual to $\beta$. After small perturbation, $P$ is transverse to $\operatorname{ev}(u_1,\cdot)$ and the signed count of intersection points in $\operatorname{ev}(u_1,\cdot)^{-1}(P)$ equals $-1$. Noting that $P$ is orientable, let $P'$ be a small shift of $P$ along a normal vector field. For sufficiently small shift, $P'$ is also transverse to $\operatorname{ev}(u_1,\cdot)$. By compactness of $\mathcal{F}^{\ast}(-\beta)$ there is a finite collection of such hypersurfaces $P_{1},\dots,P_{m}$, with parallels $P_1',\dots, P_m'$, and an open cover $\mathcal{F}^{\ast}(-\beta)=U_1\cup U_2\cup\dots\cup U_m$ such that if $(u_1,\zeta)\in U_j$ then $\operatorname{ev}(u_1,\cdot)$ is transverse to $P_j\cup P_j'$ and $u_1(\zeta)\notin P_j\cup P_j'$. Let $\pi\colon\mathcal{F}(0\beta)\to[0,\infty)$ denote the natural projection and recall that $\pi(\mathcal{F}(0\beta))$ is contained in a compact subset. Consider a sequence of maps $u_j\colon (D,\partial D)\to(\mathbb{C}^{n},L)$, $j=1,2,\dots$ in $\mathcal{F}(0\beta)$. By the Arzela-Ascoli theorem, if $|du_j|$ is uniformly bounded, then $u_{j}$ has a convergent subsequence $u_{j_k}$ with limit $u$ which solves the Floer equation $(du+\gamma_R\otimes X_{H_R})^{0,1}=0$ for $R=\lim_k\pi(u_{j_k})$. Consequently, for any $M>0$ the open set \[ U_M=\{u\in\mathcal{F}(0\beta)\colon \sup_{z\in D}|du(z)|>M\} \] is a neighborhood of the Gromov-Floer boundary of $\mathcal{F}(0\beta)$. For $M>0$ and $u\in\mathcal{F}(0\beta)$, let $B^{(1)}(u,M)=|du|^{-1}([M,\infty))\subset D$ and let $\delta(u,M)=\operatorname{diam}(B^{(1)}(u,M))$. \begin{Lemma}\label{Lem:diamblowup} If $\epsilon_M=\sup_{u\in U_M}\delta(u,M)$ then $\epsilon_M\to 0$ as $M\to\infty$. \end{Lemma} \begin{proof} Assume that the statement is false. 
Then there are $\epsilon_0>0$ and a sequence of disks $u_l\in U_{M_l}\subset\mathcal{F}(0\beta)$ with $M_l\to\infty$ and $\delta(u_l,M_l)\ge\epsilon_0$; after passing to a subsequence, the derivatives blow up at two points at distance at least $\epsilon_0$ from each other, so that bubbles form at (at least) two distinct points. This contradicts Lemma \ref{Lem:bubbles}. \end{proof} It follows from Lemma \ref{Lem:diamblowup} that for any $\delta_0>0$ there exists $M>0$ such that for every $u\in U_M$, $\delta(u,M)<\delta_0$. Let $I_u\subset\partial D$ denote the smallest interval containing $\partial D\cap B^{(1)}(u,M)$ and let $\zeta_u$ denote the midpoint of $I_u$. \begin{Lemma}\label{Lem:blowuppointsinP} For all sufficiently large $M>0$, if $u\in U_M$ then for some $P_j$, $j\in\{1,\dots,m\}$, there are points $\zeta,\zeta'\in I_u$ such that $u(\zeta)\in P_j$ and $u(\zeta')\in P_j'$. \end{Lemma} \begin{proof} Assume that the lemma does not hold. Then there is a sequence of maps $u_l\in U_{M_l}$, $M_l\to\infty$ as $l\to\infty$, such that $u_l(I_{u_l})\cap P_j=\varnothing$ or $u_l(I_{u_l})\cap P_j'=\varnothing$ for all $j=1,\dots,m$. But, by Gromov-Floer compactness and Lemma \ref{Lem:bubbles}, $(u_l,\zeta_{u_l})$ has a subsequence that converges to some $(\hat u_1,\zeta)\in\mathcal{F}^{\ast}(-\beta)$ uniformly on compact subsets of $D-\{\zeta\}$. Now, $(\hat u_1,\zeta)\in U_j$ for some $j\in\{1,\dots,m\}$ and hence $\hat u_1|_{\partial D}$ intersects $P_j$ and $P_j'$ transversely with intersection number $-1$, and $\hat u_1(\zeta)\notin P_j\cup P_j'$. Since the intersection numbers of $u_l|_{\partial D}$ with $P_j$ and with $P_j'$ equal $0$ we find that $u_l(I_{u_l})\cap P_j\ne \varnothing$ and $u_l(I_{u_l})\cap P_j'\ne \varnothing$ for $l$ large enough and $u_l$ in the subsequence. This contradicts the assumption and the lemma follows. \end{proof} Let $H_{\delta}\subset H$ be a half-disk parameterizing a small neighborhood in $D$ around $\zeta_u$. Let $u\in U_M$, fix $P_j$ as in Lemma \ref{Lem:blowuppointsinP}, and let $\zeta',\zeta$ be (ordered) points in $\partial H_{\delta}$ such that $u(\zeta)\in P_j$ and $u(\zeta')\in P'_j$ (or vice versa).
Let $a=\frac{\zeta'-\zeta}{2}$ and $b=\frac{\zeta+\zeta'}{2}$; then the map $\psi\colon H\to H$, $\psi(z)=az+b$, satisfies $\psi(-1)=\zeta$ and $\psi(1)=\zeta'$. Lemma \ref{Lem:diamblowup} implies that $a,b\to 0$ as $M\to\infty$. Let $\Omega_\rho=\psi^{-1}(H_{\delta})\subset H$. \begin{Lemma}\label{Lem:limit1} For any sequence of maps $u_l\in U_{M_l}\subset \mathcal{F}(0\beta)$ with $M_l\to\infty$, there exists $P_j$ such that the sequence $u_{l}\circ\psi\colon\Omega_\rho\to\mathbb{C}^{n}$ has a subsequence that converges uniformly on compact subsets to a holomorphic disk $u\colon (H,\partial H)\to(\mathbb{C}^{n},L)$ with $u(1)\in P_j'$ and $u(-1)\in P_j$, which represents an element in $\mathcal{M}^{\ast}(\beta)$ with marked point at $\infty\in H$. \end{Lemma} \begin{proof} In case the derivative of $u_l\circ\psi$ is uniformly bounded we can extract a convergent subsequence, whose limit must be non-constant since the hypersurfaces $P_j$ and $P_j'$ are disjoint. The lemma then follows from Lemma \ref{Lem:bubbles}. Assume next that the derivative is not uniformly bounded. Then we have bubbling at at least one point $\xi\in\partial H$, and recover a non-constant holomorphic disk from there. Lemma \ref{Lem:bubbles} implies that this is the only bubble and hence $u_l\circ\psi$ must converge to a constant map on compact subsets of $H-\{\xi\}$. This however contradicts $P_j$ and $P_j'$ being disjoint, so we conclude that the derivative is uniformly bounded. \end{proof} \begin{Corollary}\label{Cor:limit2} For each $\delta_0>0$, $\epsilon_0>0$, and $\epsilon>0$, there exist $M>0$ and $\rho_0>0$ such that for every $u\in U_M\subset\mathcal{F}(0\beta)$ there exists $((u_1,\zeta),u_2)\in \mathcal{N}$ such that the following holds: \begin{enumerate} \item The $C^{1}$-distance between $u|_{D-B(\zeta;\delta_0)}$ and $u_1|_{D-B(\zeta;\delta_0)}$ is smaller than $\epsilon_0$, where $B(\zeta;\delta_0)$ denotes a $\delta_0$-neighborhood of $\zeta$ in $D$.
\item There exists $j$ such that $(u_1, \zeta) \in U_j$ and a parameterization $u_2\colon H\to\mathbb{C}^{n}$ with $u_2(1)\in P_j'$ and $u_2(-1)\in P_j$ such that $u\circ\psi$ is defined on $H_{\rho_0}$ and lies at $C^{1}$-distance at most $\epsilon_0$ from $u_2|_{H_{\rho_0}}$. Furthermore, if $((u_1,\zeta),u_2)\in U^{\rm reg}$ (resp.~$((u_1,\zeta),u_2)\in U^{\rm sing}$) then the two $(\epsilon,(u_1,\zeta))$-points corresponding to $q^{\rm reg}_{\pm}$ (resp.~to $q^{\rm sing}_{\pm}$) lie in $u_2(\partial H_{\rho_0})$. \end{enumerate} \end{Corollary} \begin{proof} Assuming that $(1)$ or the first part of $(2)$ does not hold we extract a sequence contradicting Lemma \ref{Lem:limit1}. For the last part of $(2)$, observe that when parametrized as above, the points where $u_2\colon H\to\mathbb{C}^{n}$ intersects $\partial B(u_1(\zeta);\epsilon)$ lie at uniformly finite distance from $\pm 1$, by compactness of $\operatorname{ev}^{-1}(u_1(\zeta))\subset\mathcal{M}^{\ast}(\beta)$. \end{proof} Corollary \ref{Cor:limit2} allows us to extract subsequences with bubbles that limit to maps of the form $u^{\rm st}_2$. More precisely, given $u$ as above, if $((u_1,\zeta),u_2)\in U^{\rm reg}$ (resp.~$((u_1,\zeta),u_2)\in U^{\rm sing}$) then Corollary \ref{Cor:limit2} enables us to find the transverse intersection points with $\partial B(u_1(\zeta);\epsilon)$ corresponding to $q^{\rm reg}_{\pm}$ (resp.~to $q^{\rm sing}_{\pm}$). This allows us to define maps $\phi^{\rm st}\colon H\to H$, of the form $z\mapsto az+b$, which take $\pm 1$ to the intersection points corresponding to $q^{\rm reg}_{\pm}$ if $((u_1,\zeta),u_2)\in U^{\rm reg} - U^{\rm sing}$, to the intersection points corresponding to $q^{\rm sing}_{\pm}$ if $((u_1,\zeta),u_2)\in U^{\rm sing}- U^{\rm reg}$, and to the interpolation between intersection points determined by $\delta(u_2)$ as in \eqref{Eq:interpol-} and \eqref{Eq:interpol+} if $((u_1,\zeta),u_2)\in U^{\rm reg}\cap U^{\rm sing}$.
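For the record, the affine reparametrizations used here are determined by their values at $\pm 1$: if $\zeta<\zeta'$ are the prescribed boundary points, then $\psi(z)=az+b$ with $a=\frac{\zeta'-\zeta}{2}$ and $b=\frac{\zeta+\zeta'}{2}$ satisfies
\[
\psi(-1)=b-a=\zeta,\qquad \psi(1)=b+a=\zeta',
\]
and $a>0$, so $\psi$ preserves the upper half plane $H$.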
\begin{Corollary}\label{Cor:limit3} Corollary \ref{Cor:limit2} holds with $(2)$ replaced by \begin{itemize} \item[$(2')$] The map $u\circ\phi^{\rm st}$ is defined on $H_{\rho_0}$ and lies at $C^{1}$-distance at most $\epsilon_0$ from $u_2^{\rm st}|_{H_{\rho_0}}$. \end{itemize} \end{Corollary} \begin{proof} Noting that the coefficients $a$ and $b$ in the maps $\psi$ and $\phi^{\rm st}$ differ by a uniformly bounded amount, this follows immediately from Corollary \ref{Cor:limit2}. \end{proof} \section{Gluing} \label{Sec:gluing2} In this section we define a Floer gluing map \[ \Phi\colon \mathcal{N}\times[0,\infty)=\mathcal{F}^{\ast}(-\beta)\times_{L}\mathcal{M}^{\ast}(\beta)\times[0,\infty)\to \mathcal{F}(0\beta) \] following Floer's classical gluing scheme, and show that this map gives a $C^{1}$-parametrization of a neighborhood of the Gromov-Floer boundary of $\mathcal{F}(0\beta)$. \subsection{The Floer-Picard lemma with parameters} Our main tool for gluing is the following result. Let $T$ and $M$ be finite dimensional smooth manifolds and let $\pi_{X}\colon X\to M\times T$ be a smooth bundle of Banach spaces over $M\times T$. Let $\pi_{B}\colon B\to T$ be a smooth bundle of Banach spaces over $T$. Let $f\colon X\to B$ be a smooth bundle map of bundles over $T$ and write $f_{t}\colon X_{t}\to B_{t}$ for the restriction of $f$ to the fiber over $t\in T$ (where the fiber $X_t$ is the bundle over $M$ with fiber $X_{(m,t)}$ at $m\in M$). If $m\in M$, then write $d_{m}f_{t}\colon X_{(m,t)}\to B_{t}$ for the differential of $f_{t}$ restricted to the vertical tangent space of $X_{t}$ at $0\in X_{(m,t)}$, where this vertical tangent space is identified with the fiber $X_{(m,t)}$ itself; similarly, the tangent spaces of $B_{t}$ are identified with $B_{t}$ using linear translations. Denote by $0_{B}$ the $0$-section in $B$, and by $D(0_X;\epsilon)$ an $\epsilon$-disk sub-bundle of $X$. 
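As a purely illustrative aside (a toy model of our own, not part of the argument), the Newton-type iteration underlying the lemma below can be run in the simplest finite-dimensional situation: take $M$ and $T$ to be points, $X=B=\mathbb{R}$, and $f(x)=f(0)+x+x^{2}$, so that $d_{0}f=\operatorname{id}$ has right inverse $Q=\operatorname{id}$ and the nonlinearity $N(x)=x^{2}$ satisfies the quadratic estimate with $C=1$. A hedged Python sketch:

```python
# Toy finite-dimensional model of the Newton/Picard iteration used in the
# Floer-Picard lemma: f(v) = f0 + v + v^2, with differential the identity
# at 0, right inverse Q = id, and nonlinearity N(v) = v^2 (so C = 1).
def newton_picard(f0, steps=60):
    v = 0.0                       # v_0 = 0, the approximate solution
    for _ in range(steps):
        v = v - (f0 + v + v * v)  # v_{j+1} = v_j - Q f(v_j)
    return v

# With |Q f(0)| = 0.1 <= 1/(8C) the iterates contract geometrically and
# the limit solves f(v) = 0, i.e. v^2 + v + 0.1 = 0.
root = newton_picard(0.1)
```

With $f(0)=0.1$ the limit is $v_\infty=\frac{-1+\sqrt{0.6}}{2}\approx-0.1127$, and $|v_\infty-v_0|\le\kappa\,|f(0)|$, in line with the estimate obtained in the proof.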
\begin{Lemma}\label{Lem:FloerPicard} Let $f\colon X\to B$ be a smooth Fredholm bundle map of $T$-bundles, with Taylor expansion in the fiber direction: \begin{equation}\label{e:TaylorFP} f_{t}(x)=f_{t}(0)+d_{m}f_{t}\,x+N_{(m,t)}(x),\quad\text{where }\; 0,x\in X_{(m,t)}. \end{equation} Assume that $d_{m}f_{t}$ is surjective for all $(m,t)\in M\times T$ and has a smooth family of uniformly bounded right inverses $Q_{(m,t)}\colon B_t\to X_{(m,t)}$, and that the non-linear term $N_{(m,t)}$ satisfies a quadratic estimate of the form \begin{equation}\label{e:QuadraticFP} \|N_{(m,t)}(x)-N_{(m,t)}(y)\|_{B_t}\le C\|x-y\|_{X_{(m,t)}}(\|x\|_{X_{(m,t)}}+\|y\|_{X_{(m,t)}}), \end{equation} for some constant $C>0$. Let $\ker(df_{t})\to M$ be the vector bundle with fiber over $m\in M$ equal to $\ker(d_mf_{t})$. If $\|Q_{(m,t)}f_{t}(0)\|_{X_{(m,t)}}\le\frac{1}{8C}$, then for $\epsilon<\frac{1}{4C}$, $f^{-1}(0_{B})\cap D(0_{X};\epsilon)$ is a smooth submanifold diffeomorphic to the bundle over $T$ with fiber at $t\in T$ the $\epsilon$-disk bundle in $\ker(df_{t})$. \end{Lemma} \begin{proof} The proof for the case when $M$ and $T$ are both points appears in Floer \cite{Floer:mem} and generalizes readily to the case under study here. We give a short sketch pointing out some features that will be used below. Let $K_{(m,t)}=\ker(d_mf_{t})$ and choose a smooth splitting $X_{(m,t)}=X_{(m,t)}'\oplus K_{(m,t)}$ with projection $p_{(m,t)}\colon X_{(m,t)}\to K_{(m,t)}$. For $k_{(m,t)}\in K_{(m,t)}$, define the bundle map $\widehat{f}_{(m,t)}\colon X_{(m,t)} \to B_{t}\oplus K_{(m,t)}$, \[ \widehat{f}_{(m,t)}(x)=\bigl(f_t(x)\;,\;p_{(m,t)}\,x-k_{(m,t)}\bigr). \] Then solutions to the equation $f_t(x)=0$, $x\in X_{(m,t)}$ with $p_{(m,t)}\,x=k_{(m,t)}$ are in one-to-one correspondence with solutions to the equation $\widehat{f}_{(m,t)}(x)=0$. 
Moreover, the differential $d\widehat{f}_{(m,t)}$ is an isomorphism with inverse $\widehat{Q}_{(m,t)}\colon B_t\oplus K_{(m,t)}\to X_{(m,t)}$, \[ \widehat{Q}_{(m,t)}\,(b,k)= Q_{(m,t)}\,b\; +\;k. \] On the other hand, solutions of the equation $\widehat{f}_{(m,t)}(x)=0$ are in one-to-one correspondence with fixed points of the map $F_{(m,t)}\colon X_{(m,t)}\to X_{(m,t)}$ given by \[ F_{(m,t)}(x)=x\;-\;\widehat{Q}_{(m,t)}\,\widehat{f}_{(m,t)}(x). \] Fixed points are obtained from the Newton iteration scheme: if \[ v_0=k_{(m,t)},\quad v_{j+1}=v_j\;-\;\widehat{Q}_{(m,t)}\,\widehat{f}_{(m,t)}(v_j), \] then $v_{j}$ converges to $v_\infty$ as $j\to\infty$ and $F_{(m,t)}(v_\infty)=v_{\infty}$. Furthermore, if $\|f_{t}(0_{X_{(m,t)}})\|$ is sufficiently small then there is $0<\delta<1$ such that \[ \|v_{j+1}-v_j\|\le \delta^{j}\|f_{t}(0)\| \] and consequently \begin{equation}\label{Eq:iterest} \|v_\infty-v_0\|\le \kappa \|f_{t}(0)\|, \end{equation} where $\kappa$ is a constant; indeed, summing the geometric series shows that one may take $\kappa=(1-\delta)^{-1}$. \end{proof} \subsection{Pre-gluing maps of disks} \label{sec:pregluemaps} Recall from Section \ref{sec:cohertriv} that we associated to a pair of punctured disks $H_j$, $j=1,2$, and $\rho\in[1,\infty)$ a pre-glued disk $H_{1}\#_{\rho} H_{2}$, and that weight functions $\mathbf{e}_{\delta}$ on $H_j$ induced a weight function on $H_{1}\#_{\rho} H_{2}$. For simpler notation below we will write $D_{\rho}$ for $H_{1}\#_\rho H_{2}$. For $((u_{1},\zeta),u_{2})\in\mathcal{N}$ and $\rho\in[\rho_0,\infty)$ define the map \[ w_{\rho}=u_1\#_\rho u_2\colon D_{\rho}\to\mathbb{C}^{n} \] as follows. For $\rho_0$ large enough, $u_1$ (resp.~$u_2^{\rm st}$) maps the region $(-\infty,-\rho_0]\times[0,1]$ (resp.~$[\rho_0,\infty)\times[0,1]$) into the holomorphic $(\mathbb{C}^{n},\mathbb{R}^{n})$-coordinate chart around $u_1(\zeta)\in L$. 
Define \[ u_1\#_\rho u_2(z)= \begin{cases} u_1(z) &\text{for }z\in H_1-\bigl((-\infty,-\rho+1)\times[0,1]\bigr) ,\\ u_2^{\rm st}(z) &\text{for }z\in H_2-\bigl((\rho+1,\infty)\times[0,1]\bigr),\\ (1-\alpha(z))u_1(z)+\alpha(z)u_2^{\rm st}(z) &\text{for }z\in [-1,1]\times[0,1], \end{cases} \] where all domains are subsets of $D_{\rho}=H_1\#_\rho H_2$. Here the domains of the first and second maps are clear. The third domain should be understood as \[ [-1,1]\times[0,1]\subset [-\rho,\rho]\times[0,1]\subset D_\rho, \] see \eqref{Eq:midstrip}, and $\alpha$ in the definition of the map is a smooth cut-off function equal to $0$ near $\tau=-1$, equal to $1$ near $\tau=1$, and real-valued and holomorphic on the boundary $[-1,1]\times\{0,1\}= (\partial D_{\rho})\cap [-1,1]\times[0,1]$. \subsection{Adapted configuration spaces} A pre-glued domain $D_{\rho}$ contains a middle strip $[-\rho,\rho]\times[0,1]\subset D_{\rho}$, see \eqref{Eq:midstrip}. Write $0\in D_{\rho}$ for the point $(0,0)\in[-\rho,\rho]\times[0,1]$. Let $\widehat{\mathcal{X}}$ denote the bundle over $L \times [\rho_0,\infty)$ the fiber of which over $(q,\rho)$ is the subset \[ \widehat{\mathcal{X}}(q,\rho)\subset\mathcal{H}^{2}_{\delta}(D_{\rho},\mathbb{C}^{n}) \] of functions $u\colon D_{\rho}\to\mathbb{C}^{n}$ that satisfy the following: \begin{enumerate} \item $u$ takes the boundary $\partial D_{\rho}$ to $L$, the restriction of $(du)^{0,1}$ to the boundary (the trace) vanishes, and $u|_{\partial D_{\rho}}$ represents the zero homology class. \item $u(0)=q\in L$. 
\end{enumerate} We view $\widehat{\mathcal{X}}$ as a bundle over $L\times[\rho_0,\infty)$ with local trivializations in the $L$-directions given by composition with cut-off translation diffeomorphisms, as in \eqref{Eq:trivviadiffeo}, and with trivializations in the $[\rho_0,\infty)$-direction given by pre-composition with diffeomorphisms $\psi\colon D_{\rho}\to D_{\rho'}$ such that $\psi(0)=0$, $\psi=\mathrm{id}$ outside the middle strip, and inside the middle strip $\psi$ is close to a diffeomorphism that stretches or shrinks the middle strip $[-\rho,\rho]\times[0,1]$ in the first coordinate and which is holomorphic along the boundary. Each fiber $\widehat{\mathcal{X}}_{\rho}$ of $\widehat{\mathcal{X}}\to[\rho_{0},\infty)$ fibers over $L$. The vertical tangent bundle of $\widehat{\mathcal{X}}\to[\rho_0,\infty)$ can be described as the bundle with fiber over $u\in\widehat{\mathcal{X}}_{\rho}$ equal to the direct sum \[ T_{u}\widehat{\mathcal{X}}_{\rho}=\mathcal{E}(u)\oplus V_{\rm sol}(u). \] Here $\mathcal{E}(u)\subset\mathcal{H}^{2}_{\delta}(D_{\rho})$ is the subspace of vector fields $v$ along $u$ that satisfy \begin{enumerate} \item $v(z)\in T_{u(z)}L$ for $z\in\partial D_{\rho}$ and $(\nabla v)^{0,1}|_{\partial D_{\rho}}=0$ (where $\nabla$ is the connection associated to the metric $\hat g$, see Section \ref{sec:Geomprel}), \item $v(0,0)=0$, \end{enumerate} and $V_{\rm sol}(u) \cong \mathbb{R}^n$ is the $n$-dimensional space of cut-off constant solutions coming from the values of the holomorphic coordinates at $u(0)$ (i.e. the space arising from linearizations of translation diffeomorphisms). 
There is an exponential map $\exp_{u}\colon\mathcal{E}(u)\oplus V_{\rm sol}(u)\to\widehat{\mathcal{X}}$ which can be written \[ \exp_{u}(v,c)=\Phi_{c}(\operatorname{Exp}_{u}(v)), \] where $\Phi_{c}\colon \mathbb{C}^{n}\to\mathbb{C}^{n}$ is the translation diffeomorphism associated to $c\in\mathbb{R}^{n}$ and where $\operatorname{Exp}$ is defined through the metric $\hat g$. The maps $\exp_{u}$ give $C^{1}$-charts of $\widehat{\mathcal{X}}_{\rho}$. \begin{Remark} These charts vary in a $C^{1}$-smooth way with $\rho\in[\rho_0,\infty)$. Indeed, the local trivialization over the base is simply precomposition with a diffeomorphism $\psi\colon D_{\rho}\to D_{\rho'}$ that is holomorphic along the boundary, and the exponential map above is equivariant under precomposition with $\psi$: \[ \exp_{u\circ\psi}(v\circ\psi, c)=\left(\exp_{u}(v,c)\right)\circ\psi. \] \end{Remark} We will use $\widehat{\mathcal{X}}$ as the source space for the Floer operator. We denote the naturally corresponding target space $\widehat{\mathcal{Y}}$. This space is a locally trivial bundle over $[\rho_0,\infty)$ with fiber at $\rho$ equal to \[ \dot\mathcal{H}^{1}_{\delta}(D_{\rho},\mathrm{Hom}^{0,1}(TD_{\rho},\mathbb{C}^{n})), \] the subset of elements $A\in\mathcal{H}^{1}_{\delta}(D_{\rho},\mathrm{Hom}^{0,1}(TD_{\rho},\mathbb{C}^{n}))$ with $A|_{\partial D_{\rho}}=0$, and with local trivializations given by precomposition with the differentials $d\psi$ of the diffeomorphisms $\psi\colon D_{\rho}\to D_{\rho'}$ described above. 
\subsection{Pre-gluing as a map and the Floer operator} Consider the product $\widehat{\mathcal{X}}\times[0,\infty)$ and define the map \[ \operatorname{PG}\colon\mathcal{N}\times[\rho_0,\infty)\to\widehat{\mathcal{X}}\times[0,\infty) \] as follows (recall that $\mathcal{N}=\mathcal{F}^{\ast}(-\beta)\times_L\mathcal{M}^{\ast}(\beta)$): \[ \operatorname{PG}(\xi,\rho)=(u_1\#_{\rho} u_{2},r(u_1)), \] where $\xi=((u_1,\zeta),u_2)$ and $r(u_1)$ is the Hamiltonian coordinate of $u_1$, i.e.~$u_1$ solves the Floer equation $(du_1+\gamma_{r(u_1)}\otimes X_H)^{0,1}=0$. By compactness of the moduli spaces involved we find that if $\rho_{0}$ is sufficiently large then $\operatorname{PG}$ is a fiber-preserving embedding (as a map of bundles over $[\rho_0,\infty)$). More precisely, the restriction \[ \operatorname{PG}_{\rho}=\operatorname{PG}|_{\mathcal{N}\times\{\rho\}}\colon \mathcal{N}\to\widehat{\mathcal{X}}_{\rho} \] is a family of embeddings which depends smoothly on $\rho$. Consider the normal bundle $N\mathcal{N}_{\rho}$ of $\mathcal{N}_{\rho}=\operatorname{PG}_{\rho}(\mathcal{N})\subset\widehat{\mathcal{X}}_{\rho}$ as a sub-bundle of the restriction $T_{\mathcal{N}_{\rho}}\widehat{\mathcal{X}}_{\rho}$ of the tangent bundle of $\widehat{\mathcal{X}}_{\rho}$ to $\mathcal{N}_{\rho}$. The fiber $N_{w_{\rho}}\mathcal{N}_{\rho}$ of this normal bundle at $w_{\rho}=u_{1}\#_{\rho} u_{2}\in\widehat{\mathcal{X}}_{\rho}$ is the $L^{2}$ complement, see Remark \ref{Rem:L2pairing} below, of the subspace $d\operatorname{PG}(T_{\xi}\mathcal{N})\subset T_{w_\rho}\widehat{\mathcal{X}}_{\rho}$, where $\xi=((u_1,\zeta),u_2)\in\mathcal{N}$. 
By transversality and a standard linear gluing result, see Lemma \ref{Lem:lingluetriv} below, $d\operatorname{PG}(T_{\xi}\mathcal{N})= T_{w_{\rho}}\mathcal{N}_{\rho}$ is given by \[ T_{w_{\rho}}\mathcal{N}_{\rho}=\ker(u_1)'\times_{T_{u_1(\zeta)}L}\ker(u_2)', \] where $\ker(u_i)'$ is spanned by cut-off versions of vector fields of the form $v'+c'$ where $v+c$ lies in the kernel of the linearized operator at $u_i$, with $v'$ a cut-off version of $v$ and $c'$ the element in $V_{\rm sol}(u_1\#_{\rho} u_2)$ with the same constant value as $c\in V_{\rm sol}(u_i)$. An element in the fibered product is of the form $v_1'+c'+v_2'$ where $(v_1',c')\in\ker(u_1)'$ and $(v_2', c')\in\ker(u_2)'.$ The bundles $T_{\mathcal{N}}\widehat{\mathcal{X}}\to[\rho_0,\infty)$ and $N\mathcal{N}\to [\rho_0,\infty)$ are locally trivial, with local trivializations given by precomposition with the diffeomorphisms $\psi\colon D_{\rho}\to D_{\rho'}$, in the latter case followed by $L^{2}$-projection to the normal bundle. \begin{Remark}\label{Rem:L2pairing} The $L^{2}$ pairing on $T_{u}\widehat{\mathcal{X}}_{\rho}$ is to be understood as follows. Recall that \[ T_{u}\widehat{\mathcal{X}}_{\rho}=\mathcal{E}(u)\oplus V_{\rm sol}(u). \] We define \[ \langle (v,c) \, , \, (\tilde v, \tilde c)\rangle = \langle v \, ,\, \tilde v\rangle_{\mathcal{H}^{2}_{\delta}} + \langle c \, , \, \tilde c\rangle_{\mathbb{R}^{n}}, \] where the first summand is the pairing on $\mathcal{E}(u)\subset\mathcal{H}^{2}_{\delta}(D_{\rho};\mathbb{C}^{n})$ induced by the weighted $L^{2}$-pairing on the ambient space and the second is the inner product of the values of the cut-off solutions at $0$ in $T_{u(0)}L$. 
Furthermore on the space \[ T_{w_{\rho}}\widehat{\mathcal{X}}_{\rho}=\mathcal{E}(w_{\rho})\oplus V_{\rm sol}(w_{\rho}) \] we will use the norm \[ \|(v,c)\|_{T_{\mathcal{N}_{\rho}}\widehat{\mathcal{X}}_{\rho}}=\|v\|_{2,\delta}+ \|c\|_{T_{w_{\rho}(0)}L}, \] where $\|\cdot\|_{2,\delta}$ is the weighted Sobolev $2$-norm on $\mathcal{H}^{2}_{\delta}(D_{\rho},\mathbb{C}^{n})$ and where $\|\cdot\|_{T_qL}$ is the norm on the tangent space of $L$ at $q$ induced by the Riemannian metric. We will write $\|\cdot\|_{N\mathcal{N}_{\rho}}$ for the restriction of this norm to the sub-bundle $N\mathcal{N}_{\rho}$. \end{Remark} The Floer equation now gives a smooth bundle map $f\colon T_{\mathcal{N}}\widehat{\mathcal{X}}\to\widehat{\mathcal{Y}}$, where $f_{\rho}\colon T_{\mathcal{N}_{\rho}}\widehat{\mathcal{X}}_{\rho}\to\widehat{\mathcal{Y}}_{\rho}$ is defined as follows. If $w_{\rho}= u_{1}\#_{\rho} u_{2}\in \mathcal{N}_{\rho}$ then for $(v,c)\in T_{w_{\rho}}\widehat{\mathcal{X}}_{\rho}=\mathcal{E}(w_{\rho})\oplus V_{\rm sol}(w_{\rho})$: \begin{equation}\label{Eq:Floermaponglued} f_{\rho}(v,c)=\left( d \exp_{w_{\rho}}(v,c) + \gamma_{r(u_1)}\otimes X_{H}(\exp_{w_{\rho}}(v,c)) \right)^{0,1}. \end{equation} Below we will keep the notation $f$ for the restriction $f|_{N\mathcal{N}}$. \subsection{The gluing map} To construct the gluing map \[ \Phi\colon \mathcal{N}\times[\rho_0,\infty)\to \mathcal{F}(0\beta) \] which parametrizes a $C^{1}$-neighborhood of the Gromov-Floer boundary we will apply Lemma \ref{Lem:FloerPicard} to the map $f\colon N\mathcal{N}\to\widehat{\mathcal{Y}}$ defined in \eqref{Eq:Floermaponglued}. 
In the notation of Lemma \ref{Lem:FloerPicard}: \begin{itemize} \item $[\rho_0,\infty)$ corresponds to $T$, $\mathcal{N}$ corresponds to $M$; \item $N\mathcal{N}$ corresponds to $X$, $\widehat{\mathcal{Y}}$ corresponds to $B$; \item we use the norm $\|\cdot\|_{N\mathcal{N}_{\rho}}$ from Remark \ref{Rem:L2pairing} and the norm $\|\cdot\|_{\widehat{\mathcal{Y}}_{\rho}}$ induced from the weighted Sobolev $1$-norm on $\mathcal{H}^{1}_{\delta}(D_{\rho}, \mathrm{Hom}^{0,1}(TD_{\rho},\mathbb{C}^{n}))$. \end{itemize} \begin{Lemma}\label{Lem:almostholomorphic} There exists $\gamma>0$ such that for any $\xi=((u_1,\zeta),u_2)\in \mathcal{N}$, if $w_\rho=u_{1}\#_{\rho} u_{2}$ then \[ \|f_{\rho}(w_{\rho})\|_{\widehat{\mathcal{Y}}_{\rho}}=\mathcal{O}(e^{-\gamma\rho}). \] \end{Lemma} \begin{proof} Write $u_2=u_2^{\rm st}$. Outside the interpolation region $w_{\rho}$ agrees with $u_1$ or $u_2$ and $f_{\rho}(w_{\rho})$ vanishes there, so it suffices to estimate the contribution of the interpolation region. By Fourier expansion near $\infty\in H$, $|u_j^{(k)}(z)|=\mathcal{O}(e^{-(\gamma+\delta)\rho})$, $j=1,2$, $k\le 2$, in the interpolation region, where the weight function is bounded by $e^{\delta\rho}$. Multiplying these pointwise bounds by the weight gives the stated estimate. \end{proof} If $\xi=((u_1,\zeta),u_2)\in\mathcal{N}$, let $w_{\rho}=u_{1}\#_{\rho} u_{2}$. For the next result, in the correspondence with Lemma \ref{Lem:FloerPicard}, the fiber $N_{w_{\rho}}\mathcal{N}_{\rho}$ of the normal bundle $N\mathcal{N}_{\rho}$ at $w_{\rho}$ corresponds to $X_{(m,t)}$ and $w_{\rho}$ corresponds to $0\in X_{(m,t)}$. \begin{Lemma}\label{Lem:partialglu} For $\rho_0$ sufficiently large, the vertical differential \[ d_{w_{\rho}}f_{\rho}\colon N_{w_{\rho}}\mathcal{N}_{\rho}\to \widehat{\mathcal{Y}}_{\rho} \] is surjective and admits a uniformly bounded right inverse. Moreover, the quadratic estimate \eqref{e:QuadraticFP} for the non-linear term in the Taylor expansion of $f_{\rho}$ holds. \end{Lemma} \begin{proof} The quadratic estimate follows from \cite[Proof of Proposition 4.6]{EES}. 
In order to see that the differential is surjective and admits a uniformly bounded right inverse, we first note that the linearization \[ df_{\rho}\colon T_{w_{\rho}}\widehat{\mathcal{X}}_{\rho}\to \widehat{\mathcal{Y}}_{\rho} \] at $w_{\rho}$ is a Fredholm operator of index $n$. The subspace $T_{w_{\rho}}\mathcal{N}_{\rho}\subset T_{w_{\rho}}\widehat{\mathcal{X}}_{\rho}=\mathcal{E}(w_{\rho})\oplus V_{\rm sol}(w_{\rho})$ is also of dimension $n$ and we get a bounded right inverse as desired provided we prove an estimate of the form \[ \|(v,c)\|_{T_{w_{\rho}}\widehat{\mathcal{X}}_{\rho}}\le C\|df_{\rho}(v,c)\|_{\widehat{\mathcal{Y}}_{\rho}} \] on the $L^{2}$-complement of $T_{w_{\rho}}\mathcal{N}_{\rho}$. This follows from standard arguments, cf. Lemma \ref{Lem:lingluetriv}. \end{proof} Lemmas \ref{Lem:almostholomorphic} and \ref{Lem:partialglu} have the following consequence: \begin{Corollary} \label{Cor:C1Structure} The Newton iteration map with initial values in $\mathcal{N}_{\rho}=\operatorname{PG}(\mathcal{N}\times\{\rho\})\subset N\mathcal{N}_{\rho}$ gives a $C^{1}$ diffeomorphism \[ \Phi\colon \mathcal{F}^{\ast}(-\beta)\times_{L}\mathcal{M}^{\ast}(\beta)\times[\rho_0,\infty)\to f^{-1}(0), \] where $0$ denotes the $0$-section in $\widehat{\mathcal{Y}}$, such that $\Phi((u_1,\zeta),u_2,\rho)$ limits to a broken curve with components $(u_1,\zeta)\in\mathcal{F}^{\ast}(-\beta)$ and $u_2\in\mathcal{M}^{\ast}(\beta)$ as $\rho\to\infty$. \end{Corollary} \begin{proof} Immediate from Lemma \ref{Lem:FloerPicard}. \end{proof} \subsection{Parameterizing a neighborhood of the Gromov-Floer boundary} In this section we show that $\exp(f^{-1}(0))\subset\widehat{\mathcal{X}}$ gives a $C^{1}$-collar neighborhood $\approx \mathcal{N}\times[0,\infty)$ of the Gromov-Floer boundary of $\mathcal{F}(0\beta)$. There are two main points remaining. 
First, Corollary \ref{Cor:C1Structure} gives a $C^{1}$ $1$-parameter family, with parameter $\rho\in [\rho_0,\infty)$, of $\mathcal{N}$-families of solutions in $\mathcal{F}(0\beta)$ without giving much explicit information on how the solutions depend on $\rho$, and we must extract such information. Second, we need to establish the surjectivity of our construction, i.e.~show that all disks near the boundary are in the image of the gluing map. Fix $\rho\in [\rho_0,\infty)$, and let $f^{-1}(0)_{\rho}=f^{-1}(0)\cap N\mathcal{N}_{\rho}$. Consider the reparametrization map \[ \Psi_{\rho}\colon f^{-1}(0)_{\rho}\to\mathcal{F}(0\beta), \] defined as follows. Think of a fixed neighborhood of the marked point $\zeta\in\partial D$ as a half disk $H_{\delta}\subset H$ around $0$. Furthermore, think of $(-\infty,-\rho)\times[0,1]\subset H_{1}$ as the half disk $H_{e^{-\pi\rho}}$ and of $H_{1;\rho}$ as $H_{e^{\pi\rho}}$. Hence, in terms of charts, we write the domain $D$ as $H_{1;\rho}$ with $H_{2;\rho}$ attached by the map which is scaling by $e^{-2\pi\rho}$. An element $u\in N\mathcal{N}_{\rho}$ gives a map $\Psi_{\rho}(u)\colon D\to\mathbb{C}^{n}$, which is the reparametrization above of the map $\exp(u)$, which was originally defined on $D_{\rho}$. The map $\Psi_{\rho}(u)$ solves the Floer equation if and only if $f(u)=0$. Since $\Psi_{\rho}$ is the gluing map, which is a diffeomorphism, followed by a bounded smooth reparametrization, it is clear that $\Psi_{\rho}$ is an embedding. Let $\Psi\colon f^{-1}(0)\to \mathcal{F}(0\beta)$ be the map which equals $\Psi_{\rho}$ on $f^{-1}(0)_{\rho}$. Note that as $\rho\to\infty$, \[ \inf_{u\in f^{-1}(0)_{\rho}}\{ \sup_{D}|d\Psi(u)|\}\to\infty. \] \begin{Lemma}\label{Lem:gluemb} For $\rho$ sufficiently large, the map $\Psi$ is a $C^{1}$ embedding. 
\end{Lemma} \begin{proof} The distance functions in $\mathcal{F}(0\beta)$ and $\mathcal{N}$ are induced from configuration spaces which are Banach manifolds with norms that control the $C^{0}$-norm. Since all norms are equivalent to the $C^{0}$-norm for (Floer) holomorphic maps, we may think of all distances $\mathrm{d}(\cdot,\cdot)$ in the calculations below as $C^{0}$-norms. Equation \eqref{Eq:iterest} and our rescaling imply that \begin{equation} \label{Eq:DerivativeBound} \left|\frac{\partial(\Psi_{\rho}(\xi))}{\partial\rho}\right|_{C^{0}}\ge C e^{\pi\rho} \end{equation} for some constant $C>0$. Assume now that there is a sequence of pairs $(\xi,\rho)\ne (\xi',\rho')$ such that \[ \mathrm{d}(\Psi_{\rho}(\xi),\Psi_{\rho'}(\xi'))=\mathcal{O}(\mathrm{d}(\xi,\xi')). \] Then it would follow that \begin{equation} \label{Eq:InjectiveGlue} \mathrm{d}(\Psi_{\rho}(\xi),\Psi_{\rho'}(\xi))\le \mathrm{d}(\Psi_{\rho'}(\xi),\Psi_{\rho'}(\xi'))+\mathcal{O}(\mathrm{d}(\xi,\xi'))=\mathcal{O}(\mathrm{d}(\xi,\xi')). \end{equation} However, taking $\epsilon_0>0$ small and $\mathrm{d}(\xi',\xi)\le \epsilon_0$, \eqref{Eq:InjectiveGlue} contradicts the derivative bound \eqref{Eq:DerivativeBound} once $\rho, \rho'$ are sufficiently large. We conclude that for small enough $\epsilon_0$, the map is a $C^{1}$ embedding in an $\epsilon_0$-neighborhood of any point. It follows that the map is also a global embedding: since $\Psi$ approaches the pregluing map as $\rho\to \infty$, there is $\rho_0>0$ so that for $\rho, \rho' > \rho_0$, if $\mathrm{d}(\xi,\xi')>\epsilon_0$ then $\mathrm{d}(\Psi_{\rho}(\xi),\Psi_{\rho'}(\xi'))\ge \frac12\epsilon_0$. \end{proof} We are now ready to state our main gluing result: \begin{Theorem}\label{Thm:gluing} For $\rho_0$ sufficiently large, the map $\Psi\colon \mathcal{N}\times[\rho_0,\infty) \rightarrow \mathcal{F}(0\beta)$ is a $C^{1}$ embedding onto a neighborhood of the Gromov-Floer boundary. 
\end{Theorem} \begin{proof} Take $\rho_0$ sufficiently large that the conclusion of Lemma \ref{Lem:gluemb} holds. It only then remains to show that (perhaps for some larger $\rho_0$) the map $\Psi$ is surjective onto a neighborhood of the Gromov-Floer boundary of $\mathcal{F}(0\beta)$. A Gromov-Floer convergent sequence $w_{j}$ with non-trivial bubbling converges uniformly on compact subsets. We need to show that $w_j$ eventually lies in the image under the exponential map of a small neighborhood of $\mathcal{N} \subset N\mathcal{N}$ where the Newton iteration map is defined. To see this, we note that by Corollary \ref{Cor:limit3}, for any fixed $\rho_0$, the maps $w_j$ $C^{1}$-converge on the two ends of the strip region $[-\rho+\rho_0,\rho-\rho_0]\times[0,1]$ to $u_1$ and $u_2^{\rm st}$, respectively. On the remaining growing strip we have a holomorphic map which converges to a constant (since there cannot be further bubbling). Because of the weight in the Sobolev norm in pre-glued domains, knowing that the limiting holomorphic map on the strip region is constant is not quite sufficient. However, by action considerations, if $\rho > \rho_0 \gg 0$ is large enough, then both end-segments of $[-\rho+\rho_0,\rho-\rho_0]\times[0,1]$ must map into the same holomorphic coordinate chart. Then the whole strip must map into this chart as well, again for action reasons. Thus, on this region, the map is holomorphic and maps into $(\mathbb{C}^{n},\mathbb{R}^{n})$ with standard holomorphic coordinates. It then has a Fourier expansion \[ z\mapsto c_0+\sum_{n<0} c_n e^{n\pi z}. \] As in the proof of \cite[Theorem 1.3]{Ekholmtrees}, the $C^{0}$-norm near the ends of the strip controls the weighted norm and the shift, i.e. the norm in $\mathcal{E}(w_{\rho})\oplus V_{\rm sol}(w_{\rho})$, where $w_{\rho}=u_{1}\#_{\rho} u_2^{\rm st}$. 
\end{proof} \section{The index bundle and coherent trivializations} \label{Sec:IndexBundle} In this section we study the index bundle over the configuration space $\mathcal{X}_{\rm sm}$ of smooth maps $(D^2, S^1) \rightarrow (\mathbb{C}^{2k}, W')$ with Lagrangian boundary condition given by the Lagrangian $W' \approx S^{1}\times S^{2k-1} \subset \mathbb{C}^{2k}$ that results from Lagrange surgery on the Whitney sphere. In particular, we establish stable triviality of the index bundle and existence of coherent trivializations. Recall that Lemma \ref{Lem:GaussInterpolate} shows that the corresponding results then hold for $\mathcal{X}_{\rm sm}$ with Lagrangian boundary conditions in a general $L\subset\mathbb{C}^{2k}$ constructed from Lagrange surgery on an immersed homotopy sphere with one double point. The starting point of our analysis is the map $\psi\colon\mathcal{X}_{\rm sm}\to \mathcal{L} W'$ that takes a disk $u\colon (D,\partial D)\to (\mathbb{C}^{2k}, W')$ to its restriction to the boundary $u|_{\partial D}$, which is an element in the free loop space $\mathcal{L} W'$ of $W'$. Since the fiber of this map is contractible by linear homotopies, $\psi$ is a homotopy equivalence. Hence it suffices to study the index bundle over $\mathcal{L} W'=\mathcal{L}(S^{1}\times S^{2k-1})$, where the operator at a loop $\gamma$ is the $\bar\partial$-operator with Lagrangian boundary conditions given by the tangent planes of $W'$ along $\gamma$. We write $\mathcal{L}_{j}(S^{1}\times S^{2k-1})$ for the subset of loops in homology class $j\in\mathbb{Z}=H_{1}(S^{1}\times S^{2k-1})$. \subsection{A $(6k-7)$-skeleton for $\mathcal{L}(S^{1}\times S^{2k-1})$} \label{sec:skeleton} We use the decomposition $\mathcal{L}(S^{1}\times S^{2k-1})=\mathcal{L} S^{1}\times \mathcal{L} S^{2k-1}$ and consider the factors separately. 
The space $\mathcal{L} S^{1}$ is homotopy equivalent to $S^{1}\times \mathbb{Z}$, where $S^{1}\times\{j\}$ consists of the geodesics that traverse $S^{1}$ $j$ times and where the $S^{1}$-coordinate of such a geodesic corresponds to its starting point. (For future reference, we remark that the choice of base-loop in each homotopy class is inessential.) We thus have \[ \mathcal{L} S^{1} \ \simeq \ \bigcup_{j\in\mathbb{Z}} S^{1}_j \] where the subscript denotes the homotopy class. In order to find a skeleton for $\mathcal{L} S^{2k-1}$ we consider the unit tangent bundle \[ \pi\colon U S^{2k-1} \longrightarrow S^{2k-1}. \] Write \[ \kappa\colon Q\longrightarrow US^{2k-1} \] for the vertical tangent bundle of this bundle, with fiber $Q_v=\kappa^{-1}(v)$ at $v\in U S^{2k-1}$ equal to $\ker(d\pi_v)$. We will think of $Q$ geometrically as follows: if $x\in S^{2k-1}\subset\mathbb{R}^{2k}$ is a unit vector in $\mathbb{R}^{2k}$, $|x|=1$, then $U_xS^{2k-1}$ is the $(2k-2)$-sphere of unit vectors $v\in\mathbb{R}^{2k}$ perpendicular to $x$: \[ U_x S^{2k-1}=\{v\in\mathbb{R}^{2k}\colon |v|=1,\; \langle v, x\rangle=0\}, \] and the fiber $Q_{v}$ is the tangent space to $U_{x}S^{2k-1}$ at $v$, which is the $(2k-2)$-space of vectors $q\in\mathbb{R}^{2k}$ that are perpendicular to both $x$ and $v$: \[ Q_v=\{q\in\mathbb{R}^{2k}\colon \langle q, x\rangle=0,\; \langle q,v\rangle=0\}. \] Finally, let $\widehat{Q}\to S^{2k-1}$ denote the fiberwise Thom space of $Q$ with fiber over $x\in S^{2k-1}$ the Thom space $MT(U_xS^{2k-1})$, which is the one-point compactification of $T(U_x S^{2k-1})$. We write $*_x$ for the point at infinity in $MT(U_x S^{2k-1})$. We next define an embedding $\Phi\colon\widehat{Q}\to\mathcal{L} S^{2k-1}$ which, as we shall see, gives a $(6k-7)$-skeleton for $\mathcal{L} S^{2k-1}$. Here we think of $\widehat{Q}_x$ as the unit disk bundle in the tangent bundle of $U_xS^{2k-1}$ with all points on the boundary collapsed to the single point $*_x$. 
We define the map using the geometric interpretation of $Q$ above, as follows. \begin{enumerate} \item For $0\in Q_v$, define $\Phi[0]$ to be the great circle through $x=\pi(v)$ with tangent vector $v$: \[ \Phi[0](t)= (\cos t)x + (\sin t)v, \quad 0\le t\le 2\pi. \] \item For $q\in Q_v$ with $0<|q|\le\frac12$, define $\Phi[q]$ to be the piecewise smooth curve that consists of two great half-circles in the 2-hemisphere determined by $\Phi[0]$ and $q$ meeting at $\pm x$ at an angle $2\pi|q|$: \[ \Phi[q](t)= \begin{cases} (\cos t)x + (\sin t)v, &\text{ for }0\le t\le \pi,\\ (\cos t)x + (\sin t) \left((\cos 2\pi |q|) v + (\sin 2\pi |q|)|q|^{-1}q\right) &\text{ for }\pi\le t\le 2\pi. \end{cases} \] Note that $\Phi[q]=\Phi[q']$ for any $q,q'$ with $|q|=|q'|=\frac12$. \item For $q\in Q_v$ with $\frac12\le |q|\le 1$, define $\Phi[q]$ to be a shortening of the curve $\Phi[q_0]$ for $|q_0|=\frac12$: \[ \Phi[q](t) =\begin{cases} (\cos 2(1-|q|)t)x + (\sin 2(1-|q|)t)v, &\text{ for }0\le t\le \pi,\\ (\cos 2(1-|q|)(2\pi-t))x + (\sin 2(1-|q|)(2\pi-t))v, &\text{ for }\pi\le t\le 2\pi. \end{cases} \] \item Define $\Phi[*_x]$ as the constant curve at $x$. \end{enumerate} \begin{Lemma}\label{Lem:skeleton} The image $\Phi(\widehat{Q})\subset \mathcal{L} S^{2k-1}$ of the embedding $\Phi$ is a $(6k-7)$ skeleton of $\mathcal{L} S^{2k-1}$, in the sense that the pair $(\mathcal{L} S^{2k-1},\Phi(\widehat{Q}))$ is $(6k-7)$-connected. \end{Lemma} \begin{proof} Consider the fibration of the free loop space $\pi\colon \mathcal{L} S^{2k-1}\to S^{2k-1}$, with fiber $\pi^{-1}(x)=\Omega (S^{2k-1},x)$, the space of loops based at $x$. Consider the round metric on $S^{2k-1}$. This induces an energy functional on $\Omega(S^{2k-1},x)$ which is a Bott-Morse function and which has critical manifolds as follows: \begin{enumerate} \item A point corresponding to the constant geodesic at $x$. \item For each $j\ge 1$, a $(2k-2)$-sphere $S^{2k-2}_j$ of geodesic loops that start at $x$ and traverse a great circle $j$ times. 
The index of $S^{2k-2}_j$ is $(2j-1)(2k-2)$, since a geodesic $\gamma$ that traverses a great circle $j$ times has $2k-2$ linearly independent normal Jacobi fields that vanish at $ \pm x$. Thus, there are $2j-1$ conjugate points along $\gamma$, each of multiplicity $2k-2$. \end{enumerate} Consider the gradient flow lines connecting $S^{2k-2}_1$ to the constant geodesic at $x$. Fix some $v\in S^{2k-2}_1$. The part of the stable manifold of $x$ emanating from $v$ can be identified with the disk inside $S^{2k-1}$ normal to the geodesic $v$ at its base-point $x$. By symmetry of the metric, the flow line in any fixed direction $q$ shrinks the great circle $v$ inside the 2-hemisphere determined by $v$ and $q$. The map $\Phi$ defined above gives a particular such shrinking, which is homotopic to the gradient flow. Considering such gradient flows for varying $x$, it follows that the unstable manifold of $S^{2k-2}_1$ is homotopic to $\Phi(\widehat{Q})$. Since for $j\geq 2$ the index of the Bott manifold $S_j^{2k-2}$ is at least $(2j-1)(2k-2)\ge 6k-6$, the lemma follows. \end{proof} Summing up, we get a $(6k-7)$-skeleton of $\mathcal{L}_{j}(S^{1}\times S^{2k-1})$ of the form $S^{1}_{j}\times \widehat{Q}$, where a point $(y,q)$ corresponds to the loop which first traverses the loop corresponding to $y$ inside the $S^{1}$-factor, while being constant at the base point in the $S^{2k-1}$-factor, and then follows the loop $\Phi[q]$ in the $S^{2k-1}$-factor, while being constant at the base point in the $S^{1}$-factor. \subsection{A linear gluing lemma} \label{Sec:LinearGluing} In this section we establish a linear gluing result that we use both to construct stable trivializations and in formulating the definition of coherent stable trivializations. The argument is standard; we present a version tailored for the applications in this paper. Essentially the same argument would establish the uniform bound of the right inverse operator in Lemma \ref{Lem:partialglu}. 
Let $D$ be a disk with a loop $\{ \Lambda(z) \subset \mathbb{C}^n\}_{z\in\partial D}$ of Lagrangian $n$-planes in $\mathbb{C}^{n}$ specified along its boundary. Let $\bar\partial_{D}$ denote a Cauchy-Riemann operator on $\mathbb{C}^{n}$-valued vector fields on $D$. We write $\mathcal{H}^{2}(D;\Lambda)$ for the Sobolev space of maps $v\colon D\to\mathbb{C}^{n}$ with two derivatives in $L^{2}$ and which satisfy the following: \begin{enumerate} \item $v(z)\in \Lambda(z)$, $z\in\partial D$ and \item $\bar\partial_{D} v|_{\partial D}=0$ (where the restriction to the boundary should be interpreted as the trace). \end{enumerate} We write $\dot \mathcal{H}^{1}(D;\mathrm{Hom}^{0,1}(TD,\mathbb{C}^{n}))$ for the Sobolev space of complex anti-linear maps $TD\to \mathbb{C}^{n}$ with one derivative in $L^{2}$ that vanish on the boundary, and write \[ \bar\partial_{\Lambda}\colon\mathcal{H}^{2}(D;\Lambda)\to \dot\mathcal{H}^{1}(D;\mathrm{Hom}^{0,1}(TD,\mathbb{C}^{n})) \] for the operator that maps $v$ to $\bar\partial_{D}v$. Then $\bar\partial_{\Lambda}$ is a Fredholm operator of index \[ \mathrm{ind}(\bar\partial_{\Lambda})=n+\mu(\Lambda), \] where $\mu(\Lambda)$ is the Maslov index of $\Lambda$. The kernel of $\bar\partial_{\Lambda}$ is spanned by smooth vector fields. Fix $\zeta\in\partial D$, puncture $D$ at $\zeta$, and identify a neighborhood of $\zeta$ with a half-infinite strip $[0,\infty)\times[0,1]$ as in Section \ref{sec:cohertriv}. Fix $\delta\in(0,\pi)$ and consider the Sobolev space $\mathcal{H}^{2}_{\delta}(D,\zeta;\Lambda)$ weighted by a function which equals $1$ except in the half strip around $\zeta$ where it is given by $e^{\delta|\tau|}$ for $\tau+it\in[0,\infty)\times[0,1]$. Let $\dot\mathcal{H}^{1}_{\delta}(D,\zeta;\mathrm{Hom}^{0,1}(TD,\mathbb{C}^{n}))$ denote the corresponding weighted Sobolev space of complex anti-linear maps that vanish along the boundary. 
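To illustrate the index formula $\mathrm{ind}(\bar\partial_{\Lambda})=n+\mu(\Lambda)$, we record the standard model computation on the closed disk for $n=1$ with the standard $\bar\partial$-operator; this is included for orientation only and is not used in the sequel.

```latex
% Model case: n = 1, D the unit disk, boundary condition
% Lambda_j(e^{i theta}) = e^{i j theta} R of Maslov index mu = 2j, j >= 0.
% A holomorphic v with v(e^{i theta}) in e^{i j theta} R satisfies that
% e^{-i j theta} v(e^{i theta}) = sum_m c_m e^{i(m-j) theta} is real, i.e.
%   v(z) = sum_{m=0}^{2j} c_m z^m  with  c_{2j-m} = \bar{c}_m .
\[
\dim_{\mathbb{R}}\ker(\bar\partial_{\Lambda_j})
 = \underbrace{1}_{c_j\in\mathbb{R}}
 + \underbrace{2j}_{c_0,\dots,c_{j-1}\in\mathbb{C}}
 = 1+2j
 = n+\mu(\Lambda_j),
\]
% and the cokernel vanishes, in agreement with the index formula.
```

For a split boundary condition $\Lambda=\Lambda_{j_1}\oplus\dots\oplus\Lambda_{j_n}$ the problem decomposes into $n$ such scalar problems and the indices add.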
Assume now that $\Lambda(z)=\mathbb{R}^{n}\subset\mathbb{C}^{n}$ in a neighborhood of $\zeta$ and that $\bar\partial_{D}$ agrees with the standard $\bar\partial$-operator in some neighborhood of $\zeta$. Fix a cut-off function $\alpha\colon [0,\infty)\times[0,1]\to[0,1]$ which equals $1$ for $\tau>\tau_0$, which equals $0$ for $\tau<\tau_0-1$, and which is real valued and holomorphic on the boundary. Assume furthermore that $\tau_0$ is sufficiently large that the boundary condition $\Lambda$ is constant and the operator $\bar\partial_{D}$ is standard for $\tau>\tau_0$. Let \[ V_{\rm sol}(\zeta)=\mathbb{R}\langle \alpha e_1,\dots,\alpha e_n\rangle, \] where $e_1,\dots,e_n$ is the standard basis in $\mathbb{R}^{n}$, so elements in $V_{\rm sol}(\zeta)$ are cut-off constant solutions. For $v_{\rm sol}\in V_{\rm sol}(\zeta)$, define $\bar\partial_{\Lambda} v_{\rm sol}=\bar\partial v_{\rm sol}\in\dot\mathcal{H}^{1}_{\delta}(D,\zeta;\mathrm{Hom}^{0,1}(TD,\mathbb{C}^{n}))$. Then the operator \[ \bar\partial_{\Lambda}\colon \mathcal{H}^{2}_{\delta}(D,\zeta;\Lambda)\oplus V_{\rm sol}(\zeta)\to \dot\mathcal{H}^{1}_{\delta}(D,\zeta;\mathrm{Hom}^{0,1}(TD,\mathbb{C}^{n})) \] is Fredholm of index $n+\mu(\Lambda)$, and there is a canonical isomorphism between the kernel of this operator and the kernel of the operator discussed above. To see this, note that Taylor coefficients at $\zeta$ of solutions of the problem on the closed disk correspond to Fourier coefficients of solutions on the punctured disk, where the added cut-off solutions on the Fourier side provide the constant terms in the Taylor expansion. We next consider stabilizations. Let $A_1,\dots,A_m\in \dot\mathcal{H}^{1}(D;\mathrm{Hom}^{0,1}(TD,\mathbb{C}^{n}))$. For convenience we take the $A_j$ to be smooth sections with support outside a neighborhood of the marked point $\zeta\in D$. 
We define \[ \psi\colon \mathbb{R}^{m}\to \dot\mathcal{H}^{1}_{\delta}(D;\mathrm{Hom}^{0,1}(TD,\mathbb{C}^{n})) \] by $\psi(e_j)=A_j$, $j=1,\dots,m$, where $e_1,\dots,e_m$ are basis vectors in $\mathbb{R}^{m}$. We now consider the stabilized operators \begin{align} \label{Eq:StabilizedOperators} \bar\partial_{\Lambda}\oplus\psi&\colon \mathcal{H}^{2}(D;\Lambda)\oplus \mathbb{R}^{m}\to \dot\mathcal{H}^{1}(D;\mathrm{Hom}^{0,1}(TD,\mathbb{C}^{n})), \\ \bar\partial_{\Lambda}\oplus\psi&\colon \mathcal{H}^{2}_{\delta}(D,\zeta;\Lambda)\oplus V_{\rm sol}(\zeta)\oplus\mathbb{R}^{m}\to \dot\mathcal{H}^{1}_{\delta}(D,\zeta;\mathrm{Hom}^{0,1}(TD,\mathbb{C}^{n})), \end{align} where the second operator is stabilized exactly as the first (recall that we took the images $\psi(e_j) = A_j$ to be smooth sections supported outside the strip neighborhood of the puncture). Again we have a canonical identification between the kernels of the two problems. Consider now two operators as above: $\bar\partial_{\Lambda_1}$ on $D_1$ and $\bar\partial_{\Lambda_2}$ on $D_2$. Assume that $D_1$ is punctured at $\zeta$ and $D_2$ at $1$, and consider the punctured versions of the operators as described above. Assume furthermore that $\Lambda_1(\zeta)=\Lambda_2(1)$. Then we can pre-glue the domains, see Sections \ref{sec:cohertriv} and \ref{sec:pregluemaps}, to obtain a domain $D_\rho$ with an induced weight function. Furthermore, the boundary conditions $\Lambda_1$ and $\Lambda_2$ give a boundary condition $\Lambda_1\#\Lambda_2$ on $\partial D_{\rho}$ and the operators glue to an operator $\bar\partial_{\rho}$ on $\mathbb{C}^{n}$-valued vector fields over $D_\rho$.
Define $\widehat{\mathcal{H}}_{\delta}^{2}(D_{\rho};\mathbb{C}^{n})$ as the weighted Sobolev space of vector fields with two derivatives in $L^{2}$ which satisfy the usual boundary conditions, together with the extra condition that $v(0)=0$, where $0$ is the distinguished point in the strip region $[-\rho,\rho]\times[0,1]\subset D_{\rho}$, see Section \ref{sec:cohertriv}. Let $V_{\rm sol}(0)$ denote the space of cut-off constant solutions in this strip region. Consider the operator \[ \bar\partial_{\rho}\colon \widehat{\mathcal{H}}_{\delta}^{2}(D_{\rho};\mathbb{C}^{n})\oplus V_{\rm sol}(0) \to \dot\mathcal{H}_{\delta}^{1}(D_{\rho};\mathrm{Hom}^{0,1}(TD_{\rho},\mathbb{C}^{n})). \] Assume now that we have stabilizations $(\psi_1,\mathbb{R}^{m_1})$ and $(\psi_2,\mathbb{R}^{m_2})$ of $\bar\partial_{\Lambda_i}$ that make the operators of \eqref{Eq:StabilizedOperators} surjective. They then induce a stabilization $(\psi_1\oplus\psi_2,\mathbb{R}^{m_1+m_2})$ of $\bar\partial_{\rho}$. Let $\psi_0$ denote the auxiliary stabilization that adds another copy of $V_{\rm sol}(0)$ to the domain, this copy being orthogonal to the first with respect to the $L^{2}$-pairing, see Remark \ref{Rem:L2pairing}. Note also that there are evaluation maps $\operatorname{ev}_{\zeta}\colon\ker(\bar\partial_{\Lambda_1}\oplus\psi_1)\to \mathbb{R}^{n}$ and $\operatorname{ev}_1\colon\ker(\bar\partial_{\Lambda_2}\oplus\psi_2)\to\mathbb{R}^{n}$. We say that the kernels are transverse at the gluing point if the images of these maps together span $\mathbb{R}^{n}$. \begin{Lemma}\label{Lem:lingluetriv} For all sufficiently large $\rho$, $L^{2}$-projection gives an isomorphism \[ \ker(\bar\partial_{\rho}\oplus\psi_{1}\oplus\psi_{2}\oplus\psi_0)= \ker(\bar\partial_{\Lambda_1}\oplus\psi_{1})\oplus \ker(\bar\partial_{\Lambda_2}\oplus\psi_{2}). \] Furthermore, if the kernels are transverse at the gluing point then $L^{2}$-projection gives an isomorphism \[ \ker(\bar\partial_{\rho}\oplus\psi_{1}\oplus\psi_{2})= \ker(\bar\partial_{\Lambda_1}\oplus\psi_{1})\times_{\mathbb{R}^{n}} \ker(\bar\partial_{\Lambda_2}\oplus\psi_{2}), \] where the fibered product is taken with respect to the evaluation maps $\operatorname{ev}_\zeta$ and $\operatorname{ev}_1$ to $\mathbb{R}^{n}$. \end{Lemma} \begin{proof} We start with the first statement. Write $L=\bar\partial_{\rho}\oplus\psi_{1}\oplus\psi_{2}\oplus\psi_{0}$ and observe that the index of \[ L\ \colon \ \widehat{\mathcal{H}}_{\delta}^{2}(D_{\rho};\mathbb{C}^{n})\oplus \mathbb{R}^{m_1 + m_2} \oplus V_{\rm sol}^{1}(0)\oplus V^{2}_{\rm sol}(0) \ \longrightarrow \ \dot\mathcal{H}_{\delta}^{1}(D_{\rho};\mathrm{Hom}^{0,1}(TD_{\rho},\mathbb{C}^{n})), \] where the superscripts on the $V_{\rm sol}(0)$ are just to tell the two copies apart, equals \begin{align*} &n+\mu(\Lambda_1\#\Lambda_2)+m_1+m_2+n \ = \ (n+\mu(\Lambda_{1})+m_1)+(n+\mu(\Lambda_{2})+m_2)\\ &=\dim(\ker(\bar\partial_{\Lambda_{1}}\oplus\psi_1))+\dim(\ker(\bar\partial_{\Lambda_{2}}\oplus\psi_2)). \end{align*} Consider an element $v_{j}$ in the kernel of $\bar\partial_{\Lambda_j}\oplus\psi_j$. It can be written uniquely as \[ v_j=v_{j;\delta} + v_{j;{\rm sol}} + w_j, \] where $v_{j;\delta}\in \mathcal{H}^{2}_{\delta}(D,\zeta;\Lambda_j)$, $v_{j;{\rm sol}}\in V_{\rm sol}(\zeta)$, and $w_j\in\mathbb{R}^{m_j}$. Let $\beta_j$ be a smooth cut-off function that equals $1$ on $H_{j;\rho+1}$, equals $0$ outside $H_{j;\rho+2}$, and is real valued and holomorphic on the boundary. Define \[ \tilde v_j=\beta_j v_{j;\delta} + v^{j}_{\rm sol} + w_j, \] where $v^{j}_{\rm sol}$ is the cut-off constant solution in $V_{\rm sol}^{j}(0)$ with the same value as $v_{j;{\rm sol}}$.
Pick bases $v_1^{1},v_{1}^{2},\dots,v_{1}^{M_{1}}$ and $v_2^{1},v_{2}^{2},\dots,v_{2}^{M_{2}}$ in $\ker(\bar\partial_{\Lambda_{1}}\oplus\psi_{1})$ and $\ker(\bar\partial_{\Lambda_{2}}\oplus\psi_{2})$, respectively. We claim that the operator $L$ is invertible on the $L^{2}$-complement of the subspace spanned by \[ \tilde v^{1}_{1} \ , \ \dots \ ,\ \tilde v^{M_1}_{1} \ , \ \tilde v^{1}_{2} \ , \ \dots \ , \ \tilde v_{2}^{M_2}. \] Indeed, suppose not. Then there exists a sequence $u_j$ in the $L^{2}$-complement such that \[ \|u_j\|=1,\quad \|Lu_{j}\|\to 0\text{ as }j\to\infty. \] Write \[ u_j= u_{j;\delta} + u_{j; {\rm sol}}^{1} + u_{j;{\rm sol}}^{2}, \] where $u_{j;{\rm sol}}^{k}\in V_{\rm sol}^{k}(0)$, $k=1,2$. Consider now the sequence \[ u_{1;j}= \beta_{1}u_{j;\delta} + u_{j;{\rm sol}}^{1}. \] These functions are orthogonal to the kernel of $\bar\partial_{\Lambda_1}\oplus\psi_1$ and $\|L u_{1;j}\|\to 0$. Thus $u_{1;j}\to 0$. Repeating this argument for the other half of the disk, we find that $u_{2;j}\to 0$; but then $u_j\to 0$, which contradicts $\|u_j\|=1$. We conclude the desired invertibility, and that $L^{2}$-projection gives an isomorphism, as claimed. To prove the last statement we argue similarly, inverting instead on the fibered product of the kernels. The only difference is in the last step: there is now only one copy of $V_{\rm sol}(0)$, and writing the component of $u_j$ along this copy as $u_{j;{\rm sol}}$, our transversality assumption implies that either $\beta_1u_{j;\delta}+ u_{j;{\rm sol}}$ is orthogonal to the (approximate) kernel of $\bar\partial_{\Lambda_1}\oplus\psi_1$ or $\beta_2u_{j;\delta}+ u_{j;{\rm sol}}$ is orthogonal to that of $\bar\partial_{\Lambda_2}\oplus\psi_2$. This means that we can extract a subsequence for which one of these alternatives holds.
Noting that $\beta_ku_{j;\delta}$ is orthogonal to the kernel of $\bar\partial_{\Lambda_{k}}\oplus\psi_k$, $k=1,2$, we then get a contradiction using this subsequence, exactly as in the first argument. \end{proof} \subsection{Stable trivializations} Write $\mathbf{I}_{j}$ for the index bundle over $\mathcal{L}_{j}W'$ whose operator over the loop $\gamma$ is \begin{equation}\label{Eq:linoploop} D(\bar\partial_{j\beta})\colon T_{u}\mathcal{X}\to \mathcal{Y}, \end{equation} where $u\colon (D,\partial D)\to(\mathbb{C}^{n},L)$ is any map with $u|_{\partial D}=\gamma$. Note that \eqref{Eq:linoploop} is independent of the particular choice of $u$. For notational convenience, we write \[ \bar\partial_{\gamma}\colon \SS_{\gamma}\to \mathcal{Y} \] for the operator in \eqref{Eq:linoploop}. We remark that in the subsequent computations, we work with the \emph{standard} $\bar\partial$-operator (for the almost complex structure $J_0$, with no Hamiltonian perturbation), which is sufficient since we are essentially working up to homotopy. \begin{Lemma}\label{Lem:indextrivial} $\mathbf{I}_{j}$ is stably trivial over the $(6k-7)$-skeleton $S^{1}_{j}\times\Phi(\widehat{Q})$ from Lemma \ref{Lem:skeleton}. \end{Lemma} \begin{proof} Consider the model of $W'$ from Example \ref{Ex:LefschetzWhitney}: \begin{equation} \label{Eq:WhitneyAgain} W'=\bigcup_{e^{i\theta}\in S^{1}} e^{\frac{i\theta}{2}}\cdot S^{2k-1} \;\;\subset \;\; \mathbb{C}^{2k}. \end{equation} Recall that we think of the loops $\gamma$ in the skeleton $S^{1}_{j}\times\Phi(\widehat{Q})$ as loops which first go around the $S^{1}$-factor and then along the $S^{2k-1}$-factor. We take the corresponding operator $\bar\partial_{\gamma}$ to act on vector fields on a pre-glued disk $D_{\rho}=H_{1}\#_{\rho} H_{2}$ for some sufficiently large $\rho>0$, where the sub-loops of $\gamma$ in the $S^{1}$ and $S^{2k-1}$ factors are parameterized by $\partial H_1$ and $\partial H_{2}$, respectively.
It follows from Lemma \ref{Lem:lingluetriv} that it is sufficient to find stable trivializations over loops which are constant in either factor separately. For the $S^{1}$-factor this is obvious since there is only one loop. We thus consider trivializations over loops in the $S^{2k-1}$-factor that are constant in the $S^{1}$-factor. Take the constant value to be $1\in S^{1}$ and write $(x,v,q)$ for points in $\widehat{Q}=MTUS^{2k-1}$, where $v\in U_{x}S^{2k-1}$ and $q\in Q_{v}$, cf.~the notation of Section \ref{sec:skeleton}. Consider first $\bar\partial_{\Phi(x,v,0)}$ with Lagrangian boundary condition corresponding to a simple closed geodesic starting at $x$ with tangent vector $v$. Recalling that we use the complex structure $J_0$, the operator $\bar\partial_{\Phi(x,v,0)}$ splits into one-dimensional problems as follows. \begin{enumerate} \item There are $(2k-2)$ one-dimensional problems with constant real boundary conditions, corresponding to a set of basis vectors in $x^{\perp}\cap v^{\perp}$. \item In the two-dimensional space spanned by $x$ and $v$, the Lagrangian boundary condition along $(\cos t)x+(\sin t)v$, $0\le t\le 2\pi$, is spanned by the vectors \[ i\bigl((\cos t)x+(\sin t)v\bigr)\quad\text{ and }\quad (-\sin t)x +(\cos t)v. \] In the complex basis $ix + v$, $ix - v$ this boundary condition splits into the two one-dimensional boundary conditions \[ e^{it}(ix +v) \quad\text{ and }\quad e^{-it}(ix-v), \] where the former has Maslov index $2$ (hence index $3$), and the latter Maslov index $-2$ (hence index $-1$). \end{enumerate} The argument principle implies that one-dimensional problems have either only kernel or only cokernel.
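The one-dimensional kernel/cokernel counts above can also be checked numerically. The sketch below (our own illustration, not part of the proof; the function name `kernel_dim` is ours) truncates the Taylor expansion of a holomorphic function on the disk and imposes the rotating boundary condition $e^{i\mu t/2}\mathbb{R}$ along $\partial D$ as a real-linear system on the coefficients:

```python
import numpy as np

def kernel_dim(mu, N=8, samples=400):
    """Dimension of {u holomorphic on D, deg <= N : u(e^{it}) in e^{i*mu*t/2} R}.

    mu must be even so that the boundary condition is a closed loop of lines.
    The condition Im(e^{-i*mu*t/2} u(e^{it})) = 0 is sampled at many angles t
    and imposed as a real-linear system on the coefficients a_k = x_k + i y_k.
    """
    assert mu % 2 == 0
    ts = np.linspace(0.0, 2.0 * np.pi, samples, endpoint=False)
    M = np.zeros((samples, 2 * (N + 1)))
    for i, t in enumerate(ts):
        for k in range(N + 1):
            w = np.exp(1j * (k - mu / 2) * t)
            # Im((x_k + i*y_k) * w) = x_k * Im(w) + y_k * Re(w)
            M[i, 2 * k] = w.imag
            M[i, 2 * k + 1] = w.real
    return 2 * (N + 1) - np.linalg.matrix_rank(M, tol=1e-8)
```

For $\mu=2$ this gives a $3$-dimensional kernel (index $3$, no cokernel), for $\mu=-2$ a trivial kernel (index $-1$, cokernel only), and for $\mu=0$ the constants, matching $\mathrm{ind}=1+\mu$ in each one-dimensional problem.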
We thus find that \[ \ker(\bar\partial_{\Phi(x,v,0)})=\bigl\langle \nu_1,\dots,\nu_{2k-2}, a_1,a_2,a_3 \bigr\rangle, \] where $\langle\cdot\rangle$ denotes the linear span, the $\nu_j$ are linearly independent constant solutions in directions normal to $x$ and $v$, and $a_1,a_2,a_3$ are three linearly independent vector fields in the $(ix+v)$-line which correspond to linearized automorphisms of the unit disk in this complex line. Similarly, we deduce that \[ \mathrm{coker}(\bar\partial_{\Phi(x,v,0)})=\bigl\langle b \bigr\rangle, \] where $b$ is a vector field in the $(ix-v)$-line which $L^{2}$-pairs non-trivially with the constant solution of the adjoint problem. We next consider the family of operators $\bar\partial_{\Phi(x,v,q)}$ for $q$ in the $(2k-2)$-disk $Q_v$, which we think of as the normal disk to $\Phi(x,v,0)$ at its start point, cf.~the proof of Lemma \ref{Lem:skeleton}. For $q$ in an $\epsilon$-disk in $Q_{v}$ we keep the boundary condition constant, but change the disk to a pre-glued disk $D_{\rho}=H_1\#_\rho H_2$ where the first half of the boundary condition, $0\le t\le \pi$, is parameterized by $\partial H_1$ and the second half by $\partial H_2$. The argument principle shows that the kernel and cokernel are unchanged by such deformations.
However, by (the second statement in) Lemma \ref{Lem:lingluetriv}, for $\rho>0$ large enough we get an approximate kernel and cokernel, isomorphic to the actual kernel and cokernel by $L^{2}$-projection, spanned by the following sections: \begin{enumerate} \item constant kernel functions in directions perpendicular to $x$ and $v$, \item a cokernel function which pairs non-trivially with the constant solution of the adjoint problem in the $(ix-v)$-line, \item three approximate kernel functions in the $(ix+v)$-line: the solution over $H_1$ that vanishes at the puncture, the solution over $H_2$ that vanishes at the puncture, and the sum of the solutions in $H_1$ and $H_2$ that both equal $1$ at the puncture, cf.~Lemma \ref{Lem:lingluetriv}. \end{enumerate} We now fix $q$ and start rotating the boundary condition of the second half-plane in direction $q$, as in the definition of $\Phi$, see Section \ref{sec:skeleton}. For rotation angle in $(0,\pi)$ we claim that the kernel is $2k$-dimensional and the cokernel trivial. To see this we invert the operator on the complement of the space spanned by the following: \begin{enumerate} \item the $(2k-3)$-dimensional space of constant solutions (orthogonal to $x,v,q$); \item the cut-off solutions of the pieces that satisfy the incidence condition at the gluing point. \end{enumerate} The usual linearized gluing argument, Lemma \ref{Lem:lingluetriv}, shows that this is possible: as in that lemma, one shows that for a sequence of functions perpendicular to the space spanned by the elements above, the functions must tend to zero in each of $H_1$, $H_2$, and in the middle strip. (The difference with the degenerate case considered previously is that now the estimate for the problem over $H_1$ implies that the $v$-component of a function in the $L^2$-orthogonal must vanish, whilst that over $H_2$ implies that the $q$-component must vanish.)
When the rotation angle equals $\pi$ we still get a $2k$-dimensional kernel, now spanned by the $(2k-2)$-dimensional space of constant sections perpendicular to $\Phi(x,v,0)$ and the two solutions on $H_1$ and $H_2$ that vanish at the point where the disks are joined. We finally shorten the curve until it is constant, and the argument principle again implies that the kernel does not change. In summary, after stabilizing with one auxiliary direction we may identify the vectors in the kernel with $ix+v$ and $ix-v$ (by evaluation at $1$). Hence the kernel of the once stabilized problem gives a vector bundle over the unstable manifold of $S_1^{2k-2}$, for which a stable trivialization over the constant loops (corresponding to a trivialization of $TS^{2k-1}$ over the $*$-section) extends, as described, over the fibers $Q_v$. The lemma follows. \end{proof} We next consider \emph{stable trivializations} of these stably trivial index bundles. Let $A\subset \mathcal{L}_{j} W'$ be a compact subset. Then there exists $N>0$ and a map $\psi\colon A\to\mathrm{Hom}(\mathbb{R}^{N},\mathcal{Y})$, $\gamma\mapsto \psi_{\gamma}$, such that for any $\gamma\in A$, \[ \bar\partial_{\gamma}\oplus\psi_\gamma\colon \SS_{\gamma}\oplus \mathbb{R}^{N}\to\mathcal{Y} \] is surjective. Lemma \ref{Lem:indextrivial} implies that the bundle $\ker(\bar\partial\oplus\mathbb{R}^{N})$ over $A$ with fiber over $\gamma\in A$ given by $\ker(\bar\partial_{\gamma}\oplus \psi_\gamma)$ is trivial over any CW-complex $B \subset A$ of dimension at most $6k-7$. A \emph{stable trivialization} of $\mathbf{I}_j$ over $B$ is the stable homotopy class of a trivialization of $\ker(\bar\partial \oplus \mathbb{R}^N)|_{B}$, where we stabilize by adding an arbitrary map $\psi'\colon B\to\mathrm{Hom}(\mathbb{R}^{M},\mathcal{Y})$.
To clarify, observe that if $\bar\partial_{\gamma}\oplus\psi_\gamma$ is surjective then for each $e\in\mathbb{R}^{M}$ there is $v\in \SS_{\gamma}\oplus\mathbb{R}^{N}$ such that $(\bar\partial_{\gamma}\oplus\psi_\gamma)(v)=-\psi'_{\gamma}(e)$, and such $v$ is unique up to addition of a vector in $\ker(\bar\partial_{\gamma}\oplus\psi_\gamma)$. It follows that a trivialization $Z$ of $\ker(\bar\partial\oplus\mathbb{R}^{N})$ and the standard basis in $\mathbb{R}^{M}$ together induce a trivialization of $\ker(\bar\partial\oplus\mathbb{R}^{N}\oplus\mathbb{R}^{M})$. \begin{Remark} Consider trivializations $Z$ of $\ker(\bar\partial\oplus\mathbb{R}^{N})$ and $Z'$ of $\ker(\bar\partial\oplus \mathbb{R}^{M})$, where both stabilized operators are everywhere surjective. To compare the trivializations on the kernel bundles, we consider the bundle $\ker(\bar\partial\oplus\mathbb{R}^{N}\oplus\mathbb{R}^{M})$, and compare the trivializations $Z\oplus \mathbb{R}^{M}$ and $Z'\oplus\mathbb{R}^{N}$. \end{Remark} \begin{Remark} In the applications below we will only be concerned with compact subsets of the various mapping spaces involved. For simplicity, we fix throughout a sufficiently large compact subset of the loop space which contains all the spaces (and homotopies) relevant to our problem. \end{Remark} \begin{Remark}\label{Rem:stabletrivhmtpy} If $B\subset\mathcal{L}_{j}W'$, if $Z$ is a stable trivialization of $\mathbf{I}_j$ over $B$, and if $g\colon C\to B$ is any map, then $Z$ induces a stable trivialization $g^{\ast}Z$ of $g^{\ast}\mathbf{I}_j$, $g^{\ast}Z(c)=Z(g(c))$. In particular, if $\Sigma\subset B$, and if $g\colon C\to B$ is a map which is homotopic to a map into $\Sigma$, then the stable trivialization $g^{\ast}Z$ is determined by the restriction $Z|_{\Sigma}$. Indeed, fix a homotopy $g_t\colon C\to B$, $0\le t\le 1$, with $g_0=g$ and $g_{1}(C)\subset \Sigma$; then the homotopy of trivializations $Z(g_t(c))$ connects $g_0^{\ast}Z$ to $g_{1}^{\ast}Z$, and the latter is determined by $Z|_\Sigma$.
\end{Remark} \subsection{Pre-gluing and coherent stable trivializations} We next focus on the loop space components relevant to our main problem: $\mathcal{L}_{-1} W'$, $\mathcal{L}_{1} W'$, and $\mathcal{L}_{0} W'$. Write $\mathcal{L}_{j}^{\ast}W'=\mathcal{L}_{j}W'\times \partial D$ for the space of free loops with one marked point and let $\mathbf{I}_j^{\ast}$ denote the pull-back of the index bundle $\mathbf{I}_j$ under the natural projection map that forgets the marked point. We recall the notion of coherent trivializations from Section \ref{sec:cohertriv}. Consider a compact CW-complex $N$ with maps $p\colon N\to \mathcal{L}_{-1}^{\ast} W'$ and $q\colon N\to\mathcal{L}_{1} W'$ such that $\operatorname{ev}\circ \,p=\operatorname{ev}_1\circ\, q$. Assume furthermore that the map $p$ factors as follows: \[ \begin{CD} N @>{p'}>> A\times S^{1} @>{a\times\mathrm{id}}>> \mathcal{L}_{-1}^{\ast} W' = \mathcal{L}_{-1} W'\times S^{1} \end{CD}, \] where $A$ is a compact CW-complex. Write $\operatorname{PG}\colon N\to\mathcal{L}_{0}W'$ for the composition of $p\times q$ with the pre-gluing map, and consider the pull-back bundle $\operatorname{PG}^{\ast}\mathbf{I}_0$ over $N$. By Lemma \ref{Lem:lingluetriv} we find that there are two stable trivializations of this bundle: one given by $p^{\ast}Z_{-1}^{\ast}\oplus q^{\ast}Z_{1}$ and one given by $\operatorname{PG}^{\ast}(Z_{0}\oplus Z_{TW'})$, where $Z_{TW'}$ is a fixed trivialization of $TW'$. The triple of trivializations $(Z_{-1},Z_{1},Z_{0})$ was called $(d',d)$-coherent if the two stable trivializations $p^{\ast}Z_{-1}^{\ast}\oplus q^{\ast}Z_{1}$ and $\operatorname{PG}^{\ast}(Z_{0}\oplus Z_{TW'})$ are homotopic for all $N$ and $A$ as above with $\dim(A)\le d'$ and $\dim(N)\le d$. In the following lemma, trivializations of $\mathbf{I}_j$ refer to trivializations over $S_{j}^{1}\times\Phi(\widehat{Q})$, see Lemma \ref{Lem:indextrivial}.
In the proof, we use notation as in the definition of stable trivializations in Section \ref{sec:cohertriv}. \begin{Lemma} For any stable trivialization $Z_{-1}^{\ast}$ of $\mathbf{I}^{\ast}_{-1}$, there are stable trivializations $Z_{1}$ of $\mathbf{I}_{1}$ and $Z_{0}$ of $\mathbf{I}_0$ such that $(Z_{-1}^{\ast},Z_{1},Z_{0})$ is $(1,d)$-coherent for any $d \leq 6k-7$. \end{Lemma} \begin{proof} Remark \ref{Rem:stabletrivhmtpy} implies that the trivialization $p^{\ast}Z_{-1}^{\ast}$ is determined by the restriction of the trivialization $Z_{-1}^{\ast}$ to the preimage of the $1$-skeleton of $\mathcal{L}_{-1}W'$. We take the $1$-skeleton to be $S^{1}\times \{x\}$, where the $S^{1}$-factor determines the starting point of the geodesics, which are constant curves at $x$ in the $S^{2k-1}$-factor. More explicitly, fix a homotopy $a_{t}$ of the map $a$ such that $a_{1}$ maps into the $1$-skeleton. Then $\operatorname{ev}\circ(a_t\times\mathrm{id})$ gives a homotopy from $\operatorname{ev}_{1}\circ \,q$ to $\operatorname{ev}_{1}\circ \,q_1$, where $q_1$ is the map obtained by conjugating the loops in the image of $q$ with the trace of the homotopy $\operatorname{ev}\circ(a_t\times\mathrm{id})$. Therefore $q$ is homotopic to a map with second component mapping into the based loop space $\Omega(S^{2k-1},x)$. We can perform a further homotopy of $q_1$, without changing the start-points of loops in the image, so that the $S^{1}$-component of any one of its loops is also geodesic, and so that the loops in the image have the standard product form discussed previously (first following the $S^{1}$-component, then the $S^{2k-1}$-component).
It follows that, after a homotopy of $p$ and $q$ and the induced homotopy of $\operatorname{PG}\circ(p\times q)$, the map $\operatorname{PG}\colon N\to\mathcal{L}_{0}W'$ factors through the fibered product $\Xi$ of (the preimages of) the $1$-skeleton $S^{1}\times\{x\}\times S^{1}\subset \mathcal{L}_{-1}^{\ast} W'$, and the $d$-skeleton of $S^1 \times \Omega(S^{2k-1},x)\, \subset \mathcal{L}_1(W')$. We now use a skeleton of $\mathcal{L}_{0}W'$ which differs from the one constructed previously as follows: every loop in the second factor is replaced by a negatively oriented geodesic with a positively oriented geodesic inserted at $-1$ (the antipodal point of the base point). It is then easy to see that the map $N \rightarrow \mathcal{L}_0W'$ is homotopic to a map in which all $S^{1}$-components have the form of the loops in $\Xi$. Conveniently, the fibered product defining $\Xi$ is actually homeomorphic to a cell complex. More explicitly, the fibered product of the lowest-dimensional cells, meaning those of dimension $\leq 6k-7$, is homeomorphic to \begin{equation} \label{Eq:FibreProductOfCells} S^{1}\times S^{1\ast}\times S_1^{2k-2}, \end{equation} where the first coordinate on the torus $S^{1}\times S^{1\ast}$ is the starting point of the loop, the second is the point along the parameterizing $S^{1}$, and the final factor corresponds to the Bott manifold $S^{2k-2}_1$ of Lemma \ref{Lem:skeleton} (recall that the higher cells in the based loop space have index $\geq 6k-6$). From the construction of the skeleton, and the fact that we are dealing with the component $\mathcal{L}_0 W'$ containing the constant loops, the starting point loop maps to a non-contractible loop in $\mathcal{L}_{0}W'$, whilst the second factor $S^{1\ast}$ maps to a contractible loop. 
The condition that ``the trivialization of the pull-back agrees with the pull-back of the trivialization'' is that the loop which is mapped to a trivial loop under $\operatorname{PG}$ gets the trivial (null-cobordant) framing. The framing of this loop is given by \[ Z_{-1}^{\ast}|_{S^{1\ast}}\oplus Z_{1}|_{S^{1}}, \] so this condition is fulfilled provided we take $Z_1$ to have the same framing class as $Z_{-1}^{\ast}$. The framing on the non-trivial base point loop in $\mathcal{L}_{0}W'$ should then also be chosen to be \[ Z_{-1}^{\ast}|_{S^{1\ast}}\oplus Z_{1}|_{S^{1}}. \] This framing can now be extended trivially over the third factor of \eqref{Eq:FibreProductOfCells}. The lemma follows. \end{proof} \bibliographystyle{alpha}
# Graphing and differentiation

**Question.** I am having trouble making sense of how to figure out what a graph looks like just by knowing the critical numbers and the intervals on which the function is increasing and decreasing. For example, I have $(-\infty, -1), (0,1)$ as negative and $(-1,0), (1,+\infty)$ as positive.

I know that this can only change at critical numbers, so that means if I have increasing, then a critical number, then decreasing, I have a graph that is going up and then down and the critical number is a maximum. If I have decreasing, then a critical number, then increasing, I have a minimum.

The problem I am having is determining how to view these. If I start from the left of the function I get concave up (minimums) with decrease, critical number, increase. If I do the same with increase, critical number, decrease by starting from the right side, I get concave down (max). How do I know which is correct when working with just this information? It seems to depend entirely on which side I start on.

*Comment (Emmad Kareem):* I think that your question is not very clear. What is the function you are having trouble with?

*Comment (asker):* Any function really, it does not matter. I used $f(x)= x^4 -2x^2+3$. I then found the derivative and then the critical numbers $-1, 0, 1$. I then found the intervals where it is increasing and decreasing, as noted in the original post. This is where I am having trouble telling what is a local minimum and maximum.

**Answer 1.** The sign of your function seems to change at $-1$, $0$ and $1$ rather than the direction. You have not specified much more about your function. If it is continuous then it will have at least one local maximum in $(-1,0)$ and the direction will change there, and similarly at least one local minimum in $(0,1)$. This will not let you conclude much about where it will be concave or convex (apart from being concave up close to a local minimum, etc.). [The answer included a figure showing two functions, red and blue, each meeting these specifications.] If I have misunderstood and it is the direction which changes, then you still cannot conclude anything about concavity and convexity, though there will be local minima at $-1$ and $1$ (at least one of which will be a global minimum) and a local maximum at $0$.

**Answer 2.** To determine your local min/max, use the second-derivative test. The second derivative is ${f}''(x)=12 x ^{2} - 4$, and its sign at the critical points gives:

- at $-1$: positive, so a local minimum;
- at $0$: negative, so a local maximum;
- at $1$: positive, so a local minimum.
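The second-derivative classification in the thread can be checked directly. The plain-Python sketch below (names are ours) verifies that the critical points of $f(x)=x^4-2x^2+3$ are $-1,0,1$ and classifies them; in particular $f''(1)=8>0$, so $x=1$ is a local minimum:

```python
def f(x):
    return x**4 - 2*x**2 + 3

def fp(x):
    # f'(x) = 4x^3 - 4x = 4x(x - 1)(x + 1)
    return 4*x**3 - 4*x

def fpp(x):
    # f''(x) = 12x^2 - 4
    return 12*x**2 - 4

critical = [-1.0, 0.0, 1.0]  # roots of f'

def classify(x):
    s = fpp(x)
    if s > 0:
        return "local min"
    if s < 0:
        return "local max"
    return "inconclusive"  # second-derivative test fails when f'' = 0

labels = {x: classify(x) for x in critical}
```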
Better Regulation in Europe: Belgium
The EU Better Regulation project is a partnership between the OECD and the European Commission. It draws on the initiatives for Better Regulation promoted by both organisations over the last few years.

OECD Strategy on Development - C-MIN(2012)6 (English, 897kb)
The main goal of the Strategy is to strengthen OECD's contributions to "higher and more inclusive growth in the widest array of countries", making full use of the OECD evidence-based approaches to improve policy making and economic reform for developing and developed countries.

OECD Reviews of Evaluation and Assessment in Education: School Evaluation in the Flemish Community of Belgium 2011 (7 December 2011)
This report provides, for the Flemish community of Belgium, an independent analysis of major issues facing the educational evaluation and assessment framework, current policy initiatives, and possible future approaches.

Fostering Innovation for Green Growth
This book draws on work on green innovation across several parts of the OECD to show how it can drive sustainable growth and job creation. It explores policy actions for the deployment of new technologies and innovations as they emerge.
Related: OECD work on green growth; OECD Sustainable Manufacturing Toolkit; Green Growth & Eco-Innovation; Eco-Innovation in Industry: Enabling Green Growth; The OECD Innovation Strategy.

Government at a Glance 2011: Information by country
These country notes contain over 50 indicators which compare the political and institutional frameworks of national governments as well as revenues and expenditures, employment, and compensation. They include a description of government policies on integrity, e-government and open government.

OECD Statistics on International Trade in Services 2010, Volume II, Detailed Tables by Partner Country
This OECD publication provides statistics on international trade in services by partner country for 28 OECD countries plus the European Union (EU27), the Euro area, and Hong Kong, China, as well as definitions and methodological notes. The data concern trade between residents and non-residents of countries and are reported within the framework of the Manual on Statistics of International Trade in Services. This book includes summary tables of trade patterns listing the main trading partners for each country and by broad service category. Series are shown in US dollars and cover the period 2004-2008.

Energy Policies of IEA Countries: Belgium 2009
The International Energy Agency's comprehensive 2010 review of Belgium's energy policies and programmes. It finds that Belgium is making commendable progress towards a clean and sustainable energy future. Energy intensity has recently declined, as have greenhouse gas emissions. Measures have been implemented to promote energy efficiency. Public funding for energy R&D has risen substantially. Energy security measures have been reinforced for different fuels, and an integrated emergency response policy is under development. Market reforms are advancing in both the electricity and gas sectors. Belgian energy policies are playing an increasingly important role in ensuring energy security not only in the country but also in northwest Europe. The country's strategic location makes it an important transit hub for natural gas, oil and electricity. Nevertheless, challenges remain. A comprehensive, national strategy is needed to stimulate investment and adequately address energy security and climate change concerns. The Belgian position on the phase-out of nuclear power should be reconsidered. The government should also try, through increased market transparency and streamlined planning procedures, to ensure that investment in new generation capacity is an attractive option for new players as well as incumbents. The overlapping responsibilities of the federal and regional governments reduce the cost-effectiveness of policies. This review analyses the energy challenges facing Belgium and provides critiques and recommendations for further policy improvements. It is intended to serve as a guide as the country continues on its way towards a more sustainable energy future.

Learning for Jobs, OECD Reviews of Vocational Education and Training: Belgium (Flanders)
This review of vocational education and training (VET) in Belgium (Flanders) is part of "Learning for Jobs", the OECD policy study of VET, a programme of analytical work and individual country reviews designed to help countries make their VET systems more responsive to

Belgium (2010) DAC Peer Review - Main Findings and Recommendations
Belgium spent USD 2.6 billion on official development assistance (ODA) in 2009, which amounted to 0.55% of its gross national income (GNI).
Related: List of Peer Reviews of DAC Members; Belgium (2005), DAC Peer Review: Main Findings and Recommendations; Effective Aid Management: Twelve Lessons from DAC Peer Reviews.

Economic Policy Reforms: Going for Growth 2010 - Belgium Country Note
This note is taken from Chapter 3 of Economic Policy Reforms: Going for Growth 2010.
Related: Economic Policy Reforms: Going for Growth Homepage; Economic Policy Reforms: Going for Growth 2009.
{ "redpajama_set_name": "RedPajamaCommonCrawl" }
2,487
\section{Introduction} \label{sec:introduction} \IEEEPARstart{I}{n} this paper we study possible approaches to the design of electrically thin layers (sheets) which would behave as {\em perfect absorbers} for normally-incident electromagnetic plane waves. We say that absorption in a layer at some frequency is ``perfect'' or ``total'' if all incident power is dissipated in the layer. This implies that both the reflection and transmission coefficients are equal to zero. In this study we consider only normal incidence; thus, this term should not be confused with the {\em perfectly matched layer} (PML), which implies a zero reflection coefficient at any incidence angle and for any polarization of the incident wave. The theory and design of absorbers for electromagnetic radiation has a long history, and there exists a large variety of designs, especially for microwave frequencies (see, e.g., \cite{RCS, radar_absorbers2, radar_absorbers}). However, in most of these designs the absorbing structure is backed by a reflecting wall (usually a metal surface), because most often the goal is to reduce microwave reflections from metal structures. Recently, there has been considerable interest in thin absorbing layers for situations where there is no reflector behind, so that the electromagnetic properties of the object which one wants to ``hide'' can be arbitrary. Naturally, a thin reflector can be incorporated in the absorber structure, but often it is desirable to allow off-band electromagnetic waves to pass through the structure, or avoiding conductors altogether may be an application requirement. Also, for infrared and visible-light applications the use of perfect reflectors as parts of absorbing layers is not practical unless electrically thick photonic-crystal layers are allowed in the design.
Electrically thin matched absorbers can be realized in many ways, but, to the best of our knowledge, only a very limited set of opportunities has been explored so far. One known possibility is to combine two thin metamaterial layers with contrasting material parameters \cite{Bilotti2006}, or to combine a thin resistive sheet with an array of small resonant split rings (which provide the necessary magnetic response) \cite{Bilotti2011}. A fundamental limitation on the bandwidth of metal-backed absorbers has been discussed in \cite{Rozanov}. A review of recently introduced multilayer absorbers can be found in \cite{Watts}. Here we will not consider such two- or multilayer structures, concentrating on the basic and fundamentally simplest case of a single sheet with properly designed properties. These single-layer absorbers provide the ultimately thin design solution, because the layer thickness cannot be made smaller than just one layer of particles (molecules). Conceptually, the simplest possible thin absorbing sheet is a uniform or composite layer of electrically negligible thickness (an impedance sheet). In this case, the incident electric field induces an infinitesimally thin sheet of electric current in the layer, which leads to dissipation of the incident power if the layer is lossy. However, it is obvious that in this case the absorbed power can reach only one half of the incident power, and total absorption is not possible (e.g., \cite{add1,Pozar_array}). This follows from the fact that the induced current sheet radiates plane waves symmetrically in the forward and backward directions. A zero transmission coefficient implies that the amplitude of this secondary wave behind the sheet equals that of the incident wave (so that the two waves cancel each other behind the sheet), but this means that the reflection coefficient is unity in amplitude. Thus, in order to enable total absorption, we must also allow a magnetic current to be induced in the layer.
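The one-half bound for a purely electric sheet can be illustrated with a short numerical check. The sketch below (an illustrative model, not taken from the paper) treats the layer as a resistive sheet of surface impedance $Z_s$ loading a free-space transmission line; sweeping $Z_s$ shows that the absorbed fraction never exceeds one half, reached at $Z_s=\eta_0/2$.

```python
import numpy as np

ETA0 = 376.730  # free-space wave impedance, ohms

def sheet_r_t(zs):
    """Normal-incidence reflection/transmission of an infinitesimally thin
    resistive sheet of surface impedance zs in free space: the sheet appears
    in parallel with the matched line (eta0) behind it."""
    z_load = 1.0 / (1.0 / zs + 1.0 / ETA0)   # sheet in parallel with the line behind
    r = (z_load - ETA0) / (z_load + ETA0)    # reflection coefficient
    t = 1.0 + r                              # tangential E-field is continuous
    return r, t

# Sweep the sheet resistance and record the absorbed power fraction
zs_values = np.linspace(1.0, 2000.0, 20000)
absorbed = np.array([1.0 - abs(r)**2 - abs(t)**2
                     for r, t in (sheet_r_t(zs) for zs in zs_values)])

best = zs_values[np.argmax(absorbed)]
print(f"max absorption = {absorbed.max():.4f} at Zs = {best:.1f} ohm")
# The maximum is one half of the incident power, reached near Zs = eta0/2.
```

At the optimum $Z_s=\eta_0/2$ one finds $r=-1/2$, $t=1/2$, so exactly half the power is dissipated, in agreement with the argument above.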
Strictly speaking, this implies that the layer thickness cannot be negligibly small (electrically), at least if no natural magnetic materials are used, but it can still be made very small compared with the wavelength. In view of the practical requirements of realizing layers with the desired electromagnetic response, which call for the use of composite structures, we do not model the layer as a homogeneous sheet described by some surface susceptibility or impedance, but assume from the beginning that the layer is a composite structure: a single layer of small polarizable particles. By engineering these inclusions, we can tune the reflection and transmission responses of the composite layer. Such artificial sheets with engineered electromagnetic properties are called ``metasurfaces'' or ``metasheets''; see the recent reviews \cite{Holloway,Shalaev_review}. The absorber designs developed here will give the required polarizabilities of the individual inclusions together with the appropriate array period. In order to have full design freedom in defining how the induced electric and magnetic moments of the absorbing dipolar particles depend on the incident fields, we assume that the particles are the most general bi-anisotropic particles, possibly nonreciprocal. Here, we will consider only the case of normal plane-wave incidence. In practice, performance stability for oblique incidence is an important issue. Conventional absorbers are made of material layers of considerable electrical thickness, and this thickness is the key parameter defining the resonant frequency. Because the effective thickness changes when the incidence angle deviates from the normal, a shift in the frequency of the absorption maximum is expected. The ultimately thin absorbing layers proposed in this paper are expected to be more stable with respect to changes of the incidence angle, but a separate study is needed to understand the angular response of these new structures.
The condition for total absorption of normally incident plane waves by a single infinite periodic array of electric and magnetic dipoles is known from antenna theory \cite{Pozar_array}. Let us assume an infinite array with period $a$ ($a$ smaller than the wavelength in the surrounding space) which contains one isotropic particle per unit cell, in which the incident electric and magnetic fields induce an electric dipole moment $\_p$ and a magnetic moment $\_m$. The two moments are orthogonal: the electric moment is along the incident electric field and the magnetic moment along the magnetic field. Arrays of both moments create secondary plane waves, and in the forward direction the secondary electric field amplitude reads \begin{equation} E_{\rm forward}={-j\omega \over 2S}\left(\eta_0 p +{1\over \eta_0} m\right),\end{equation} where $S=a^2$ is the unit-cell area, so that $j\omega p/S$ is the surface-averaged electric current density and $j\omega m/S$ is the magnetic surface current density, and $\eta_0=\sqrt{\mu_0/\epsilon_0}$ is the wave impedance of the surrounding space. A derivation of these formulas for the plane-wave fields created by planar sheets of electric and magnetic currents can be found, e.g., in \cite{Felsen_M}. In the opposite (reflection) direction, the same induced electric and magnetic currents generate a plane wave with the amplitude \begin{equation} E_{\rm back}={-j\omega \over 2S}\left(\eta_0 p -{1\over \eta_0} m\right).\end{equation} Now, we see that it is possible to choose the moments so that the secondary field cancels the incident field $E_{\rm inc}$ in the forward direction (zero transmission coefficient) while the secondary field is zero in the backward direction (zero reflection coefficient).
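This cancellation can be checked with a few lines of complex arithmetic. In the sketch below (illustrative frequency and period values, not from the paper), the moments are chosen as the balanced Huygens pair $p=-jS E_{\rm inc}/(\omega\eta_0)$, $m=\eta_0^2 p$; the backward field then vanishes while the forward secondary field exactly cancels the incident wave.

```python
import numpy as np

ETA0 = 376.730                  # free-space wave impedance, ohms
omega = 2 * np.pi * 3e9         # illustrative angular frequency (3 GHz)
a = 5e-3                        # illustrative array period, 5 mm
S = a**2                        # unit-cell area
E_inc = 1.0                     # incident field amplitude, V/m

# Balanced Huygens pair of dipole moments
p = -1j * S / (omega * ETA0) * E_inc
m = ETA0**2 * p

# Plane waves radiated by the sheets of electric and magnetic current
E_forward = -1j * omega / (2 * S) * (ETA0 * p + m / ETA0)
E_back    = -1j * omega / (2 * S) * (ETA0 * p - m / ETA0)

print("backward (reflected) field:", abs(E_back))      # ~ 0
print("total forward field:", abs(E_inc + E_forward))  # ~ 0: total absorption
```

The forward secondary field equals $-E_{\rm inc}$, so the transmitted total field is zero, and the backward field vanishes identically, as stated in the text.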
Obviously, the conditions are \begin{equation} p={-jS\over \omega \eta_0}E_{\rm inc},\qquad m=\eta_0^2 p \l{ccc} \end{equation} This arrangement of electric and magnetic current sheets is in fact a Huygens surface, and for volumetric material layers it would correspond to materials with equal relative permittivity and permeability. We note in passing that the use of volumetric materials with matched wave impedance in absorbers is well known; see, e.g., \cite{RCS,Sol} or \cite[ch.~12]{basic}. Thus, a simple approach to realizing totally absorbing layers is to arrange electrically and magnetically polarizable particles in a dense lattice and tune the polarizabilities so that \r{ccc} is satisfied. However, this is not the only possible approach. We only need to ensure that the dipole moments have the required values, but the particles in which these dipole moments are induced can be any electrically small objects which can be described as dipole scatterers. We expect considerable design freedom and possibilities for realizing additional practically useful properties if we do not restrict the design space to the simplest case of small electrically and magnetically polarizable scatterers (such as small magnetodielectric spheres). In this paper we consider planar layers formed by electrically small particles modeled by the most general linear relations between the induced dipole moments $\_p$ and $\_m$ and the local fields $\mathbf{E}_{\rm loc}$ and $\mathbf{H}_{\rm loc}$ at the positions of the particles: \begin{equation} \left[ \begin{array}{c} \mathbf{p} \\ \mathbf{m}\end{array} \right] =\left[ \begin{array}{cc} \={\alpha}_{\rm ee}& \={\alpha}_{\rm em}\\ \={\alpha}_{\rm me}& \={\alpha}_{\rm mm} \end{array} \right]\cdot \left[ \begin{array}{c} \mathbf{E}_{\rm loc} \\ \mathbf{H}_{\rm loc}\end{array} \right].
\label{eq:e1}\end{equation} Although for the desired operation of the layer the induced dipole moments must satisfy the same Huygens-sheet conditions \r{ccc}, the actual design space is vastly larger, since we can exploit the magneto-electric coupling parameters $\={\alpha}_{\rm em}$ and $\={\alpha}_{\rm me}$ to bring the induced moments to the desired balance and required amplitudes. Furthermore, additional functionalities become possible, as we will see in the following. While the simple and well-known solution in the form of electric and magnetic dipole particles corresponds to simple magnetodielectric layers with $\epsilon_r=\mu_r$ (if we think of layers of homogeneous materials), the general case of bi-anisotropic polarizabilities of individual particles corresponds to bi-anisotropic absorbing layers. In the past, chiral absorbing layers were studied in detail \cite{chiral87,chiral89,chiral89a,chiral92,chiral96,cloete,Koschny}, but only as metal-backed volumetric layers. The use of the omega coupling phenomenon for matched absorber layers was explored in \cite{basic, omega, reference2}, but also only for material layers on perfectly conducting surfaces. Recently, various absorbers have been proposed for electromagnetic waves in the microwave and optical spectra \cite{Sajuyigbe1,Zhou1,Korolev1,Yuan1,Jiang1,Cui1,Shvets1}. As mentioned, most of these absorbers are backed with a metal sheet, which limits their functionality for waves coming from the other side. These absorbers contain more than one layer of particles, and they are designed to absorb waves from one side while exhibiting some uncontrollable properties for waves coming from the other side. Here, we answer the important questions: How can one realize single-layer absorbers which are perfect from one side of the sheet, and what functionalities can be realized for waves coming from the other side?
Of course, this implies that there is no metal (PEC) ground plane as a part of the absorbing structure. Here, we study the possible use of single arrays of bi-anisotropic particles of all known classes: reciprocal chiral and omega particles and nonreciprocal Tellegen and ``moving'' particles \cite{classes,basic}. In this paper we also consider the use of particles which have hybrid electromagnetic properties of several classes, e.g. ``moving'' chiral and Tellegen omega particles. The electromagnetic coupling of the artificial omega-Tellegen particle was measured experimentally in \cite{mTellegen}. It was shown that nonreciprocal electromagnetic coupling really exists in the particle and that the coupling coefficient is comparable in magnitude to the electric and magnetic polarizabilities. The described implementation of the particle requires an external magnetic bias field (3570 Oe in \cite{mTellegen}). The ferrite inclusions were made of yttrium iron garnet. \section{Total absorption in arrays of general bi-anisotropic particles} \subsection{Effective polarizability dyadics of particles in periodic arrays} In this paper, we consider thin absorbers for normally incident plane waves and concentrate on uniaxial structures, isotropic in the plane of the layer. This property ensures that the absorber functions for arbitrarily polarized incident plane waves. The orientation of the absorbing sheet in space is defined by the unit vector $\_z_0$, orthogonal to its plane. The layer consists of an array of electrically small uniaxial particles. As discussed above, total absorption requires at least electric and magnetic dipole moments induced in the particles, and the requirement of ultimately small thickness means that higher-order multipoles are negligible.
Thus, we assume that the particles are bi-anisotropic particles characterized by four dyadic polarizabilities: electric, magnetic, electromagnetic, and magnetoelectric, which relate the local electromagnetic fields to the induced electric and magnetic dipole moments as in (\ref{eq:e1}). The uniaxial symmetry allows only isotropic response and rotations around the axis $\_z_0$. Thus, all the polarizabilities in (\ref{eq:e1}) take the forms: \begin{equation} \begin{array}{c} \={\alpha}_{\rm ee}=\alpha_{\rm ee}^{\rm co}\overline{\overline{I}}_{\rm t}+\alpha_{\rm ee}^{\rm cr}\overline{\overline{J}}_{\rm t},\qquad \displaystyle \={\alpha}_{\rm mm}=\alpha_{\rm mm}^{\rm co}\overline{\overline{I}}_{\rm t}+\alpha_{\rm mm}^{\rm cr}\overline{\overline{J}}_{\rm t}\\\vspace*{.1cm}\displaystyle \={\alpha}_{\rm em}=\alpha_{\rm em}^{\rm co}\overline{\overline{I}}_{\rm t}+\alpha_{\rm em}^{\rm cr}\overline{\overline{J}}_{\rm t},\qquad \displaystyle \={\alpha}_{\rm me}=\alpha_{\rm me}^{\rm co}\overline{\overline{I}}_{\rm t}+\alpha_{\rm me}^{\rm cr}\overline{\overline{J}}_{\rm t}, \end{array}\label{eq:g1} \end{equation} where indices ${\rm co}$ and ${\rm cr}$ refer to the symmetric and antisymmetric parts of the corresponding dyadics, respectively. $\overline{\overline{I}}_{\rm t}=\overline{\overline{I}}-\mathbf{z}_0\mathbf{z}_0$ is the transverse unit dyadic and $\overline{\overline{J}}_{\rm t}=\mathbf{z}_0\times\overline{\overline{I}}_{\rm t}$ is the vector-product operator. The particles are arranged in a square lattice with a unit cell of size $a\times a$. The grid is excited by an arbitrarily polarized plane wave with electric and magnetic fields $\mathbf{E}_{\rm inc}$ and $\mathbf{H}_{\rm inc}$, respectively, which are uniform in the array plane (normal incidence). In this situation, the induced dipole moments are the same for all particles. We assume that the grid period $a$ is smaller than the wavelength, so that no grating lobes are generated.
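The transverse dyadics used in the decompositions (\ref{eq:g1}) have a simple $2\times2$ matrix representation in the plane of the sheet. The sketch below (a notation check with arbitrary illustrative coefficient values) builds $\overline{\overline{I}}_{\rm t}$ and $\overline{\overline{J}}_{\rm t}$ and confirms that any dyadic of the form $\alpha^{\rm co}\overline{\overline{I}}_{\rm t}+\alpha^{\rm cr}\overline{\overline{J}}_{\rm t}$ is invariant under rotations about $\_z_0$, which is the uniaxial symmetry invoked above.

```python
import numpy as np

# Transverse unit dyadic and the vector-product operator z0 x I_t,
# written as 2x2 matrices acting on the in-plane (x, y) field components
I_t = np.eye(2)
J_t = np.array([[0.0, -1.0],
                [1.0,  0.0]])

# J_t . J_t = -I_t (a 90-degree rotation applied twice)
assert np.allclose(J_t @ J_t, -I_t)

# A generic uniaxial polarizability: symmetric (co) + antisymmetric (cr) parts
a_co, a_cr = 2.0 + 1.0j, 0.5 - 0.3j   # arbitrary illustrative values
alpha = a_co * I_t + a_cr * J_t

# Invariance under rotation about z0: R alpha R^T = alpha for any angle
theta = 0.7
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
print(np.allclose(R @ alpha @ R.T, alpha))  # -> True
```

The invariance holds because both $\overline{\overline{I}}_{\rm t}$ and $\overline{\overline{J}}_{\rm t}$ commute with in-plane rotations, so no other dyadic components are compatible with the assumed symmetry.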
The local fields exciting the particles are the sums of the external incident field and the interaction field caused by the induced dipole moments in other particles: \begin{equation} \begin{array}{c} \mathbf{E}_{\rm loc}=\mathbf{E}_{\rm{inc}}+\overline{\overline{\beta}}_{\rm e}\cdot\mathbf{p} \vspace*{.2cm}\\\displaystyle \mathbf{H}_{\rm loc}=\mathbf{H}_{\rm inc}+\overline{\overline{\beta}}_{\rm m}\cdot\mathbf{m} , \end{array}\label{eq:h1} \end{equation} where $\overline{\overline{\beta}}_{\rm e}$ and $\overline{\overline{\beta}}_{\rm m}$ are the interaction constants. These dyadic coefficients are proportional to the two-dimensional unit dyadic $\overline{\overline{I}}_{\rm t}$. Explicit analytical expressions for the interaction constants can be found in \cite{basic}. Equations (\ref{eq:e1}) and (\ref{eq:h1}) can be re-written as relations between the induced dipole moments and the incident fields: \begin{equation} \left[ \begin{array}{c} \mathbf{p} \\ \mathbf{m}\end{array} \right] =\left[ \begin{array}{cc} \={\widehat{\alpha}}_{\rm ee} & \={\widehat{\alpha}}_{\rm em}\\\={\widehat{\alpha}}_{\rm me}& \={\widehat{\alpha}}_{\rm mm} \end{array} \right]\cdot \left[ \begin{array}{c} \mathbf{E}_{\rm inc} \\ \mathbf{H}_{\rm inc}\end{array} \right] , \label{eq:j1} \end{equation} where the effective polarizabilities (marked by hats) include the effects of particle interactions in the array. 
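For the purely co-polarized (scalar) special case, the effective polarizabilities can also be obtained by a direct numerical solution of the linear system formed by (\ref{eq:e1}) and (\ref{eq:h1}), which provides a useful cross-check of the closed-form expressions quoted next. The sketch below uses arbitrary complex polarizability and interaction-constant values (illustrative only, not from any specific design).

```python
import numpy as np

# Arbitrary illustrative scalar (co-polarized) polarizabilities and
# interaction constants; units are suppressed in this numerical check.
a_ee, a_em = 1.2 - 0.4j, 0.3 + 0.1j
a_me, a_mm = -0.3 + 0.1j, 0.9 - 0.2j
b_e, b_m = 0.2 + 0.5j, 0.1 + 0.4j

# Direct route: [p; m] = A ([E; H] + B [p; m])  =>  A_hat = (I - A B)^{-1} A
A = np.array([[a_ee, a_em],
              [a_me, a_mm]])
B = np.diag([b_e, b_m])
A_hat = np.linalg.solve(np.eye(2) - A @ B, A)

# Closed-form expression for the effective electric polarizability
denom = 1 - a_ee * b_e - a_em * b_m * a_me * b_e / (1 - a_mm * b_m)
a_ee_hat = (a_ee + a_em * b_m * a_me / (1 - a_mm * b_m)) / denom

print(np.isclose(A_hat[0, 0], a_ee_hat))  # -> True
```

Both routes describe the same self-consistent excitation problem, so the block-matrix inversion and the explicit formula agree to machine precision.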
Explicit formulas for the effective polarizabilities in terms of the individual polarizabilities and interaction constants are given in \cite{Teemu}: \begin{equation} \hspace*{-.2cm}\begin{array}{c} \={\widehat{\alpha}}_{\rm ee}\!=\!\left(\overline{\overline{I}}_{\rm t}\!-\!\={\alpha}_{\rm ee}\!\cdot\!\overline{\overline{\beta}}_{\rm e}\!-\!\={\alpha}_{\rm em}\!\cdot\!\overline{\overline{\beta}}_{\rm m}\!\cdot\!(\overline{\overline{I}}_{\rm t}\!-\!\={\alpha}_{\rm mm}\!\cdot\!\overline{\overline{\beta}}_{\rm m})^{-1}\!\cdot\!\={\alpha}_{\rm me}\!\cdot\!\overline{\overline{\beta}}_{\rm e}\right)^{-1} \vspace*{.2cm}\\\displaystyle \cdot\left(\={\alpha}_{\rm ee}+\={\alpha}_{\rm em}\cdot\overline{\overline{\beta}}_{\rm m}\cdot(\overline{\overline{I}}_{\rm t}-\={\alpha}_{\rm mm}\cdot\overline{\overline{\beta}}_{\rm m})^{-1}\cdot\={\alpha}_{\rm me}\right) \vspace*{.4cm}\\\displaystyle \={\widehat{\alpha}}_{\rm em}\!=\!\left(\overline{\overline{I}}_{\rm t}\!-\!\={\alpha}_{\rm ee}\!\cdot\!\overline{\overline{\beta}}_{\rm e}\!-\!\={\alpha}_{\rm em}\!\cdot\!\overline{\overline{\beta}}_{\rm m}\!\cdot\!(\overline{\overline{I}}_{\rm t}\!-\!\={\alpha}_{\rm mm}\!\cdot\!\overline{\overline{\beta}}_{\rm m})^{-1}\!\cdot\!\={\alpha}_{\rm me}\!\cdot\!\overline{\overline{\beta}}_{\rm e}\right)^{-1} \vspace*{.2cm}\\\displaystyle \cdot\left(\={\alpha}_{\rm em}+\={\alpha}_{\rm em}\cdot\overline{\overline{\beta}}_{\rm m}\cdot(\overline{\overline{I}}_{\rm t}-\={\alpha}_{\rm mm}\cdot\overline{\overline{\beta}}_{\rm m})^{-1}\cdot\={\alpha}_{\rm mm}\right) \vspace*{.4cm}\\\displaystyle \={\widehat{\alpha}}_{\rm me}\!=\!\left(\overline{\overline{I}}_{\rm t}\!-\!\={\alpha}_{\rm me}\!\cdot\!\overline{\overline{\beta}}_{\rm e}\!\cdot\!(\overline{\overline{I}}_{\rm t}\!-\!\={\alpha}_{\rm ee}\!\cdot\!\overline{\overline{\beta}}_{\rm e})^{-1}\!\cdot\!\={\alpha}_{\rm em}\!\cdot\!\overline{\overline{\beta}}_{\rm m}\!-\!\={\alpha}_{\rm mm}\!\cdot\!\overline{\overline{\beta}}_{\rm m}\right)^{-1} \vspace*{.2cm}\\\displaystyle \cdot\left(\={\alpha}_{\rm me}+\={\alpha}_{\rm me}\cdot\overline{\overline{\beta}}_{\rm e}\cdot(\overline{\overline{I}}_{\rm t}-\={\alpha}_{\rm ee}\cdot\overline{\overline{\beta}}_{\rm e})^{-1}\cdot\={\alpha}_{\rm ee}\right) \vspace*{.4cm}\\\displaystyle \={\widehat{\alpha}}_{\rm mm}\!=\!\left(\overline{\overline{I}}_{\rm t}\!-\!\={\alpha}_{\rm me}\!\cdot\!\overline{\overline{\beta}}_{\rm e}\!\cdot\!(\overline{\overline{I}}_{\rm t}\!-\!\={\alpha}_{\rm ee}\!\cdot\!\overline{\overline{\beta}}_{\rm e})^{-1}\!\cdot\!\={\alpha}_{\rm em}\!\cdot\!\overline{\overline{\beta}}_{\rm m}\!-\!\={\alpha}_{\rm mm}\!\cdot\!\overline{\overline{\beta}}_{\rm m}\right)^{-1} \vspace*{.2cm}\\\displaystyle \cdot\left(\={\alpha}_{\rm mm}+\={\alpha}_{\rm me}\cdot\overline{\overline{\beta}}_{\rm e}\cdot(\overline{\overline{I}}_{\rm t}-\={\alpha}_{\rm ee}\cdot\overline{\overline{\beta}}_{\rm e})^{-1}\cdot\={\alpha}_{\rm em}\right). \end{array}\label{eq:l1} \end{equation} Because the interaction constants are diagonal dyadics, the symmetry properties of the effective polarizabilities are the same as for the individual particle polarizabilities (as defined in (\ref{eq:g1})): \begin{equation} \begin{array}{c} \={\widehat{\alpha}}_{\rm ee}=\widehat{\alpha}_{\rm ee}^{\rm co}\overline{\overline{I}}_{\rm t}+\widehat{\alpha}_{\rm ee}^{\rm cr}\overline{\overline{J}}_{\rm t},\qquad \displaystyle \={\widehat{\alpha}}_{\rm mm}=\widehat{\alpha}_{\rm mm}^{\rm co}\overline{\overline{I}}_{\rm t}+\widehat{\alpha}_{\rm mm}^{\rm cr}\overline{\overline{J}}_{\rm t}\\\vspace*{.1cm}\displaystyle \={\widehat{\alpha}}_{\rm em}=\widehat{\alpha}_{\rm em}^{\rm co}\overline{\overline{I}}_{\rm t}+\widehat{\alpha}_{\rm em}^{\rm cr}\overline{\overline{J}}_{\rm t},\qquad\displaystyle \={\widehat{\alpha}}_{\rm me}=\widehat{\alpha}_{\rm me}^{\rm co}\overline{\overline{I}}_{\rm t}+\widehat{\alpha}_{\rm me}^{\rm cr}\overline{\overline{J}}_{\rm t}\displaystyle.
\end{array}\label{eq:k1} \end{equation} This can be checked by substituting (\ref{eq:g1}) in (\ref{eq:l1}). \subsection{Reflection and transmission coefficients} In the theory of absorbing sheets, we will distinguish between illuminations of the sheet from its two opposite sides, along $-\mathbf{z}_0$ and $\mathbf{z}_0$. In the rest of the paper, we will use double signs for these two cases, where the top and bottom signs correspond to the incident plane wave propagating in $-\mathbf{z}_0$ and $\mathbf{z}_0$ directions, respectively. In the incident plane wave, the electric and magnetic fields satisfy \begin{equation} \mathbf{H}_{\rm inc}=\mp\frac{1}{\eta_0}\overline{\overline{J}}_{\rm t}\cdot\mathbf{E}_{\rm inc}. \label{eq:m1}\end{equation} Thus, the dipole moments in (\ref{eq:j1}) can be written as \begin{equation} \displaystyle \left[ \displaystyle\begin{array}{c} \mathbf{p} \\ \mathbf{m}\end{array} \right] =\left[\displaystyle \begin{array}{c}\displaystyle \={\widehat{\alpha}}_{\rm ee}\mp\frac{1}{\eta_0}\={\widehat{\alpha}}_{\rm em}\cdot(\mathbf{z}_0\times\overline{\overline{I}}_{\rm t})\vspace{.1cm}\vspace*{.2cm}\\\displaystyle \={\widehat{\alpha}}_{\rm me}\mp\frac{1}{\eta_0}\={\widehat{\alpha}}_{\rm mm}\cdot(\mathbf{z}_0\times\overline{\overline{I}}_{\rm t}) \end{array}\right]\cdot \begin{array}{c} \mathbf{E}_{\rm inc} \end{array}. \label{eq:n1}\end{equation} Secondary plane waves (reflected and transmitted) are generated by surface-averaged current densities \begin{equation} \displaystyle \mathbf{J}_{\rm e}=\frac{j\omega}{S}\mathbf{p}, \qquad \mathbf{J}_{\rm m}=\frac{j\omega}{S}\mathbf{m}. 
\label{eq:o1} \end{equation} The fields radiated by infinite sheets of electric and magnetic currents are easily found from the Maxwell equations \cite{Teemu}: \begin{equation} \begin{array}{l} \displaystyle \mathbf{E}_{\rm r}=-\frac{j\omega}{2S}\left\{\left[\eta_0\widehat{\alpha}_{\rm ee}^{\rm co}\pm \widehat{\alpha}_{\rm em}^{\rm cr}\pm \widehat{\alpha}_{\rm me}^{\rm cr}-\frac{1}{\eta_0} \widehat{\alpha}_{\rm mm}^{\rm co}\right]\overline{\overline{I}}_{\rm t}\right.\vspace*{.2cm}\\\displaystyle \hspace*{1.6cm}\left.+\left[\eta_0\widehat{\alpha}_{\rm ee}^{\rm cr}\mp \widehat{\alpha}_{\rm em}^{\rm co}\mp \widehat{\alpha}_{\rm me}^{\rm co}-\frac{1}{\eta_0} \widehat{\alpha}_{\rm mm}^{\rm cr}\right]\overline{\overline{J}}_{\rm t}\right\}\cdot\mathbf{E}_{\rm inc} \end{array}\label{eq:q1} \end{equation} \begin{equation} \begin{array}{l} \displaystyle \mathbf{E}_{\rm t}=\left\{\left[1-\frac{j\omega}{2S}\left(\eta_0\widehat{\alpha}_{\rm ee}^{\rm co}\pm\widehat{\alpha}_{\rm em}^{\rm cr}\mp\widehat{\alpha}_{\rm me}^{\rm cr} +\frac{1}{\eta_0}\widehat{\alpha}_{\rm mm}^{\rm co}\right)\right]\overline{\overline{I}}_{\rm t}\right.\vspace*{.2cm}\\\displaystyle \hspace*{.3cm}\left. -\frac{j\omega}{2S} \left[\eta_0\widehat{\alpha}_{\rm ee}^{\rm cr}\mp\widehat{\alpha}_{\rm em}^{\rm co} \pm\widehat{\alpha}_{\rm me}^{\rm co}+\frac{1}{\eta_0} \widehat{\alpha}_{\rm mm}^{\rm cr}\right] \overline{\overline{J}}_{\rm t}\right\}\cdot\mathbf{E}_{\rm inc}. \end{array}\label{eq:s1} \end{equation} Using these general expressions for the reflected and transmitted fields from general bi-anisotropic planar arrays, we are ready to study how we can make these fields equal to zero, as required for perfect absorbers. \subsection{General conditions for total absorption} \subsubsection{Total absorption from both sides of the sheet} The definition of a perfect absorber implies that \begin{equation} \begin{array}{c} \hspace*{.5cm}\mathbf{E}_{\rm r}=0,\hspace*{.5cm}\mathbf{E}_{\rm t}=0 \end{array}\label{eq:t1}.
\end{equation} Equating to zero the expressions in square brackets in (\ref{eq:q1}) and (\ref{eq:s1}), we arrive at sufficient conditions for total absorption of arbitrarily polarized incident plane waves: \begin{equation} \begin{array}{c} \displaystyle \eta_0\widehat{\alpha}_{\rm ee}^{\rm co}\pm \widehat{\alpha}_{\rm em}^{\rm cr}\pm\widehat{\alpha}_{\rm me}^{\rm cr}-\frac{1}{\eta_0} \widehat{\alpha}_{\rm mm}^{\rm co}=0 \vspace*{.2cm}\\\displaystyle \eta_0\widehat{\alpha}_{\rm ee}^{\rm cr}\mp \widehat{\alpha}_{\rm em}^{\rm co}\mp \widehat{\alpha}_{\rm me}^{\rm co}-\frac{1}{\eta_0} \widehat{\alpha}_{\rm mm}^{\rm cr}=0 \vspace*{.2cm}\\\displaystyle \eta_0\widehat{\alpha}_{\rm ee}^{\rm co}\pm\widehat{\alpha}_{\rm em}^{\rm cr}\mp\widehat{\alpha}_{\rm me}^{\rm cr}+\frac{1}{\eta_0}\widehat{\alpha}_{\rm mm}^{\rm co}=\frac{2S}{j\omega} \vspace*{.2cm}\\\displaystyle \eta_0\widehat{\alpha}_{\rm ee}^{\rm cr}\mp\widehat{\alpha}_{\rm em}^{\rm co}\pm\widehat{\alpha}_{\rm me}^{\rm co}+\frac{1}{\eta_0} \widehat{\alpha}_{\rm mm}^{\rm cr}=0. \end{array}\label{eq:u1} \end{equation} Because in the expressions for the reflected and transmitted fields (\ref{eq:q1}) and (\ref{eq:s1}) the terms proportional to $\overline{\overline{I}}_{\rm t}$ and $\overline{\overline{J}}_{\rm t}$ are orthogonal, these conditions are also the necessary conditions for total absorption. The exception to the last statement is the case of circularly polarized incident waves, for which these conditions are sufficient but not necessary, opening up further design possibilities if only circularly polarized waves need to be absorbed totally.
For circularly polarized incidence, the total absorption conditions read \begin{equation} \begin{array}{l} \displaystyle \eta_0\widehat{\alpha}_{\rm ee}^{\rm co}\pm \widehat{\alpha}_{\rm em}^{\rm cr}\pm\widehat{\alpha}_{\rm me}^{\rm cr}-\frac{1}{\eta_0} \widehat{\alpha}_{\rm mm}^{\rm co} \vspace*{.2cm}\\\displaystyle \hspace*{1.6cm}=(\pm j) \left[ \eta_0\widehat{\alpha}_{\rm ee}^{\rm cr}\mp \widehat{\alpha}_{\rm em}^{\rm co}\mp \widehat{\alpha}_{\rm me}^{\rm co}-\frac{1}{\eta_0} \widehat{\alpha}_{\rm mm}^{\rm cr}\right] \vspace*{.4cm}\\\displaystyle 1-\frac{j\omega}{2S}\left(\eta_0\widehat{\alpha}_{\rm ee}^{\rm co}\pm\widehat{\alpha}_{\rm em}^{\rm cr}\mp\widehat{\alpha}_{\rm me}^{\rm cr} +\frac{1}{\eta_0}\widehat{\alpha}_{\rm mm}^{\rm co}\right) \vspace*{.2cm}\\\displaystyle \hspace*{1.4cm}= (\mp j) \frac{j\omega}{2S} \left[\eta_0\widehat{\alpha}_{\rm ee}^{\rm cr}\mp\widehat{\alpha}_{\rm em}^{\rm co} \pm\widehat{\alpha}_{\rm me}^{\rm co}+\frac{1}{\eta_0} \widehat{\alpha}_{\rm mm}^{\rm cr}\right] . \end{array}\label{eq:u1CP} \end{equation} Here the $\pm j$ coefficients correspond to the two orthogonal polarizations of the incident circularly polarized fields. In the following, we will use the general sufficient conditions (\ref{eq:u1}), which ensure total absorption for arbitrary polarization of the incident waves. As one can see, these conditions connect the symmetric and antisymmetric parts of the electric and magnetic polarizabilities to the antisymmetric and symmetric parts of the cross-coupling polarizabilities, respectively. This is a very important point, because for reciprocal particles (for example, arbitrarily shaped metal or dielectric particles) the antisymmetric parts of the electric and magnetic polarizabilities are zero (e.g., \cite{basic}), which limits the symmetric components of the electromagnetic coupling dyadics.
Furthermore, we see from (\ref{eq:u1}) that zero reflection does not require absorption inside the particles, while zero transmission does (note the imaginary quantity on the right-hand side of the third equation). Let us first analyze layers which exhibit the total absorption property from both sides of the sheet. In this case, conditions (\ref{eq:u1}) should hold for both choices of the $\pm$ signs, and we find that all the magnetoelectric coefficients must vanish: \begin{equation} \widehat{\alpha}_{\rm em}^{\rm cr}=\widehat{\alpha}_{\rm me}^{\rm cr}=\widehat{\alpha}_{\rm em}^{\rm co}= \widehat{\alpha}_{\rm me}^{\rm co}=0.\end{equation} Thus, we conclude that the only possible realization of total absorbers in the form of a single layer of particles is the use of electrically and magnetically polarizable uniaxial particles with the polarizabilities balanced as in a Huygens' pair: \begin{equation} \widehat{\alpha}_{\rm ee}^{\rm co}={S\over j\omega \eta_0}, \qquad \widehat{\alpha}_{\rm mm}^{\rm co}=\eta_0^2 \widehat{\alpha}_{\rm ee}^{\rm co}\l{both_sides}\end{equation} (with all the other polarizability components equal to zero). The effective polarizabilities, which include the effect of particle interactions in the array, should thus be purely imaginary, corresponding to a resonance where the particles show purely absorptive properties. The relations between the collective polarizabilities and the polarizabilities of the same particles in free space (\ref{eq:l1}) in this special case simplify to \begin{equation} {1\over \widehat{\alpha}_{\rm ee}^{\rm co}}={1\over \alpha_{\rm ee}^{\rm co}}-\beta_{\rm e},\qquad {1\over \widehat{\alpha}_{\rm mm}^{\rm co}}={1\over \alpha_{\rm mm}^{\rm co}}-\beta_{\rm m} .
\end{equation} Using the known expression for the interaction constants in regular dipolar arrays \cite[eq.~(4.89)]{modeboo}, we can find the required particle polarizabilities in free space: \begin{equation} {1\over \alpha_{\rm ee}^{\rm co}}={\rm Re}(\beta_{\rm e})+j{k^3\over 6\pi\epsilon_0}+j{\omega \eta_0 \over 2 S},\end{equation} \begin{equation} {1\over \alpha_{\rm mm}^{\rm co}}={\rm Re}(\beta_{\rm m})+j{k^3\over 6\pi\mu_0}+j{\omega \over 2S\eta_0}.\end{equation} We again see that the reactive response of the individual particles should be such that, together with the reactive part of the interaction field, a resonance condition is satisfied. We can also check that the amplitude of the secondary plane waves created by the two dipolar arrays of the perfect absorber equals one half of the incident field amplitude: \begin{equation} E_{\rm sc}=-{\eta_0\over 2}{j\omega p\over S}=-{\eta_0\over 2}{j\omega \over S}\widehat{\alpha}_{\rm ee}^{\rm co} E_{\rm inc}=-{1\over 2}E_{\rm inc}\end{equation} (we have substituted $\widehat{\alpha}_{\rm ee}^{\rm co}$ from \r{both_sides}). The field created by the magnetic-dipole array has the same amplitude. In the forward direction, the sum of these two plane waves compensates the incident field, while in the reflection direction the two plane waves are out of phase and their sum is zero. \subsubsection{Total absorption from one side of the sheet} Next, we consider single-layer sheets which work as total absorbers only from one side and study what functionalities can be engineered for illumination from the opposite side. From (\ref{eq:u1}), we know that the presence of the cross-coupling polarizabilities (as well as the anti-symmetric parts of the electric and magnetic polarizabilities) causes asymmetry in the interaction of the sheet with waves incident from the opposite directions. Let us assume that we satisfy (\ref{eq:u1}) for waves incident from one of the two sides.
This corresponds to conditions \begin{equation} \begin{array}{c} \displaystyle \eta_0\widehat{\alpha}_{\rm ee}^{\rm co}-\frac{1}{\eta_0} \widehat{\alpha}_{\rm mm}^{\rm co}=\mp ( \widehat{\alpha}_{\rm em}^{\rm cr}+\widehat{\alpha}_{\rm me}^{\rm cr}) \vspace*{.2cm}\\\displaystyle \eta_0\widehat{\alpha}_{\rm ee}^{\rm cr}-\frac{1}{\eta_0} \widehat{\alpha}_{\rm mm}^{\rm cr}=\pm (\widehat{\alpha}_{\rm em}^{\rm co}+ \widehat{\alpha}_{\rm me}^{\rm co}) \vspace*{.2cm}\\\displaystyle \eta_0\widehat{\alpha}_{\rm ee}^{\rm co}+\frac{1}{\eta_0}\widehat{\alpha}_{\rm mm}^{\rm co}= \frac{2S}{j\omega}\mp (\widehat{\alpha}_{\rm em}^{\rm cr}-\widehat{\alpha}_{\rm me}^{\rm cr}) \vspace*{.2cm}\\\displaystyle \eta_0\widehat{\alpha}_{\rm ee}^{\rm cr}+\frac{1}{\eta_0} \widehat{\alpha}_{\rm mm}^{\rm cr}=\pm (\widehat{\alpha}_{\rm em}^{\rm co}-\widehat{\alpha}_{\rm me}^{\rm co}). \end{array}\label{eq:teem5} \end{equation} Here, as above, the upper sign corresponds to $-\_z_0$ directed and the lower sign to the oppositely-directed incident plane waves. 
Using the same conditions for total absorption for the opposite incidence direction (taking the lower signs in (\ref{eq:u1}), (\ref{eq:q1}), and (\ref{eq:s1})), we find the reflected and transmitted electric fields for the same sheet when the incidence is from the other side: \begin{equation} \begin{array}{l} \displaystyle \mathbf{E}_{\rm r}=\frac{j\omega}{S}\left\{\pm \left[ \widehat{\alpha}_{\rm em}^{\rm cr}+\widehat{\alpha}_{\rm me}^{\rm cr}\right]\overline{\overline{I}}_{\rm t}\mp\left[ \widehat{\alpha}_{\rm em}^{\rm co}+ \widehat{\alpha}_{\rm me}^{\rm co}\right]\overline{\overline{J}}_{\rm t}\right\}\cdot\mathbf{E}_{\rm inc} \end{array} \label{eq:teem6} \end{equation} \begin{equation} \begin{array}{l} \displaystyle \mathbf{E}_{\rm t}=\frac{j\omega}{S}\left\{\pm \left[ \widehat{\alpha}_{\rm em}^{\rm cr}-\widehat{\alpha}_{\rm me}^{\rm cr}\right]\overline{\overline{I}}_{\rm t}\mp\left[ \widehat{\alpha}_{\rm em}^{\rm co}- \widehat{\alpha}_{\rm me}^{\rm co}\right]\overline{\overline{J}}_{\rm t}\right\}\cdot\mathbf{E}_{\rm inc}. \end{array}\label{eq:teem7} \end{equation} These equations show that, by tuning the layer to act as a perfect absorber from one side, it is possible to realize some special properties (in reflection and transmission) from the other side. To study these properties, we begin with the case of reciprocal structures. In this case, the electric and magnetic polarizabilities are symmetric dyadics ($\widehat{\alpha}_{\rm ee}^{\rm cr}=0$, $\widehat{\alpha}_{\rm mm}^{\rm cr}=0$) and the field coupling coefficients satisfy \begin{equation} \widehat{\alpha}_{\rm em}^{\rm co}=-\widehat{\alpha}_{\rm me}^{\rm co},\qquad \widehat{\alpha}_{\rm em}^{\rm cr}=\widehat{\alpha}_{\rm me}^{\rm cr},\end{equation} corresponding to chiral and omega couplings \cite{basic}. Due to reciprocity, the transmission coefficient is zero for waves incident from both sides.
The second equation in (\ref{eq:teem5}) is satisfied identically, and from the last one we see that the chirality parameter $\widehat{\alpha}_{\rm me}^{\rm co}$ must be zero. Thus, if the sheet is tuned to work as a perfect absorber from one side, the chirality parameter is zero, and there is no possibility to tune the reflection properties from the opposite side by introducing chirality. On the other hand, the omega coupling coefficient $\widehat{\alpha}_{\rm me}^{\rm cr}$ is not fixed by the total absorption condition on one side, because from the first and third equations in (\ref{eq:teem5}) we find \begin{equation} \widehat{\alpha}_{\rm ee}^{\rm co}={S\over j\omega \eta_0}\mp {1\over \eta_0}\widehat{\alpha}_{\rm me}^{\rm cr} \l{om1}\end{equation} \begin{equation} \widehat{\alpha}_{\rm mm}^{\rm co}={\eta_0 S\over j\omega }\pm \eta_0\widehat{\alpha}_{\rm me}^{\rm cr}.\l{om2} \end{equation} Comparing with \r{both_sides}, we see that by introducing omega coupling we can maintain the property of total absorption from one of the sides with relaxed requirements on the electric and magnetic polarizabilities. For instance, the omega coupling parameter $\widehat{\alpha}_{\rm me}^{\rm cr}$ can be engineered so that the required magnetic polarizability is much smaller than that dictated by \r{both_sides}. The reflection coefficient from the side opposite to the matched one is found from (\ref{eq:teem6}): \begin{equation} \mathbf{E}_{\rm r}=\pm \frac{2j\omega}{S} \widehat{\alpha}_{\rm em}^{\rm cr}\overline{\overline{I}}_{\rm t}\cdot\mathbf{E}_{\rm inc}. \l{omega_rotation}\end{equation} Thus, by varying the omega coupling parameter, we can control the co-polarized reflection from the opposite side of the sheet while maintaining the matching and total absorption properties from one side. More functionalities become available if we allow a nonreciprocal response of the particles.
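As a quick numerical check of this relaxation, the following sketch (with purely illustrative values for the operating frequency and the unit-cell area $S$, and an omega coupling coefficient chosen to halve the required magnetic polarizability) verifies that the zero-reflection and total-absorption conditions on the matched side still hold and evaluates the co-polarized reflection seen from the opposite side:

```python
import math

# Illustrative parameters (assumed, not taken from the text)
eta0 = 376.73                    # free-space wave impedance [ohm]
omega = 2 * math.pi * 5e9        # angular frequency for a 5 GHz design
S = 0.01 ** 2                    # unit-cell area of the array [m^2]

# Omega coupling chosen so that it halves the required magnetic polarizability
a_me_cr = -S / (2j * omega)
a_em_cr = a_me_cr                # reciprocal omega coupling

# Electric and magnetic polarizabilities from the relaxed conditions
a_ee = S / (1j * omega * eta0) - a_me_cr / eta0
a_mm = eta0 * S / (1j * omega) + eta0 * a_me_cr

# Zero-reflection and total-absorption conditions on the matched side
cond1 = eta0 * a_ee - a_mm / eta0 + (a_em_cr + a_me_cr)
cond3 = eta0 * a_ee + a_mm / eta0 - 2 * S / (1j * omega)
print(abs(cond1), abs(cond3))          # both ~0: still a perfect absorber

# The magnetic polarizability is now half of the balanced-dipole value
ratio = abs(a_mm) / abs(eta0 * S / (1j * omega))
print(ratio)                           # -> 0.5

# Co-polarized reflection coefficient seen from the opposite side
r_back = 2j * omega / S * a_em_cr
print(abs(r_back))                     # -> 1.0 for this particular choice
```

With this particular choice the back side of the absorber becomes fully reflecting; smaller omega coupling values interpolate between the zero back-side reflection of the balanced case and this extreme.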
For simplicity, let us concentrate here on the cases where the magnetoelectric coupling is only due to nonreciprocity, assuming that the chirality and omega coupling coefficients are zero (the effects of chirality and omega coupling have been considered above). In these cases, the coupling coefficients satisfy \begin{equation} \widehat{\alpha}_{\rm em}^{\rm co}=\widehat{\alpha}_{\rm me}^{\rm co},\qquad \widehat{\alpha}_{\rm em}^{\rm cr}=-\widehat{\alpha}_{\rm me}^{\rm cr},\end{equation} corresponding to Tellegen and ``moving'' particles, respectively \cite{classes,basic}. From the first equation in (\ref{eq:teem5}), we find that $\eta_0\widehat{\alpha}_{\rm ee}^{\rm co}=\frac{1}{\eta_0} \widehat{\alpha}_{\rm mm}^{\rm co}$ (Huygens' relation). From this and the third relation we get \begin{equation} \widehat{\alpha}_{\rm ee}^{\rm co}= {S\over j\omega \eta_0}\mp {1\over \eta_0}\widehat{\alpha}_{\rm em}^{\rm cr}.\l{mov_co_cr}\end{equation} The second and the last relations in (\ref{eq:teem5}) connect the anti-symmetric parts of the electric and magnetic polarizabilities with the Tellegen parameter: \begin{equation} \widehat{\alpha}_{\rm ee}^{\rm cr}=\pm {1\over \eta_0} \widehat{\alpha}_{\rm em}^{\rm co},\qquad \widehat{\alpha}_{\rm mm}^{\rm cr}=\mp \eta_0 \widehat{\alpha}_{\rm em}^{\rm co}.\end{equation} Thus, if the Tellegen coupling is present, its effects should be balanced with the nonreciprocity in both electric and magnetic polarizabilities. Tellegen coupling allows control of the reflection coefficient from the opposite side, since \begin{equation} \mathbf{E}_{\rm r}=\mp \frac{2j\omega}{S} \widehat{\alpha}_{\rm em}^{\rm co}\overline{\overline{J}}_{\rm t}\cdot\mathbf{E}_{\rm inc}.\end{equation} We see that the Tellegen sheet can be designed to work as a perfect absorber from one side and a twist polarizer in reflection from the other side. 
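This twist-polarizer action can be illustrated with a small numerical sketch (illustrative parameter values; the Tellegen amplitude is chosen here so that the reflection has unit magnitude), representing the dyadic $\overline{\overline{J}}_{\rm t}$ as a 90-degree rotation matrix acting on transverse field vectors:

```python
import numpy as np

# Illustrative parameters (assumed, not taken from the text)
omega = 2 * np.pi * 5e9          # angular frequency for a 5 GHz design
S = 0.01 ** 2                    # unit-cell area [m^2]

# J_t = z0 x I_t rotates a transverse (xy) vector by 90 degrees
J_t = np.array([[0.0, -1.0],
                [1.0,  0.0]])

# Assumed Tellegen amplitude; this choice gives |reflection| = 1
a_em_co = S / (2j * omega)

E_inc = np.array([1.0, 0.0], dtype=complex)     # x-polarized incidence

# Reflected field from the non-absorbing side (lower-sign branch)
E_r = -(2j * omega / S) * a_em_co * (J_t @ E_inc)

print(E_r)                   # y-polarized: reflection rotated by 90 degrees
print(np.vdot(E_inc, E_r))   # -> 0: no co-polarized reflection
```

The reflected field is entirely cross-polarized, which is exactly the twist-polarizer operation described above.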
Finally, the antisymmetric part of the nonreciprocal coupling coefficient allows control of the transmission coefficient from the opposite side (for nonreciprocal sheets the transmission coefficient is no longer necessarily symmetric): \begin{equation} \mathbf{E}_{\rm t}=\pm \frac{2j\omega}{S} \widehat{\alpha}_{\rm em}^{\rm cr}\overline{\overline{I}}_{\rm t}\cdot\mathbf{E}_{\rm inc}.\end{equation} This transmission coefficient equals unity if $\widehat{\alpha}_{\rm em}^{\rm cr}=\pm S/(2j\omega)$, in which case equation \r{mov_co_cr} shows that all the polarizabilities are in balance: \begin{equation} \eta_0\widehat{\alpha}_{\rm ee}^{\rm co}=\pm\widehat{\alpha}_{\rm em}^{\rm cr}=\frac{1}{\eta_0}\widehat{\alpha}_{\rm mm}^{\rm co}=\frac{S}{j2\omega}. \end{equation} Using (\ref{eq:l1}), it is easy to show that the polarizabilities of each individual particle should also be in balance and equal to \begin{equation} \displaystyle\eta_0\alpha_{\rm ee}^{\rm co}=\pm\alpha_{\rm em}^{\rm cr}=\frac{1}{\eta_0}\alpha_{\rm mm}^{\rm co}={\eta_0\over 2}\displaystyle\frac{1}{\displaystyle \frac{j\omega\eta_0}{S}+\beta_{\rm e}}. \end{equation} We see that the required electric and magnetic effective polarizabilities are one half of those in the simple case of isotropic dipole particles \r{both_sides}. However, the resulting amplitudes of the induced dipole moments and of the secondary plane waves are the same, because both moments are now generated by both incident fields. If this nonreciprocal array is excited from the absorbing side, these secondary plane waves cancel the incident wave behind the sheet and cancel each other in the reflection direction, just as for the simple isotropic array. For excitation from the opposite side, however, the induced dipole moments are zero, because the contributions due to the applied electric and magnetic fields cancel out, and the sheet is transparent.
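A minimal numerical check of this isolator behavior, using the balanced polarizability values above (upper sign) together with illustrative values of $\omega$ and $S$:

```python
import numpy as np

# Illustrative parameters (assumed, not taken from the text)
eta0 = 376.73                    # free-space wave impedance [ohm]
omega = 2 * np.pi * 5e9          # angular frequency for a 5 GHz design
S = 0.01 ** 2                    # unit-cell area [m^2]
z0 = np.array([0.0, 0.0, 1.0])

# Balanced effective polarizabilities of the 'moving' sheet (upper sign)
a_ee = S / (2j * omega * eta0)
a_mm = eta0 * S / (2j * omega)
a_em_cr = S / (2j * omega)
a_me_cr = -a_em_cr               # nonreciprocal ('moving') coupling

def induced_dipoles(k_dir):
    """Electric and magnetic dipole moments induced by a unit plane wave
    travelling along k_dir (x-polarized electric field)."""
    E = np.array([1.0, 0.0, 0.0], dtype=complex)
    H = np.cross(k_dir, E) / eta0
    p = a_ee * E + a_em_cr * np.cross(z0, H)
    m = a_me_cr * np.cross(z0, E) + a_mm * H
    return p, m

p_abs, m_abs = induced_dipoles(-z0)   # illumination of the absorbing side
p_tr, m_tr = induced_dipoles(+z0)     # illumination of the opposite side

print(np.linalg.norm(p_abs) > 0)      # dipoles excited: wave is absorbed
print(np.linalg.norm(p_tr), np.linalg.norm(m_tr))  # ~0: sheet transparent
```

The dipole moments induced for illumination of the absorbing side coincide with those of the simple isotropic absorber, while illumination from the opposite side induces no dipoles at all.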
We can conclude that this interesting structure has the property of an ultimately thin (a single layer of dipole particles) isolator: from one side it acts as a total absorber, while from the other side the sheet is transparent. Moreover, it appears that this is the only possible configuration having this property. \section{Uniaxial bi-anisotropic particles as components of totally absorbing arrays} Next, we discuss some possible designs of bi-anisotropic particles with the properties required for single-layer perfect absorbers. From the reciprocal classes, the most interesting and practically useful property is the omega coupling, since this effect gives flexibility in the requirements on the electric and magnetic polarizabilities and allows control over the reflection coefficient from the back side of the absorbing sheet (see \r{om1}--\r{omega_rotation}). \subsection{Wire omega particles} The classical topology of bi-anisotropic particles with omega coupling is an $\Omega$-shaped particle \cite{Saad,proposed,basic}. For a single uniaxial omega particle made of a conducting wire (see the picture in Table~\ref{ta:load_values}), in the approximation of electrically small particles, the polarizabilities are such that \cite{basic} \begin{equation} \displaystyle \alpha_{\rm ee}^{\rm co}\alpha_{\rm mm}^{\rm co}=-\alpha_{\rm em}^{\rm cr}\alpha_{\rm me}^{\rm cr}=-(\alpha_{\rm em}^{\rm cr})^2.
\label{eq:om4} \end{equation} \begin{table*}[!t] \centering \caption{Conditions for perfect absorption} \begin{tabular}{|p{50mm}|p{50mm}|p{50mm}|} \hline \rowcolor[gray]{.9} \multicolumn{3}{|c|}{{\bf Condition for total absorption}} \\ \hline \rowcolor[gray]{.9} Wire Omega & Omega---Tellegen & Chiral---Moving \\ \hline \vspace{0.5mm} \includegraphics[width=0.3\textwidth]{omega} \hspace*{1.4cm} $ \displaystyle \widehat{\alpha}_{\rm ee}^{\rm co}=-\frac{1}{\eta_0^2}\widehat{\alpha}_{\rm mm}^{\rm co}$ & \vspace{0.5mm} \includegraphics[width=0.3\textwidth]{tellegen} \hspace*{.1cm} $ \begin{array}{c} \displaystyle \eta_0\widehat{\alpha}_{\rm ee}^{\rm co}\pm 2\widehat{\alpha}_{\rm em}^{\rm cr}-\frac{1}{\eta_0} \widehat{\alpha}_{\rm mm}^{\rm co}=0 \vspace*{.2cm}\\\displaystyle \eta_0\widehat{\alpha}_{\rm ee}^{\rm cr}\mp 2\widehat{\alpha}_{\rm em}^{\rm co}-\frac{1}{\eta_0} \widehat{\alpha}_{\rm mm}^{\rm cr}=0 \vspace*{.2cm}\\\displaystyle \eta_0\widehat{\alpha}_{\rm ee}^{\rm co}+\frac{1}{\eta_0}\widehat{\alpha}_{\rm mm}^{\rm co}=\frac{2S}{j\omega} \vspace*{.2cm}\\\displaystyle \eta_0\widehat{\alpha}_{\rm ee}^{\rm cr}+\frac{1}{\eta_0} \widehat{\alpha}_{\rm mm}^{\rm cr}=0 \end{array}$ & \vspace{0.5mm} \includegraphics[width=0.3\textwidth]{moving} $ \begin{array}{c} \displaystyle \eta_0\widehat{\alpha}_{\rm ee}^{\rm co}-\frac{1}{\eta_0} \widehat{\alpha}_{\rm mm}^{\rm co}=0 \vspace*{.2cm}\\\displaystyle \eta_0\widehat{\alpha}_{\rm ee}^{\rm cr}-\frac{1}{\eta_0} \widehat{\alpha}_{\rm mm}^{\rm cr}=0 \vspace*{.2cm}\\\displaystyle \eta_0\widehat{\alpha}_{\rm ee}^{\rm co}\pm2\widehat{\alpha}_{\rm em}^{\rm cr}+\frac{1}{\eta_0}\widehat{\alpha}_{\rm mm}^{\rm co}=\frac{2S}{j\omega} \vspace*{.2cm}\\\displaystyle \eta_0\widehat{\alpha}_{\rm ee}^{\rm cr}\mp2\widehat{\alpha}_{\rm em}^{\rm co}+\frac{1}{\eta_0} \widehat{\alpha}_{\rm mm}^{\rm cr}=0\vspace*{.5cm} \end{array}$ \\ \hline \multicolumn{3}{|c|}{{\bf Reflected and transmitted fields from the other side of
a single-sided perfect absorber}} \\ \hline \rowcolor[gray]{.9} Omega & Tellegen & Moving \\ \hline \vspace*{.1cm}\hspace*{.6cm} $ \begin{array}{c} \displaystyle\hspace*{-.2cm} \mathbf{E}_{\rm r}=\pm \frac{2j\omega}{S} \widehat{\alpha}_{\rm em}^{\rm cr}\overline{\overline{I}}_{\rm t}\cdot\mathbf{E}_{\rm inc} \vspace*{.5cm}\\\displaystyle\hspace*{-.2cm} \mathbf{E}_{\rm t}=0\vspace*{.5cm} \end{array}$ & \vspace*{.1cm}\hspace*{.6cm} $ \begin{array}{c} \displaystyle\hspace*{-.2cm} \mathbf{E}_{\rm r}=\mp \frac{2j\omega}{S} \widehat{\alpha}_{\rm em}^{\rm co}\overline{\overline{J}}_{\rm t}\cdot\mathbf{E}_{\rm inc} \vspace*{.5cm}\\\displaystyle\hspace*{-.2cm} \mathbf{E}_{\rm t}=0\vspace*{.5cm} \end{array} $ & \vspace*{.1cm}\hspace*{.6cm} $ \begin{array}{c} \displaystyle\hspace*{-.2cm} \mathbf{E}_{\rm r}=0\vspace*{.5cm}\\\displaystyle \mathbf{E}_{\rm t}=\pm \frac{2j\omega}{S} \widehat{\alpha}_{\rm em}^{\rm cr}\overline{\overline{I}}_{\rm t}\cdot\mathbf{E}_{\rm inc}\vspace*{.5cm} \end{array} $ \\ \hline \end{tabular} \label{ta:load_values} \end{table*} This condition is a limitation on the electromagnetic properties of a wire omega particle. Using (\ref{eq:l1}), we find that the effective polarizabilities of omega particles forming a periodic array satisfy the same relation \begin{equation} \displaystyle \widehat{\alpha}_{\rm ee}^{\rm co}\widehat{\alpha}_{\rm mm}^{\rm co}=-(\widehat{\alpha}_{\rm em}^{\rm cr})^2. \label{eq:om5} \end{equation} Let us consider the limitation (\ref{eq:om5}) together with the first condition for total absorption in (\ref{eq:teem5}) (which is the condition for zero reflection from an array of omega particles). Combining these two equations, we get \begin{equation} \displaystyle \widehat{\alpha}_{\rm ee}^{\rm co}\pm \frac{2 j}{\eta_0}\sqrt{\widehat{\alpha}_{\rm ee}^{\rm co} \widehat{\alpha}_{\rm mm}^{\rm co}}-\frac{1}{\eta_0^2}\widehat{\alpha}_{\rm mm}^{\rm co}=0.
\label{eq:om6} \end{equation} From this simple quadratic equation one can obtain the relation \begin{equation} \displaystyle \widehat{\alpha}_{\rm ee}^{\rm co}=-\frac{1}{\eta_0^2}\widehat{\alpha}_{\rm mm}^{\rm co} \label{eq:om7} \end{equation} and, using (\ref{eq:l1}), we get the same relation between the polarizabilities of individual particles in free space $\left(\alpha_{\rm ee}^{\rm co}=-\frac{1}{\eta_0^2}\alpha_{\rm mm}^{\rm co}\right)$. This relation, however, cannot hold for passive omega particles, because the opposite signs of the imaginary parts of the electric and magnetic polarizabilities would mean that the particle is active. Moreover, it is impossible to satisfy the third condition from (\ref{eq:teem5}) when the limitation of (\ref{eq:om7}) is taken into account. Therefore, wire omega particles cannot be used for the design of perfect absorbers of this type. This is an interesting fact, because nearly-total absorption was earlier predicted in structures which behave like omega particles \cite{mohammad}. However, there is a significant difference between the case studied in \cite{mohammad} and wire omega particles. For a wire omega particle, all the polarizabilities have the same resonance frequency, but for the structure in \cite{mohammad} one can tune the structural parameters so that different polarizabilities have different resonance frequencies. Thus, it appears possible to break the limitation of (\ref{eq:om7}) using other kinds of omega particles and achieve total absorption with the help of the omega-coupling phenomenon. \subsection{Omega-Tellegen particles} Within the nonreciprocal classes, the most interesting properties are the possibilities offered by nonreciprocal field coupling phenomena in arrays of particles. Realization of such particles requires the inclusion of some nonreciprocal elements.
The known structures for the microwave frequency range \cite{basic} include magnetized ferrite spheres coupled to specially shaped metal elements; see the illustrations in Table~\ref{ta:load_values}. However, both of these structures also exhibit reciprocal field coupling effects in addition to the desired nonreciprocal effects. A single uniaxial Tellegen particle also shows some omega field coupling due to the asymmetric position of the metal strips with respect to the center of the ferrite sphere. For this reason, we call it an omega-Tellegen particle. Its polarizability dyadics have the form \begin{equation} \left\{\begin{array}{l} \displaystyle \={\widehat{\alpha}}_{\rm ee}=\widehat{\alpha}_{\rm ee}^{\rm co}\overline{\overline{I}}_{\rm t}+\widehat{\alpha}_{\rm ee}^{\rm cr}\overline{\overline{J}}_{\rm t} \vspace*{.2cm}\\\displaystyle \={\widehat{\alpha}}_{\rm mm}=\widehat{\alpha}_{\rm mm}^{\rm co}\overline{\overline{I}}_{\rm t}+\widehat{\alpha}_{\rm mm}^{\rm cr}\overline{\overline{J}}_{\rm t} \vspace*{.2cm}\\\displaystyle \={\widehat{\alpha}}_{\rm em}=\={\widehat{\alpha}}_{\rm me}=\widehat{\alpha}_{\rm em}^{\rm co}\overline{\overline{I}}_{\rm t}+\widehat{\alpha}_{\rm em}^{\rm cr}\overline{\overline{J}}_{\rm t}. \end{array}\right. \label{eq:te3} \end{equation} Using relations (\ref{eq:u1}) and (\ref{eq:te3}), we get the following conditions for total absorption in omega-Tellegen arrays \begin{equation} \begin{array}{c} \displaystyle \eta_0\widehat{\alpha}_{\rm ee}^{\rm co}\pm 2\widehat{\alpha}_{\rm em}^{\rm cr}-\frac{1}{\eta_0} \widehat{\alpha}_{\rm mm}^{\rm co}=0 \vspace*{.2cm}\\\displaystyle \eta_0\widehat{\alpha}_{\rm ee}^{\rm cr}\mp 2\widehat{\alpha}_{\rm em}^{\rm co}-\frac{1}{\eta_0} \widehat{\alpha}_{\rm mm}^{\rm cr}=0 \vspace*{.2cm}\\\displaystyle \eta_0\widehat{\alpha}_{\rm ee}^{\rm co}+\frac{1}{\eta_0}\widehat{\alpha}_{\rm mm}^{\rm co}=\frac{2S}{j\omega} \vspace*{.2cm}\\\displaystyle \eta_0\widehat{\alpha}_{\rm ee}^{\rm cr}+\frac{1}{\eta_0} \widehat{\alpha}_{\rm mm}^{\rm cr}=0.
\end{array}\label{eq:te4} \end{equation} This shows that if we want to use the advantages offered by Tellegen coupling, we need to design the particle so that the omega coupling coefficient $\widehat{\alpha}_{\rm em}^{\rm cr}$ is properly balanced with the electric and magnetic polarizabilities. \subsection{Chiral-moving particles} Likewise, the known artificial moving particle \cite{basic,mov1,mov2} (see the picture in Table~\ref{ta:load_values}) exhibits reciprocal magnetoelectric coupling because of its chiral shape. The properties of such a particle can be modeled by polarizability dyadics of the form \begin{equation} \left\{\begin{array}{l} \displaystyle \={\widehat{\alpha}}_{\rm ee}=\widehat{\alpha}_{\rm ee}^{\rm co}\overline{\overline{I}}_{\rm t}+\widehat{\alpha}_{\rm ee}^{\rm cr}\overline{\overline{J}}_{\rm t} \vspace*{.2cm}\\\displaystyle \={\widehat{\alpha}}_{\rm mm}=\widehat{\alpha}_{\rm mm}^{\rm co}\overline{\overline{I}}_{\rm t}+\widehat{\alpha}_{\rm mm}^{\rm cr}\overline{\overline{J}}_{\rm t} \vspace*{.2cm}\\\displaystyle \={\widehat{\alpha}}_{\rm em}=-\={\widehat{\alpha}}_{\rm me}=\widehat{\alpha}_{\rm em}^{\rm co}\overline{\overline{I}}_{\rm t}+\widehat{\alpha}_{\rm em}^{\rm cr}\overline{\overline{J}}_{\rm t}. \end{array}\right.
\label{eq:mo3} \end{equation} Using relations (\ref{eq:u1}) and (\ref{eq:mo3}), the conditions for total absorption in the chiral-moving slab read \begin{equation} \begin{array}{c} \displaystyle \eta_0\widehat{\alpha}_{\rm ee}^{\rm co}-\frac{1}{\eta_0} \widehat{\alpha}_{\rm mm}^{\rm co}=0 \vspace*{.2cm}\\\displaystyle \eta_0\widehat{\alpha}_{\rm ee}^{\rm cr}-\frac{1}{\eta_0} \widehat{\alpha}_{\rm mm}^{\rm cr}=0 \vspace*{.2cm}\\\displaystyle \eta_0\widehat{\alpha}_{\rm ee}^{\rm co}\pm2\widehat{\alpha}_{\rm em}^{\rm cr}+\frac{1}{\eta_0}\widehat{\alpha}_{\rm mm}^{\rm co}=\frac{2S}{j\omega} \vspace*{.2cm}\\\displaystyle \eta_0\widehat{\alpha}_{\rm ee}^{\rm cr}\mp2\widehat{\alpha}_{\rm em}^{\rm co}+\frac{1}{\eta_0} \widehat{\alpha}_{\rm mm}^{\rm cr}=0. \end{array}\label{eq:mo4} \end{equation} In this case, the chirality parameter $\widehat{\alpha}_{\rm em}^{\rm co}$ should be balanced with the anti-symmetric parts of the electric and magnetic polarizabilities. Implementation of total-absorption arrays using omega-Tellegen or chiral-moving particles presents significant difficulties. To the best of our knowledge, there are no analytical models to calculate the individual polarizabilities of these particles. The relations between the effective and individual polarizabilities of these particles also become more involved. Finally, the design presupposes the use of ferrites and bias magnetic fields, which presents practical difficulties. On the other hand, these topologies offer unique properties, such as a thin sheet operating as an isolator, and they clearly deserve further study. \section{Conclusions} We have considered possible approaches for the realization of perfect absorbers using ultimately thin (single layers of particles) structures. The thickness cannot be strictly zero (in the electromagnetic sense) because we must allow a magnetic response in the layer.
We have demonstrated that to realize total absorption from both sides of the sheet, we only need to realize balanced electric and magnetic polarizabilities, while all magnetoelectric polarizabilities should be zero. Further, we have considered single-layer sheets which operate as perfect absorbers only when illuminated from one of the two sides of the sheet and studied what functionalities can be engineered for illumination from the opposite side. We have shown that introducing omega coupling in the constituent particles makes it possible to realize a layer which acts as a perfect absorber from one side with controllable co-polarized reflection from the opposite side of the sheet. For reciprocal structures, it has been shown that tuning the layer to act as a perfect absorber from one side does not allow any chirality in the layer. We have seen that allowing nonreciprocity in the properties of the absorbing particles opens up further functionalities. A Tellegen sheet can be designed to work as a perfect absorber from one side and a twist polarizer in reflection from the other side. The antisymmetric part of the nonreciprocal coupling coefficient (i.e., a layer of particles with the constitutive parameters of an artificial moving medium) makes it possible to achieve total absorption from one side but a controlled transmission coefficient from the opposite side. In particular, a regime is possible in which the layer acts as a perfect absorber from one side, while from the other side the sheet is transparent. This corresponds to an ultimately thin isolator. Finally, we have studied some particular examples of possible realizations of single-layer perfect absorbers with the use of omega, omega-Tellegen, and chiral-moving particles as canonical examples of uniaxial bi-anisotropic particles. The most interesting properties are offered by nonreciprocal bi-anisotropic particles.
They can be realized in practice as near-field coupled magnetized ferrite inclusions and metal strips or wires, as was proposed in \cite{classes} (see the pictures in Table~\ref{ta:load_values}). We are currently developing analytical models of omega-Tellegen and chiral-moving particles, which are expected to enable the design and optimization of inclusions for the proposed nonreciprocal absorbing layers.
\section{Introduction} \label{sec:intro} The availability of asteroseismic data has changed the way we determine the properties of stars. These data allow us to determine the radii, masses and ages of stars to an unprecedented degree of precision \citep[see e.g.,][]{chaplin2014, serenelli2017, victor2015, victor2017, pin2014, pin2018}. While reasonable results are obtained when only average seismic data --- the large separation $\Delta\nu$\ and the frequency of maximum power $\nu_{\rm max}$\ --- are available, the best results are obtained when data on individual frequencies are available. As in most of astrophysics, analysis of asteroseismic data to determine stellar properties is done by comparing the properties of the models to the observed properties of the star under consideration. In this case we compare the frequencies of the models that also have the same effective temperature and metallicity as the star. There is, however, a complication: the so-called ``surface term''. The surface term is a frequency-dependent frequency difference between stellar models and stars that arises from our inability to model the near-surface layers of a star properly \citep{brown1984, jcdetal1988}. The surface term can also be seen between models constructed with different surface physics \citep[e.g.][and references therein]{basu2016}. For modes of degree $l \lesssim 200$, the surface term is independent of the degree of the mode \citep{basu1996}. Asteroseismic analyses have to take the surface term into account somehow. Analyses done only with average asteroseismic properties $\Delta\nu$\ and $\nu_{\rm max}$\ generally ignore the surface term \citep[e.g., the analyses of][]{chaplin2014, pin2014}, though recent analyses assume that the surface term contribution to $\Delta\nu$\ scales as the surface term for the Sun \citep{serenelli2017}.
For analyses where individual modes were used \citep[e.g.][]{metcalfe2010, metcalfe2012, victor2015, victor2017}, a variety of different ways have been used to remove the effects of the surface term. The aim of this paper is to examine whether the global properties of a star --- mass, radius, and age --- are affected by how the surface term is handled. For this we analyze solar frequencies degraded to stellar levels, the frequencies of 16Cyg~A and B, as well as those of KIC~6106415 (HD~177153), KIC~6225718 (HD~187637), and KIC~8006161 (HIP~91949) using a variety of different methods. We should note that our investigation is limited to studying the effects of the surface term on the result. It is known that the results, in particular those for age, can change depending on the physics of the models \citep[see e.g.,][]{lebreton2014b}; mass and radius results are much more robust to changes in physics \citep{ning, victor2015, victor2017}. It is difficult to gauge the effect of the surface term on results from previous analyses such as those of \citet{metcalfe2010}, \citet{victor2015}, \citet{victor2017} etc., since the use of different surface terms was usually combined with the use of different physics inputs in the models. The rest of the paper is organized as follows: In Section~\ref{sec:surf} we discuss the surface term in more detail and describe some of the common ways of accounting for it while making asteroseismic estimates of stellar properties. In Section~\ref{sec:method} we describe the analysis techniques in detail and list the different ways we determined the global stellar parameters. We describe the stars and their data in Section~\ref{sec:stars}. We describe how we constructed models and the parameter ranges used in Section~\ref{sec:models}. We present our results in Section~\ref{sec:res}. Our conclusions are stated in Section~\ref{sec:conclu}. \section{The surface term: sources and mitigation}
\label{sec:surf} The surface term plagues helioseismic analyses too. We show the surface term between a standard solar model (SSM) and the Sun in Fig.~\ref{fig:ssmsurf}. The surface term arises from a number of factors, but chief among them is the treatment of convection in models. Usual one-dimensional stellar models treat convection in a very simplified manner using approximations like the mixing-length theory (MLT; \citealt{bohmvitense1958}) or variants thereof (e.g., \citealt{cm1991}, \citealt{arnett2010}). While these approximations work well in the deeper parts of the convection zone where convection is efficient, they do not model the near-surface superadiabatic layer very well. Also missing in these models are dynamical effects of convection, in particular pressure support provided by turbulence --- turbulent pressure can be as high as 15\% of the gas pressure in the superadiabatic regions (\citealt{stein1998}, \citealt{nord1997}, \citealt{tanneretal2013}). Other limitations of near-surface modelling include the use of very simple models of stellar atmospheres, as well as uncertainties in microphysics inputs such as low-temperature opacities. Another contribution to the surface term comes from the fact that the adiabatic approximation for treating stellar oscillation frequencies breaks down near the surface; frequencies are usually calculated assuming that the linear adiabatic wave equations apply. While there are codes (e.g., Gyre; \citealt{gyre}) that can determine non-adiabatic frequencies, they do not usually include the effect of convective heating and cooling on oscillation modes. All factors mentioned above affect very shallow layers of stars, and it can be shown from the theory of oscillations that perturbations to near-surface layers cause a frequency-dependent frequency change that is a smooth function of frequency and can be represented as a low-degree polynomial (see Fig. 3.17 of \citealt{mybook}).
\begin{figure} \includegraphics[width=2.75 true in]{fig1.eps} \caption{The frequency differences between the Sun and standard solar model BS05(OP) of \citet{jnb2005}. The solar frequencies are those from the Birmingham Solar Oscillation Network (BiSON) published by \citet{wjc2007}. The vertical gray lines mark the frequency range of modes that could be obtained for the Sun by a mission like {\it Kepler}.} \label{fig:ssmsurf} \end{figure} There have been several concerted efforts to improve the near-surface structure of models in order to decrease the surface term. \citet{rosenthal1995} showed that solar models constructed with the non-local mixing-length formulation of \citet{dog1977}, as formulated by \citet{balmforth}, reduce the surface term with respect to the Sun. \citet{demarque1997} showed that parametrizing the structure of the super-adiabatic layer in simulations and applying that to solar models improves frequencies. Other efforts include patching the results of realistic convection simulations on stellar models \citep[e.g.][]{rosenthal, piau2014, joergensen2018}. \citet{houdek2017} did a detailed study of the physics ingredients that contribute to the surface term, and they confirmed that there are multiple sources, including effects from convection dynamics; they show that the surface term can indeed be made negligibly small. Their work, however, was done with solar envelope models and therefore does not easily translate to models along an evolutionary sequence. As a step in this direction, \citet{mosumgaard2018} have been examining models constructed with near-surface properties obtained by interpolating within grids of convection simulations. However, these models still show a surface term, probably because \citet{mosumgaard2018} neglected turbulent pressure.
However, it is not enough to include turbulent pressure in the models; \citet{sonoi} showed that to achieve the best results, perturbations to turbulent pressure must be taken into account properly when frequencies of such models are calculated. Thus, even as there are ongoing efforts to improve the near-surface properties of stellar models, the vast majority of stellar models used have a substantial surface term that needs to be accounted for properly. For helioseismic studies, where many hundreds of modes are available, the problem of the surface term is handled by relying on frequency inversions rather than frequency comparisons (see \citealt{basu2016} and references therein). The inversion process explicitly accounts for the surface term \citep[e.g.,][]{dziem1990, dziem1991, dappen1991, ab94} and thus does not depend on correcting the frequencies. However, the dearth of asteroseismic data (tens of frequencies rather than hundreds), along with a lack of independent constraints on mass, radius and age, makes inversions impractical for most stars. As a result, other strategies are required; these involve either subtracting out the surface term, making it less effective, or using a different observable related to the frequencies. The most common way of removing the effect of the surface term is to assume that it has a known functional form and subtract it out from the frequency differences between a star and its model. Most early analyses used the surface term correction proposed by \citet{hans}. That form assumes that the frequency difference caused by near-surface effects can be expressed as a power law in frequency, with the exponent determined from a solar surface term. While this form is easy to use, it depends on the surface term between the Sun and a solar model, and that depends on the physics of the solar model.
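As a sketch of how such a power-law correction can be applied in practice (the exponent $b \approx 4.9$ is a commonly quoted solar calibration and is simply assumed here; for brevity the power law is evaluated at the model frequencies):

```python
import numpy as np

def power_law_correction(nu_obs, nu_model, nu_ref, b=4.9):
    """One-parameter surface correction of the power-law type:
    delta_nu = a * (nu / nu_ref)**b.  The exponent b is held fixed
    (b ~ 4.9 is a commonly quoted solar calibration -- an assumption
    here); the amplitude a follows from a linear least-squares fit."""
    x = (nu_model / nu_ref) ** b
    resid = nu_obs - nu_model
    a = np.sum(x * resid) / np.sum(x * x)
    return nu_model + a * x, a

# Synthetic example: 'observed' frequencies offset by a known power law
nu_ref = 3090.0                              # reference frequency [muHz]
nu_model = np.linspace(2000.0, 4000.0, 15)   # toy radial-mode frequencies
nu_obs = nu_model - 5.0 * (nu_model / nu_ref) ** 4.9

nu_corr, a = power_law_correction(nu_obs, nu_model, nu_ref)
print(a)                                     # recovers the injected amplitude
print(np.max(np.abs(nu_corr - nu_obs)))      # residuals ~ 0
```

The fitted amplitude recovers the injected surface-term amplitude exactly, which is expected here since the synthetic offset follows the assumed functional form.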
Additionally, it can be shown (\citealt{joey}) that this form does not work very well for frequencies far away from $\nu_{\rm max}$, nor does it work too well for models of stars more evolved than the Sun. As a result, it is increasingly common to use the form proposed by \citet{ball2014}, based on an idea of \citet{dog1990}. They proposed that the surface term could be fit to the form \begin{equation} \delta\nu_{nl} =\nu_{nl}^{\rm obs}-\nu_{nl}^{\rm model}=\frac{1}{I_{nl}}\left[ a\left(\frac{\nu_{nl}}{\nu_{\rm ac}}\right)^{-1} +b\left(\frac{\nu_{nl}}{\nu_{\rm ac}}\right)^3\right], \label{eq:bl} \end{equation} where $\delta\nu_{nl}$ is the difference in frequency for a mode of degree $l$ and order $n$ between a star and its model, $\nu_{nl}$ is the frequency and $I_{nl}$ is the inertia of the mode, and $\nu_{\rm ac}$ is the acoustic cut-off frequency. The coefficients $a$ and $b$ can be determined through a generalized linear least-squares fit \citep[see][]{rachel2017}. The biggest advantage of this form is that one does not have to rely on a solar model to fix the parameters. The explicit dependence on mode inertia also implies that no special treatment is needed to use the correction on non-radial modes. This correction works quite well in general, and we show this in the left column of Fig.~\ref{fig:sunech}, where we have corrected the frequencies of the standard solar model shown in Fig.~\ref{fig:ssmsurf}; for this purpose we used a set of solar frequencies degraded to match what is obtained for other stars with {\it Kepler}. Both corrected and uncorrected frequencies are shown for comparison.
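Since the correction of \citet{ball2014} is linear in the coefficients $a$ and $b$, the fit can be sketched as a simple least-squares problem (synthetic frequencies and toy mode inertias; the uncertainty weighting used in real analyses is omitted):

```python
import numpy as np

def ball_gizon_fit(nu_obs, nu_model, inertia, nu_ac):
    """Fit the two-term correction delta_nu = [a*(nu/nu_ac)**-1
    + b*(nu/nu_ac)**3] / I_nl by linear least squares.  Uncertainty
    weighting, used in real analyses, is omitted for brevity."""
    x = nu_model / nu_ac
    A = np.column_stack([x ** -1 / inertia, x ** 3 / inertia])
    coeffs, *_ = np.linalg.lstsq(A, nu_obs - nu_model, rcond=None)
    return nu_model + A @ coeffs, coeffs

# Synthetic example with a known injected surface term
nu_ac = 5000.0                               # assumed acoustic cut-off [muHz]
nu_model = np.linspace(2000.0, 4000.0, 20)   # toy model frequencies
inertia = 1e-9 * (nu_model / 3000.0) ** -2   # toy mode inertias
a_true, b_true = 2e-9, -8e-9
nu_obs = nu_model + (a_true * (nu_model / nu_ac) ** -1
                     + b_true * (nu_model / nu_ac) ** 3) / inertia

nu_corr, (a, b) = ball_gizon_fit(nu_obs, nu_model, inertia, nu_ac)
print(a, b)                                  # recovers a_true and b_true
print(np.max(np.abs(nu_corr - nu_obs)))      # corrected model matches 'data'
```

All parameter values here are illustrative; a real analysis would use observed mode frequencies, model frequencies and inertias from an oscillation code, and propagate the observational uncertainties through the fit.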
\begin{figure*} \centerline{ \includegraphics[width=3.00 true in]{fig2a.eps} \includegraphics[width=3.00 true in]{fig2b.eps} } \caption{Echelle diagrams showing solar frequencies (black/gray) as well as the uncorrected (blue) and corrected (red) frequencies of the SSM BS05(OP) of \citet{jnb2005} (left), and the best-fit model obtained using the correction in Eq.~\ref{eq:bl} (right). The solar frequencies used are BiSON data degraded to stellar levels by \citet{lund2017} and used by \citet{victor2017}. } \label{fig:sunech} \end{figure*} The problem with Eq.~\ref{eq:bl} is that it can over-correct the frequencies and remove differences that are caused by differences in the structure of deeper layers. We show this in the right column of Fig.~\ref{fig:sunech}, where the corrected and uncorrected frequencies of a model with $M/M_\odot=1.0142$, $R/R_\odot=1.01$, $T_{\rm eff}=5779$ K and an age of 4.87~Gyr are plotted; we refer to this model as the ``best-fit'' model. The corrected frequencies appear to fit the solar data better than those of the standard solar model; however, the internal structure of the best-fit model has a much larger mismatch with the Sun than the SSM. This is shown in Fig.~\ref{fig:csqdif}. Thus clearly, the surface-term correction is giving us misleading results. \begin{figure} \includegraphics[width=2.75 true in]{fig3.eps} \caption{The sound-speed difference between the Sun and the SSM BS05(OP) (red) and the best-fit model (blue). The solar results are from \citet{basu2009}. The error bars show $3\sigma$ uncertainty. } \label{fig:csqdif} \end{figure} As mentioned earlier, it is not usually possible to invert stellar frequencies to determine the difference in structure between a star and its model (see \citealt{earl2017} for recent attempts, successes and limitations), and we therefore need other means to judge the mismatch between the internal structures of stars and their models.
One such way is to look at specific frequency combinations, usually called separation ratios. The usual ones are \begin{equation} r_{01}(n)=\frac{d\nu_{01}(n)}{\nu_{n,1}-\nu_{n-1,1}},\quad\quad r_{10}(n)=\frac{d\nu_{10}(n)}{\nu_{n+1,0}-\nu_{n,0}}, \label{eq:r01} \end{equation} where \begin{eqnarray} d\nu_{01}(n)&=&\nu_{n,0}-\frac{1}{2}\left( \nu_{n-1,1}+\nu_{n,1} \right),\nonumber\\ d\nu_{10}(n)&=&\frac{1}{2}\left( \nu_{n,0}+\nu_{n+1,0} \right)-\nu_{n,1}. \label{eq:d01} \end{eqnarray} When plotted against frequency, $d\nu_{01}$ and $d\nu_{10}$ are not usually smooth, and thus sometimes higher-order differences are used to define the separations (e.g., \citealt{iwr2009}, \citealt{victor2011}, \citealt{victor2015}). Another combination used extensively is \begin{equation} r_{02}=\frac{\nu_{n+1,0}-\nu_{n,2}}{\nu_{n,1}-\nu_{n-1,1}}. \label{eq:r02} \end{equation} The ratio $r_{02}$ is sensitive to the central hydrogen concentration, and hence age, of a star on the main sequence \citep{jcd1988}. The separation ratios in Eqs.~\ref{eq:r01} and \ref{eq:r02} are quite insensitive to the surface term \citep[see][]{iwr2004, iwr2005, oti2005}. In Fig.~\ref{fig:ratio} we compare the frequency ratios of the SSM and best-fit models with those of the Sun; as can be seen clearly, the best-fit model does not really fit the ratios, and in particular the mismatch is very large for $r_{02}$. This is of course expected, since $r_{02}$ is sensitive to the age of a star and the best-fit model has the wrong age. Thus, had we used the ratios to determine a best-fit model, we would not have chosen this one. \begin{figure} \includegraphics[width=3.00 true in]{fig4.eps} \caption{The separation ratios $r_{01}$, $r_{10}$ (panel a) and $r_{02}$ (panel b) of the Sun (points with error bars), the SSM (red crosses) and the best-fit model (blue triangles). The ratios for the models have been connected with a line to guide the eye.
} \label{fig:ratio} \end{figure} The insensitivity of the separation ratios to the surface term has led to the creation of asteroseismic analysis pipelines that depend only on the separation ratios and not the individual frequencies (see e.g., BASTA, \citealt{victor2015}), though the precision of results obtained with frequency ratios appears to be lower than that obtained with frequencies explicitly corrected for the surface term \citep[see e.g.,][]{victor2017}. One complicating factor in using the frequency ratios is error-correlations. Errors for $r_{01}(n)$ and $r_{10}(n)$ are highly correlated, as are errors between neighboring ratios of a given type, e.g., between $r_{01}(n)$ and $r_{01}(n+1)$. Definitions of $r_{01}$ and $r_{10}$ that involve higher-order differences are correlated over many more data points. The errors of neighboring $r_{02}$ are also correlated. These correlations follow from the definition of the ratios. Another quantity that can help in determining whether a model matches a star is the phase factor $\epsilon$\ defined as: \begin{equation} \nu=\langle\Delta\nu\rangle\left( n+\frac{l}{2}+\epsilon\right), \label{eq:eps} \end{equation} where $\langle\Delta\nu\rangle$ is the average large separation. It can be shown that if the internal structure of a model matches that of a star, the difference in $\epsilon$\ between them is independent of degree and is a function of frequency alone \citep{iwr2015}. Fig.~\ref{fig:epsdif} shows the differences in $\epsilon$\ between the Sun and the best-fit model; the differences are clearly degree dependent. \citet{iwr2015} demonstrated that $\epsilon$\ differences can be used to determine the characteristics of a star. \begin{figure} \includegraphics[width=3.25 true in]{fig5.eps} \caption{The difference in $\epsilon$\ between the Sun and the best-fit model. Note that at any given frequency, the differences are degree dependent.
} \label{fig:epsdif} \end{figure} Thus clearly the surface term correction can end up giving us misleading results about a star. It is worth noting, however, that while the best-fit model differs from the Sun in internal structure, the global parameters of the model are not wildly different. The mass of the best-fit model differs from that of the Sun by only 1.4\%, the radius by 1\%, and the difference in $T_{\rm eff}$\ is negligible. The only major difference is in age, but even that is only a 6.5\% difference, much better than what is obtained by non-asteroseismic studies, even of star clusters. Thus the question is whether this means that, despite the limitations of the surface term, we can determine robust global properties of stars using their oscillation frequencies. This is the question we try to answer in this paper. The global properties of stars are what the wider astronomy community uses for its work, and hence it is important to examine the sensitivity of the results to the ubiquitous surface term. We use all three techniques --- explicit correction of the surface term using Eq.~\ref{eq:bl}, the use of frequency ratios, and the use of $\epsilon$\ differences --- to examine whether the global properties of the stars are indeed determined robustly despite the problem of the surface term. We start with the Sun and both components of the 16~Cyg system. These are the three stars with the most precisely determined oscillation frequencies. In order to be sure that the quality of the data of these stars is not misleading us, we also analyze three other stars, KIC~6106415 (HD~177153), KIC~6225718 (HD~187637), and KIC~8006161 (HIP~91949). \section{The analysis techniques} \label{sec:method} We estimate the global properties of the stars we are studying by statistical analyses of a large ensemble of models. We determine the likelihood of each model, and the final parameters are the likelihood-weighted average of the underlying properties of the models.
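Schematically, the likelihood-weighted estimate of any model property (mass, radius, age, etc.) can be computed as follows; this is a minimal sketch, and the shift by the minimum $\chi^2$ is a standard numerical-stability device of our own, which only rescales the normalization constant:

```python
import numpy as np

def likelihood_weighted_estimate(values, chi2):
    """Return the likelihood-weighted mean and standard deviation of a
    model property, with L = A exp(-chi^2/2) normalized to unit sum."""
    values = np.asarray(values, dtype=float)
    chi2 = np.asarray(chi2, dtype=float)
    like = np.exp(-0.5 * (chi2 - chi2.min()))  # shift only rescales A
    like /= like.sum()
    mean = np.sum(like * values)
    return mean, np.sqrt(np.sum(like * (values - mean) ** 2))
```

Models with a poor fit contribute negligibly to the average, so the estimate is dominated by the best-fitting part of the ensemble.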
What differs in each case is the quantity used to define the likelihood. Since we do not have evidence to the contrary, all inputs are assumed to have a Gaussian distribution of errors, which allows us to define a $\chi^2$ value for each input. We start with the method of $\epsilon$\ differences. For each model we calculate the difference in $\epsilon$\ between the star and the model, with the differences being calculated at the observed frequencies, i.e., \begin{equation} \Delta\epsilon_{nl}=\epsilon^{\rm obs}(\nu^{\rm obs}_{nl})-\epsilon^{\rm model}(\nu^{\rm obs}_{nl}), \label{eq:epdif} \end{equation} where $\epsilon^{\rm model}(\nu^{\rm obs}_{nl})$ is $\epsilon$\ for the model at degree $l$ interpolated to the observed frequency $\nu^{\rm obs}_{nl}$ of the same degree $l$. To determine whether or not the $\epsilon$\ differences are a function of frequency alone, we define an arbitrary function of frequency ${\mathcal F}=\sum_i^M a_i\phi_i(\nu)$, where the $\phi_i(\nu)$ are basis functions in frequency. We then minimize \begin{equation} \chi^2(\epsilon)= \frac{1}{N-M}\sum_{nl}\left(\frac{\Delta\epsilon_{nl}-{\mathcal F}(\nu^{\rm obs}_{nl})}{S^{\rm obs}_{nl}} \right)^2, \label{eq:chiep} \end{equation} where $N$ is the total number of modes, $M$ the number of basis functions (chosen to be fewer than the number of radial modes as per the suggestion of \citealt{iwr2015}), and $S^{\rm obs}_{nl}=\sigma^{\rm obs}_{nl}/\Delta\nu^{\rm obs}$, with $\sigma^{\rm obs}_{nl}$ being the uncertainty on $\nu^{\rm obs}_{nl}$. We use B-spline basis functions. A small value of $\chi^2(\epsilon)$ indicates that the $\epsilon$\ differences are a function of frequency alone. We then define the likelihood of any given model as \begin{equation} {\mathcal L}(\epsilon)=A\exp\left(-\frac{\chi^2(\epsilon)}{2}\right), \label{eq:epslike} \end{equation} $A$ being a normalization factor such that $\sum{\mathcal L}(\epsilon)=1$ with the sum taken over all models. Next, we consider frequency ratios.
To avoid large error-correlations we only use the ratios $r_{01}$ and $r_{02}$ as defined in Eqs.~\ref{eq:r01} and \ref{eq:r02}. We do not use $r_{10}$, and we neglect the error correlations between $r_{01}$ and $r_{02}$. We then define the $\chi^2$ for $r_{01}$ as \begin{equation} \chi^2(r_{01})=({\overbar{r_{01}}}^{({\rm obs})}-{\overbar{r_{01}}}^{({\rm model})})^T{\mathbf C}^{-1} ({\overbar{r_{01}}}^{({\rm obs})}-{\overbar{r_{01}}}^{({\rm model})}), \label{eq:xhir01} \end{equation} where ${\overbar{r_{01}}}^{({\rm obs})}$ is the vector of observed $r_{01}$, ${\overbar{r_{01}}}^{({\rm model})}$ is the vector of $r_{01}$ for the model at the observed frequencies, and ${\mathbf C}$ is the error-covariance matrix. Thus ${\mathcal L}(r_{01})=B\exp(-\chi^2(r_{01})/2)$. We define ${\mathcal L}(r_{02})$ in an analogous manner, and thus for the two ratios taken together \begin{equation} {\mathcal L}({\rm rat})={\mathcal L}(r_{01}){\mathcal L}(r_{02}). \label{eq:ratlike} \end{equation} The third way of analyzing the models is to use the surface-term corrected frequencies of the models. We define $\nu^{\rm corr}_{nl}=\nu_{nl}^{\rm model}-S$, where $S$ is defined by the right-hand side of Eq.~\ref{eq:bl}. We can then define \begin{equation} \chi^2(\nu)=\frac{1}{N-2}\sum_{nl}\frac{(\nu_{nl}^{\rm obs}-\nu_{nl}^{\rm corr})^2}{(\sigma^{\rm obs}_{nl})^2}, \label{eq:eqchinu} \end{equation} where $N$ is the number of modes used and the 2 accounts for the two fitted coefficients $a$ and $b$, and consequently \begin{equation} {\mathcal L}(\nu)=C\exp\left(-\frac{\chi^2(\nu)}{2}\right), \label{eq:nulike} \end{equation} $C$ being the normalization constant. The surface term is expected to be smaller at low frequencies and larger at high frequencies; however, for models that do not fit the data very well, the frequency differences can show the opposite behavior.
We weigh against such models by defining a weight ${\mathcal W}_{\rm low}$, where \begin{equation} {\mathcal W}_{\rm low}=\exp\left(-\frac{\chi^2_{\rm low}}{2}\right), \label{eq:wlow} \end{equation} and $\chi^2_{\rm low}$ is the $\chi^2$ for the (uncorrected) frequency differences of the two lowest-frequency radial modes. It is usual to consider the observed $T_{\rm eff}$\ and metallicity in the analyses, and we have analyzed the models with and without constraints on $T_{\rm eff}$\ and metallicity. The likelihood for $T_{\rm eff}$\ is defined as \begin{equation} {\mathcal L}(T)=D\exp(-\chi^2(T)/2), \label{eq:tcal} \end{equation} with \begin{equation} \chi^2(T)=\frac{(T^{\rm obs}_{\rm eff}-T^{\rm model}_{\rm eff})^2}{\sigma^2_{ T}}, \label{eq:chit} \end{equation} where $\sigma_{T}$ is the uncertainty on the effective temperature. We consider two cases, $\sigma_{T}=100$~K and $\sigma_{T}=75$~K. We similarly define a likelihood ${\mathcal L}({\mathrm{[Fe/H]}})$, and again we consider two different uncertainties, $\sigma_{\mathrm{[Fe/H]}}=0.1$ dex and $\sigma_{\mathrm{[Fe/H]}}=0.05$ dex. For some cases we also apply a weight for age. The equations of stellar structure and evolution do not know anything about the age of the universe, and thus in principle a model could have an age greater than that of the universe. While a model older than the age of the universe is unphysical, uncertainties in mass, metallicity and effective temperature could easily result in such a model. Instead of removing all models above an age of 13.8 Gyr (which would result in a sharp cut-off in the probability density function of age) we use a weight for age $\tau$ defined as \begin{equation} {\mathcal W}_{\rm age}= \begin{cases} 1, & \hbox{if}\;\; \tau \le 13.8\;\hbox{Gyr}\\ \exp\left[-\frac{(13.8-\tau)^2}{2\sigma^2_\tau}\right] & \hbox{otherwise},\\ \end{cases} \label{eq:wage} \end{equation} where $\tau$ is in units of gigayears, and $\sigma_{\tau}$ is chosen to be 0.1 Gyr.
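For concreteness, the age weight of Eq.~\ref{eq:wage} translates directly into code; a minimal sketch (the function name is ours):

```python
import math

def age_weight(tau, tau_u=13.8, sigma_tau=0.1):
    """W_age of Eq. (wage): unity up to the age of the universe
    tau_u (in Gyr), then a Gaussian roll-off of width sigma_tau."""
    if tau <= tau_u:
        return 1.0
    return math.exp(-(tau - tau_u) ** 2 / (2.0 * sigma_tau ** 2))
```

The taper avoids a sharp cut-off in the probability density function of age while still strongly penalizing unphysically old models.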
In Table~\ref{tab:methods} we list all the different parameter combinations that we have used to determine the global properties of the stars in question. Method~1 means that the likelihood of a model is calculated with $\epsilon$\ alone, i.e., ${\mathcal L}({\rm total})={\mathcal L}(\epsilon)$; in Method~2 it is calculated as ${\mathcal L}({\rm total})={\mathcal W}_{\rm age}{\mathcal L}(\epsilon)$; Method~3 corresponds to ${\mathcal L}({\rm total})={\mathcal L}(\epsilon){\mathcal L}(T){\mathcal L}(\hbox{[Fe/H]})$; etc. \begin{deluxetable*}{cll} \tablecolumns{3} \tablecaption{The different combinations of observables used in the analysis} \tablehead{\colhead{Method No.}& \colhead{Observables used} &\colhead{Notes}\label{tab:methods}} \startdata \phantom{1}1 & $\epsilon$ &\\ \phantom{1}2 & $\epsilon$ & additional weights for age using ${\mathcal W}_{\rm age}$\\ \phantom{1}3 & $\epsilon$, $T_{\rm eff}$, $\mathrm{[Fe/H]}$ & using $\sigma_{ T}=100$K and $\sigma_{\mathrm{[Fe/H]}}=0.1$ dex\\ \phantom{1}4 & $\epsilon$, $T_{\rm eff}$, $\mathrm{[Fe/H]}$ & using $\sigma_{ T}=75$K and $\sigma_{\mathrm{[Fe/H]}}=0.05$ dex\\ \phantom{1}5 & $\epsilon$, $T_{\rm eff}$, $\mathrm{[Fe/H]}$ & as in Method 4, but weighted for age using ${\mathcal W}_{\rm age}$\\ \phantom{1}6 & $r_{01},\;r_{02}$ &\\ \phantom{1}7 & $r_{01},\;r_{02}$ & additional weights for age using ${\mathcal W}_{\rm age}$\\ \phantom{1}8 & $r_{01},\;r_{02}$, $T_{\rm eff}$, $\mathrm{[Fe/H]}$ & using $\sigma_{ T}=100$K and $\sigma_{\mathrm{[Fe/H]}}=0.1$ dex\\ \phantom{1}9 & $r_{01},\;r_{02}$, $T_{\rm eff}$, $\mathrm{[Fe/H]}$ & using $\sigma_{ T}=75$K and $\sigma_{\mathrm{[Fe/H]}}=0.05$ dex\\ 10 & $r_{01},\;r_{02}$, $T_{\rm eff}$, $\mathrm{[Fe/H]}$ & as in Method 9, but weighted for age using ${\mathcal W}_{\rm age}$\\ 11 & $\nu$ &\\ 12 & $\nu$ & additional weights for age using ${\mathcal W}_{\rm age}$\\ 13 & $\nu$, $T_{\rm eff}$, $\mathrm{[Fe/H]}$ & using $\sigma_{ T}=100$K and
$\sigma_{\mathrm{[Fe/H]}}=0.1$ dex\\ 14 & $\nu$, $T_{\rm eff}$, $\mathrm{[Fe/H]}$ & using $\sigma_{ T}=75$K and $\sigma_{\mathrm{[Fe/H]}}=0.05$ dex\\ 15 & $\nu$, $T_{\rm eff}$, $\mathrm{[Fe/H]}$ & as in Method 14, but weighted for age using ${\mathcal W}_{\rm age}$\\ 16 & $\nu$, $T_{\rm eff}$, $\mathrm{[Fe/H]}$ & as in Method 15, but with additional weight ${\mathcal W}_{\rm low}$\\ \enddata \end{deluxetable*} \section{The data} \label{sec:stars} \begin{deluxetable*}{lcccc} \tablecolumns{5} \tablecaption{Average seismic parameters and surface properties\label{tab:data}} \tablehead{\colhead{Star} & \colhead{$\Delta\nu$} & \colhead{$\nu_{\rm max}$} & \colhead{$T_{\rm eff}$} &\colhead{$\mathrm{[Fe/H]}$}\\ & \colhead{($\mu$Hz)} & \colhead{($\mu$Hz)} & \colhead{(K)} & \colhead{(dex)}} \startdata Sun & $134.91 \pm 0.02$ & $3073 \pm 13$ & $5772 \pm 10$ & $0.0 \pm 0.0$\\ 16~Cyg~A & $103.28 \pm 0.02$ & $2188 \pm 5 $ & $5825 \pm 50$ & $0.01\pm 0.03$\\ 16~Cyg~B & $116.93 \pm 0.02$ & $2561 \pm 5 $ & $5750\pm 50$ & $0.05 \pm 0.02$ \\ KIC~6106415 & $104.07 \pm 0.03$ & $2249 \pm 5 $ & $6037 \pm 77 $& $ -0.04\pm 0.1$\\ KIC~6225718 & $105.07 \pm 0.02$ & $2364 \pm 5 $ & $6313 \pm 77$ & $ -0.07\pm 0.1$\\ KIC~8006161 & $149.43 \pm 0.02$ & $3575 \pm 11$ & $5488 \pm 77$ & $ 0.34 \pm 0.1$\\ \enddata \end{deluxetable*} We analyze six stars in this paper. The first three, the Sun, 16~Cyg~A and 16~Cyg~B, not only have precise frequency data, they also have precise effective temperatures and metallicities. For the Sun, we use the degraded solar frequencies used by \citet{victor2017} as obtained by \citet{lund2017}. For each star, we use frequencies, large separations and $\nu_{\rm max}$\ values from \citet{lund2017}, although for $\Delta\nu$\ and $\nu_{\rm max}$\ we assume that the uncertainty is the larger of the two limits provided. We analyze three other stars: KIC~6106415 (HD~177153), KIC~6225718 (HD~187637), and KIC~8006161 (HIP~91949).
The frequencies of these stars are not as precise as those of the first three, and their $T_{\rm eff}$\ and $\mathrm{[Fe/H]}$\ values are not known as precisely either. The results for these stars allow us to judge the dependence of our results on the quality of the data. For the Sun, we assume the $T_{\rm eff}$\ value of \citet{mamajek2015}. For the components of the 16~Cyg system we use $T_{\rm eff}$\ and $\mathrm{[Fe/H]}$\ from \citet{ramirez2009}; for KIC~6106415, KIC~6225718 and KIC~8006161 we use $\mathrm{[Fe/H]}$\ from \citet{lars2015} and temperatures derived by Buchhave \& Latham as listed in \citet{savita2017}. Table~\ref{tab:data} lists the average seismic properties, $T_{\rm eff}$\ and $\mathrm{[Fe/H]}$\ of the stars being studied. \section{Construction of stellar models} \label{sec:models} We constructed a large number of models for each star with YREC, the Yale stellar evolution code \citep{demarque2008}. The models were constructed with the OPAL equation of state \citep{eos}. OPAL opacities \citep{opac} supplemented with low-temperature opacities from \citet{ferg} were used. We used the NACRE reaction rates from \citet{nacre} except for the $^{14}N(p,\gamma)^{15}O$ reaction, where we used the rates of \citet{marta}. We included diffusion and settling of heavy elements using the coefficients of \citet{thoul}. To avoid the problem of heavy elements draining out of the outer convection zones of hot stars in an unphysical manner, we artificially reduced the diffusion coefficients as a function of total mass as: \begin{equation} C_{\rm diff}= \begin{cases} 1, & \hbox{if}\;\; M \le 1.25\\ \exp\left[-\frac{(M-1.25)^2}{2(0.085)^2}\right] & \hbox{otherwise},\\ \end{cases} \label{eq:diffcoef} \end{equation} where $M$ is the mass of the model in units of M$_\odot$, and $C_{\rm diff}$ is a multiplicative factor for the \citet{thoul} diffusion coefficients.
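The taper of Eq.~\ref{eq:diffcoef} has the same piecewise-Gaussian form as the age weight of Eq.~\ref{eq:wage}; a vectorized sketch (the function name is ours):

```python
import numpy as np

def diffusion_factor(mass):
    """C_diff of Eq. (diffcoef): unity up to 1.25 Msun, then a
    Gaussian roll-off of width 0.085 Msun, applied as a multiplier
    to the Thoul et al. diffusion coefficients."""
    mass = np.asarray(mass, dtype=float)
    return np.where(mass <= 1.25, 1.0,
                    np.exp(-(mass - 1.25) ** 2 / (2.0 * 0.085 ** 2)))
```

By 1.5--1.6~M$_\odot$ the factor is already negligible, so diffusion is effectively switched off for the hottest models.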
\begin{deluxetable}{lcc} \tablecolumns{3} \tablecaption{Range of $T_{\rm eff}$\ and initial $\mathrm{[Fe/H]}$\ of models\label{tab:range}} \tablehead{\colhead{Star} & \colhead{$T_{\rm eff}$} &\colhead{$\mathrm{[Fe/H]}$}\\ & \colhead{(K)} & \colhead{(dex)}} \startdata Sun & 5550--5950 & $-0.33$--$0.47$ \\ 16~Cyg~A & 5640--6040 & $-0.20$--$0.60$ \\ 16~Cyg~B & 5610--6010 & $-0.25$--$0.55$ \\ KIC~6106415 & 5730--6345 & $-0.30$--$0.40$ \\ KIC~6225718 & 6005--6620 & $-0.10$--$0.70$ \\ KIC~8006161 & 5180--5800 & $\phantom{-}0.10$--$0.80$\\ \enddata \end{deluxetable} We start the modelling process with the observed $\Delta\nu$, $\nu_{\rm max}$, $T_{\rm eff}$, and $\mathrm{[Fe/H]}$\ of each star, along with their uncertainties. Using these we created many realizations of the four quantities. For $\Delta\nu$\ and $\nu_{\rm max}$\ we assumed that the realizations have a Gaussian distribution with a $\sigma$ of four times the uncertainty in each quantity. We assumed flat distributions for $T_{\rm eff}$\ and $\mathrm{[Fe/H]}$. Since the observed $\mathrm{[Fe/H]}$\ is the present-day metallicity, we assumed that the initial metallicity was about 0.1~dex higher than the present one for the purpose of constructing models. The ranges of current $T_{\rm eff}$\ and initial $\mathrm{[Fe/H]}$\ for the models of each star are listed in Table~\ref{tab:range}. For the initial helium abundance $Y_0$ we assumed a flat distribution between $0.24$ and $0.34$. The initial $\mathrm{[Fe/H]}$\ was converted to initial hydrogen abundance $X_0$ and heavy-element abundance $Z_0$ using the initial $Y_0$, assuming the \citet{gs98} solar metallicity scale, i.e., $\hbox{[Fe/H]}=0\equiv Z/X=0.023$. For any given realization, we have a set of parameters ($\Delta\nu$, $\nu_{\rm max}$, $T_{\rm eff}$, $X_0$, $Z_0$, $Y_0$). For each set, $\Delta\nu$\ and $\nu_{\rm max}$\ were converted to mass $M$ and radius $R$ using the asteroseismic scaling relations corrected as per the prescription of \citet{guggenberger2016}.
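For reference, the conversion rests on the standard scaling relations; a minimal sketch using the solar reference values from Table~\ref{tab:data} (the \citet{guggenberger2016} correction to $\Delta\nu$\ that we actually apply is not reproduced here):

```python
def scaling_mass_radius(dnu, numax, teff,
                        dnu_sun=134.91, numax_sun=3073.0, teff_sun=5772.0):
    """Standard (uncorrected) asteroseismic scaling relations:
    M/Msun = (numax/numax_sun)^3 (dnu/dnu_sun)^-4 (Teff/Teff_sun)^(3/2)
    R/Rsun = (numax/numax_sun)   (dnu/dnu_sun)^-2 (Teff/Teff_sun)^(1/2)"""
    m = (numax / numax_sun) ** 3 * (dnu / dnu_sun) ** -4 * (teff / teff_sun) ** 1.5
    r = (numax / numax_sun) * (dnu / dnu_sun) ** -2 * (teff / teff_sun) ** 0.5
    return m, r
```

With the 16~Cyg~A values from Table~\ref{tab:data}, these uncorrected relations give a mass near $1.06\,M_\odot$ and a radius near $1.22\,R_\odot$.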
This gave us sets of ($M$, $R$, $T_{\rm eff}$, $X_0$, $Z_0$), where $M$, $X_0$, $Z_0$ are inputs to the models and we want a model of radius $R$ at the specified $T_{\rm eff}$\ to be the output. This is done by using YREC in an iterative manner where we iterate over $\alpha_{\rm MLT}$, the mixing length parameter, until we obtain a model that has the required radius at the specified $T_{\rm eff}$. The condition on the observed present-day metallicity is applied {\it post facto} when we use Methods~3--5, 8--10, and 13--16 to analyze the ensemble of models; it is ignored otherwise. In Fig.~\ref{fig:alpha}\ we show an example of how $\alpha_{\rm MLT}$ has to change with $T_{\rm eff}$\ in order for a 1M$_\odot$\ model to have a radius of 1R$_\odot$\ at that value of $T_{\rm eff}$. \begin{figure} \includegraphics[width=2.50 true in]{fig6.eps} \caption{The mixing-length parameter $\alpha_{\rm MLT}$ required for a 1M$_\odot$, $\mathrm{[Fe/H]}$=0 model to have a radius of 1R$_\odot$\ at different values of $T_{\rm eff}$. We show the case for three different values of $Y_0$. Unlike the rest of the models used in this work, the models in this figure were constructed without diffusion and settling of heavy elements to ensure that the models have the exact value of the current surface metallicity. } \label{fig:alpha} \end{figure} The frequencies of each model were calculated using the code of \citet{ab94} and were then used to calculate $\epsilon$\ and the frequency ratios $r_{01}$ and $r_{02}$. For models of any given star, we only used those modes that have been observed. The frequency differences of each model with respect to the observed frequencies were fitted to the Ball \& Gizon form (Eq.~\ref{eq:bl}) to determine the surface-term corrected frequencies. The ensemble of models for each star was then analyzed using the different measures listed in Table~\ref{tab:methods}.
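The per-model quantities entering the analysis can be sketched as follows (a minimal sketch; the frequency dictionaries are hypothetical, and $r_{02}$ follows Eq.~\ref{eq:r02} as written):

```python
def epsilon(nu, n, l, dnu_mean):
    """Invert Eq. (eps), nu = <Dnu>(n + l/2 + eps), for the phase factor."""
    return nu / dnu_mean - n - l / 2.0

def r02(nu0, nu1, nu2, n):
    """Eq. (r02): r02(n) = (nu_{n+1,0} - nu_{n,2}) / (nu_{n,1} - nu_{n-1,1}).
    nu0, nu1, nu2 map radial order n to the l = 0, 1, 2 frequencies."""
    return (nu0[n + 1] - nu2[n]) / (nu1[n] - nu1[n - 1])
```

For purely asymptotic frequencies $\nu = \Delta\nu\,(n + l/2 + \epsilon_0)$, the first function recovers $\epsilon_0$ for every mode, and $r_{02}$ reduces to the small separation divided by the large separation.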
\section{Results} \label{sec:res} The mass, radius and age of the Sun, and the two components of the 16~Cyg system, as derived using the sixteen different combinations, are shown in Fig.~\ref{fig:rma1}. It is very clear from the figure that the central values of the estimates (i.e., the likelihood-weighted means) are very similar in each case. The uncertainties on the results, on the other hand, are somewhat different. The methods that explicitly use the surface-term corrected frequencies appear to have the lowest uncertainties; however, at least for the Sun, where we have independent measures of mass and radius, the surface-term corrected frequencies give the largest systematic errors. But in all cases, the results are well within 1$\sigma$ of the actual value. Unlike radius and mass, age uncertainties are lowered considerably when $T_{\rm eff}$\ and $\mathrm{[Fe/H]}$\ are considered explicitly --- not surprising, since the age of a model at a given mass and radius depends critically on its temperature and metallicity. It should be noted that all results presented here are consistent with those of \citet{victor2017}. \begin{figure*} \centerline{ \includegraphics[width=2.25 true in]{fig7_l.eps} \includegraphics[width=2.25 true in]{fig7_m.eps} \includegraphics[width=2.25 true in]{fig7_r.eps} } \caption{Estimates of mass, radius and age of the Sun, 16 Cyg~A and 16 Cyg~B obtained using goodness-of-fit measures to different observables as listed in Table~\ref{tab:methods}. The numbers on the abscissa refer to the method number in the table. Red, blue and orange points denote results for which the seismic variable used is $\epsilon$, frequency ratios, and surface-term corrected frequencies, respectively. The horizontal lines for the solar case are the known solar values. For the other cases we plot the mean of the results obtained.
The vertical extent of the panels shows a fixed fraction of the values --- $\pm 5$\% for radius, $\pm 10$\% for mass, and $\pm 30$\% for age --- to give an indication of the precision of the estimates. } \label{fig:rma1} \end{figure*} Asteroseismic analyses use $T_{\rm eff}$\ and $\mathrm{[Fe/H]}$\ to complement the data set; nevertheless, we attempted to examine how well we could estimate the temperature and metallicity of stars from seismic data alone. The results for the Sun and the 16 Cyg system are shown in Fig.~\ref{fig:tz}. We find that the temperature can be localized to better than $\pm 100$~K when no constraint is applied; when we do apply a temperature constraint, we recover the temperature to about $\pm 70$~K regardless of whether we assumed a Gaussian spread of 100~K or 75~K. We should note, however, that although we assumed a flat prior in temperature, the models were restricted to a finite (400~K) range in temperature, and that could have an effect on the final spread of the results. Metallicity results are worse; unless we apply a constraint, we can recover $\mathrm{[Fe/H]}$\ to only between 0.1 and 0.2~dex. We get more precise results when the explicit surface-term corrections are used, but judging by the solar case, that can result in larger systematic errors. \begin{figure*} \centerline{ \includegraphics[width=2.25 true in]{fig8_l.eps} \includegraphics[width=2.25 true in]{fig8_m.eps} \includegraphics[width=2.25 true in]{fig8_r.eps} } \caption{$T_{\rm eff}$\ and $\mathrm{[Fe/H]}$\ for the Sun, 16 Cyg~A and 16 Cyg~B. In the panels that show results for the Sun, the horizontal line marks the known solar value; in the other panels the line is simply an arithmetic average of all the results shown in the panel. } \label{fig:tz} \end{figure*} \begin{figure} \includegraphics[width=2.75 true in]{fig9.eps} \caption{The initial helium abundance and the current convection-zone helium abundance of the Sun.
The solid horizontal line in each panel marks the helioseismically determined value of that quantity --- $Y_0$ from \citet{aldo2010} and $Y_{\rm CZ}$ from \citet{basu1998}. Dotted lines show 1$\sigma$ uncertainties of the helioseismic estimates. } \label{fig:ycz} \end{figure} \begin{figure} \includegraphics[width=2.75 true in]{fig10.eps} \caption{The initial helium and heavy-element abundance of 16 Cyg A (large red points) and 16 Cyg B (small blue points). } \label{fig:cygy} \end{figure} For the Sun, we find that no matter which method we use, we can reproduce the helioseismically determined initial helium abundance of the Sun \citep{aldo2010} as well as the current convection-zone helium abundance \citep[see][]{basu1998}. These are shown in Fig.~\ref{fig:ycz}. The precision for $Y_0$ is, however, a factor of five worse than that obtained from the detailed helioseismic study. The uncertainty in $Y_{\rm CZ}$ is a factor of ten worse. However, the difference between $Y_0$ and $Y_{\rm CZ}$ does give a good estimate of the amount of helium that has settled out of the convection zone. The analysis of the two components of the 16~Cyg system shows that estimates of $Y_0$ and $Z_0$ are the same for both components of the system (Fig.~\ref{fig:cygy}), even though we did not take into account the fact that the two stars should have the same initial composition. \begin{figure*} \centerline{ \includegraphics[width=2.25 true in]{fig11_l.eps} \includegraphics[width=2.25 true in]{fig11_m.eps} \includegraphics[width=2.25 true in]{fig11_r.eps} } \caption{Mass, radius and age estimates for KIC~6106415, KIC~6225718 and KIC~8006161. The colors are the same as in Fig.~\ref{fig:rma1}. The horizontal line in each panel is the average of all the values in that panel.
} \label{fig:rma2} \end{figure*} The Sun and the two stellar components of the 16~Cyg system have the most precisely determined frequencies, and this leads to the question of what happens if the frequency set is not as extensive, or if the errors on the modes are larger. To answer this we analyzed the data of the other three stars; the results are shown in Fig.~\ref{fig:rma2}. As can be seen, the results obtained for a given star using the different asteroseismic parameters are consistent with each other, and all results agree well within 1$\sigma$ of each other. The poorer precision of the frequency sets of these stars translates into poorer precision of the results; the best precision is obtained when the frequencies are explicitly corrected for the surface term, but as before, there could be a larger systematic error in this case, which we have no means of evaluating independently. \begin{figure*} \centerline{ \includegraphics[width=2.25 true in]{fig12_l.eps} \includegraphics[width=2.25 true in]{fig12_m.eps} \includegraphics[width=2.25 true in]{fig12_r.eps} } \caption{The initial helium abundance $Y_0$ and initial heavy-element abundance $Z_0$ of KIC~6106415, KIC~6225718 and KIC~8006161. The colors are the same as in Fig.~\ref{fig:rma1}. The horizontal line in each panel is the average of all the values in the panel. } \label{fig:sy} \end{figure*} Estimates of $Y_0$ and $Z_0$ for KIC~6106415, KIC~6225718 and KIC~8006161 behave differently, though. The spread in values is much larger (see Fig.~\ref{fig:sy}), particularly for KIC~6225718 and KIC~8006161, the two stars with comparatively larger frequency uncertainties. However, the results still agree within 1$\sigma$, albeit with a large $\sigma$. For these stars, meaningful results for $Z_0$ are obtained only for cases where $T_{\rm eff}$\ and the surface metallicity are explicitly taken into account in the analysis.
\begin{deluxetable*}{lccccc} \tablecolumns{6} \tablecaption{Spread in results (in percent) caused by different ways of handling the surface term. The Method numbers are the same as those in Table~\ref{tab:methods}} \tablehead{\colhead{Methods}& \multicolumn{5}{c}{\% Spread in stellar-parameter estimates} \\ \colhead{compared} & \colhead{Mass} & \colhead{Radius} & \colhead{Age} & \colhead{$Y_0$} & \colhead{$Z_0$} \label{tab:spread}} \startdata 1, \phantom{1}6, 11 & 1.1& 0.45 & 2.7 & 2.6 & 6.1\\ 2, \phantom{1}7, 12 & 1.1& 0.45 & 2.8 & 2.5 & 6.3\\ 3, \phantom{1}8, 13 & 1.0& 0.40 & 2.9 & 2.9 & 2.9\\ 4, \phantom{1}9, 14 & 1.2& 0.46 & 1.9 & 2.5 & 1.4\\ 5, 10, 15 & 1.1& 0.42 & 1.9 & 2.3 & 1.3\\ \enddata \end{deluxetable*} The spread in the global parameters obtained using the different ways of accounting for the surface term is listed in Table~\ref{tab:spread}. The different rows correspond to analyses in which different non-seismic constraints were used. The first row, for example, looks at the spread in results when no non-seismic constraint was used. As can be seen, the change in radius is typically less than 0.5\% and the change in mass is about 1\%. These numbers are smaller than the typical uncertainties in mass and radius caused by uncertainties in asteroseismic and spectroscopic data { \citep[see e.g.,][]{metcalfe2014, victor2015, victor2017}}. The effect on ages is larger and depends on whether or not $T_{\rm eff}$\ and $\mathrm{[Fe/H]}$\ have been used; the effect is at the 2--3\% level, much smaller than the usual uncertainties in age. The effect on $Y_0$ is very similar to that on age. As far as $Z_0$ is concerned, the effect of the surface term appears to be tied to whether or not $\mathrm{[Fe/H]}$\ is used as a constraint, and the spread can be as high as 6\% when $\mathrm{[Fe/H]}$\ is not used to constrain the results.
We should emphasize that the seeming insensitivity of the results to the way the surface term is accounted for does not mean that we have the option to ignore it altogether. { When a sufficient number of modes is available (as is the case for the six stars showcased in this investigation) the $\chi^2$ for the best fit to the frequencies without a surface term correction is very large. This occurs because models that give a good fit to the low-frequency modes do not fit the high-frequency modes, and vice versa. Models that give better fits to the low-frequency modes have properties that are closer to those of the star than models that fit the high-frequency end. This is a result of the fact that the change in frequency due to the surface term increases with mode frequency. The more problematic case is when modes are available only over a small frequency range (2--3 $\Delta\nu$) around $\nu_{\rm max}$; in that case one can get a good fit in terms of $\chi^2$, but the chance of fitting a model with the wrong parameters is high. Thus without a proper accounting of the surface term, the results obtained will not be reliable.} In this work we have not tested the effect of systematic errors in $T_{\rm eff}$\ and $\mathrm{[Fe/H]}$. However, a simple analysis of the solar case shows that if the systematic error in temperature is of the order of the uncertainties in temperature, the effect on mass is no more than 1\%, and the effect on radius is much lower. The effect on age, however, can be larger, of the order of a few percent. The effect of metallicity errors is slightly larger. These effects are being examined in detail in a separate investigation (Bellinger et al. {\it in preparation}). \section{Discussion and Conclusions} \label{sec:conclu} We have demonstrated that how the surface term in asteroseismic data is accounted for does not affect estimates of the mass, radius or age of a star. 
No matter how the surface term is dealt with, whether by bypassing the issue completely by using $\epsilon$, minimizing its contribution using ratios, or explicitly removing it using a model for the surface term, we get very consistent results for the most important global properties of the stars. The spread in mass caused by the different ways of constraining the surface term is only about 1\%, the spread in radius is less than 0.5\%, while the spread in age, when $T_{\rm eff}$\ and $\mathrm{[Fe/H]}$\ are used, is about 2\%. All these are lower than the typical uncertainties in these quantities caused by data errors. Meaningful estimates of initial helium and metallicity require knowledge of the effective temperature and current metallicity of the stars; once the spectroscopic data are taken into account, the results are robust, with uncertainties small enough to make the results meaningful. { We used all available radial, dipole and quadrupole modes for the stars studied in this paper. We did not use octupole modes even when available, since they can be observed in very few stars from photometric data.} The robustness of the results to surface term corrections holds as long as we have enough modes of each degree to determine whether $\epsilon$\ has a degree-dependence, which means that we need modes of at least two different degrees, or that we have enough modes to calculate a sufficient number of frequency ratios $r_{01}$ and $r_{02}$. { Mode sets with as few as four to five orders per degree give good results. The range of frequencies covered by the available modes does not matter for the $\epsilon$-matching or the ratios method; however, it is important when the surface term is calculated explicitly. The \citet{ball2014} formulation and the \citet{hans} formulation can only be fitted over a limited frequency range. 
} { The very low frequencies, where surface term corrections are small ($ \lesssim 2000\; \mu$Hz for the solar case, see Fig.~\ref{fig:ssmsurf}), and the high frequencies, where the surface term turns over ($\gtrsim 4000\; \mu$Hz for the solar case), are fitted badly by these forms.} For these regions other forms { of the surface term need to be used. One can, for instance, explicitly use the scaled solar surface term as was done by the ``ASFIT'' and ``YMCM'' analyses described in Appendix~A of \citet{victor2015}.} However, with photometric measurements, the likelihood of obtaining { frequencies so low that the functional forms do not apply is small, since the signal from granulation masks the modes, and high-frequency modes usually have large enough uncertainties that they do not play much of a role in determining stellar properties.} All six stars in our sample are main sequence stars. A valid question to ask is what happens for more evolved stars, particularly subgiants. { We do not expect the results to be significantly different from what we have found for the main sequence stars. The p-mode frequencies of subgiants are affected by the surface term in the same way as the p modes of main sequence stars. Given that the radius and mass of stars can be estimated precisely with only the average asteroseismic parameters $\Delta\nu$\ and $\nu_{\rm max}$ \citep[see e.g.][]{ning}, the pure p modes of subgiant stars are enough to determine radius and mass precisely, and thus these quantities should only be affected to the same extent as those for main sequence stars.} Age is a somewhat different issue. If we look for age precision at the same level as that for main sequence stars, then the results of this work should apply even for evolved stars. However, much more precise ages can be obtained for subgiants if mixed-mode frequencies are used \citep{metcalfe2010, deheuvels}. 
The frequency of mixed modes is a very sensitive function of age, and thus the broad-brush statistical technique is not the best if we are to determine the age of a subgiant to the precision that is possible. To get better results, we need to start with the best-fit stellar parameters that fit the p modes and search around them. Since mixed modes have high inertia, and are thus less affected by surface effects, we expect the final result to be insensitive to how the surface term in the p modes was removed. However, on the red giant branch, where all dipole modes are mixed modes, the usual surface term corrections may not apply at all \citep{ball2018} and the situation will be very different, unless we confine ourselves to using only the radial and quadrupole modes. { The expected results for subgiants and red giants need to be tested properly with real data.} It should be noted that our results pertain to the effects of the surface term alone; the model dependence of the different estimates is beyond the scope of this paper. In particular, one needs to keep in mind that age estimates of stars are always model dependent. \acknowledgments We would like to thank the referee for constructive comments that have helped in improving this paper. SB would like to acknowledge partial support from NSF grant AST-1514676 and NASA grant NNX16AI09G. \facility{{\it Kepler}} \software{ YREC \citep{demarque2008}}
The Department of Finance, established by City Charter and the Codified Ordinances, oversees all financial functions and activities of the City. Here you can utilize resources to pay taxes and utility bills and access vendor, purchasing, and accounting reports and information. The City of Springfield's procurement functions are centralized, with all formal purchases (generally over $50,000) and informal purchases (under $50,000) being handled by the Purchasing Division. Other than posting procurement activities on this web site, we advertise formal bids and requests for proposals in the Springfield News-Sun on Mondays, Wednesdays, and Fridays. For purchasing requests and questions, please contact Brandy Bubp at (937) 324-7333. The Income Tax Division is responsible for administering the Income Tax Ordinances: Chapter 195 – Tax on Earned Income and Chapter 196 – Municipal Income Tax on Earned Income. Tax Office employees are available to provide assistance in the filing of income tax returns, processing income tax payments, and answering tax questions. The Utility Billing Division is responsible for assessing, billing, and collecting for utility services provided by the City.
Dendrobium platylobum is a species of orchid first described by Rudolf Schlechter; it was given its currently accepted name by Johannes Jacobus Smith. Dendrobium platylobum belongs to the genus Dendrobium and the orchid family (Orchidaceae). No subspecies are listed in the Catalogue of Life.
Q: \write and pgfplotstable: Adding \pgfplotstableread into \loop goes wrong

I got some code from @DavidCarlisle that generates output txt files, and I want to put all the output files together into a pgfplotstable. So I tried adding

\ifnum\the\filecount=1
\pgfplotstableread[col sep=comma]{data1.txt}{\main}
\else {.........}
\fi

But this gives only some strange Omegas, and \pgfplotstabletypeset[col sep=comma]{\main} does not work. What do I have to do?

\documentclass[a4paper]{article}
\usepackage{pgfplotstable}
\newcount\filecount
\newwrite\cisout
\begin{document}
{
\endlinechar=\newlinechar%
\filecount=1 %
\def\aaa{file number \the\filecount}%
\loop%
\immediate\openout\cisout=data\the\filecount.txt %
\immediate\write\cisout{%
111, 222, \aaa
}%
\immediate\closeout\cisout %
\advance\filecount by 1 %
\ifnum\filecount<5 %
\ifnum\the\filecount=1
\pgfplotstableread[col sep=comma]{data1.txt}{\main}
\else\fi
\repeat
%
}%
\section{pgfplotstable Test - bad}
%\pgfplotstabletypeset[col sep=comma]{\main}
\dots and some bad Omegas above\dots
\section{input Test - good}
\input{data1.txt}
\input{data3.txt}
\end{document}

A: You're doing \ifnum\filecount=1 when the counter has already been advanced, so the code for \pgfplotstableread is never executed. Besides, even if you manage to execute it, the whole \loop is in a group, so \main will be forgotten at the end of the group. The Omegas were produced because \fi was not followed by %. Here's a working code, although it's not clear what your aim is. 
\documentclass[a4paper]{article}
\usepackage{pgfplotstable}
\newcount\filecount
\newwrite\cisout
\begin{document}
\filecount=1
\def\aaa{file number \the\filecount}%
\loop
\immediate\openout\cisout=data\the\filecount.txt
\immediate\write\cisout{%
111, 222, \aaa
}
\immediate\closeout\cisout
\ifnum\the\filecount=1 \pgfplotstableread[col sep=comma]{data1.txt}{\main}\fi
\advance\filecount by 1
\ifnum\filecount<5
\repeat

\section{pgfplotstable Test - bad}
\pgfplotstabletypeset[col sep=comma]{\main}

\section{input Test - good}
\input{data1.txt}
\input{data3.txt}
\end{document}

A: Ultimately, is this what you are trying to achieve?

\documentclass[a4paper]{article}
\usepackage{pgfplotstable}
\newcount\filecount
\newwrite\cisout
\begin{document}
{
\filecount=0
\immediate\openout\cisout=data1.txt
\immediate\write\cisout{a, b, c}% write header
\loop\ifnum\filecount<5
\advance\filecount by 1
\immediate\write\cisout{111, 222, \the\filecount }%
\repeat% no \fi needed
\immediate\closeout\cisout
\pgfplotstableread[col sep=comma]{data1.txt}{\main}%
\pgfplotstabletypeset\main
\end{document}

This version uses \foreach.

\documentclass[a4paper]{article}
\usepackage{pgfplotstable}
\newwrite\cisout
\begin{document}
\immediate\openout\cisout=data1.txt
\immediate\write\cisout{a, b, c}% write header
\foreach \i in {1,..., 5}%
{\immediate\write\cisout{111, 222, \i }}%
\immediate\closeout\cisout
\pgfplotstableread[col sep=comma]{data1.txt}{\main}%
\pgfplotstabletypeset\main
\end{document}
\section{INTRODUCTION} Prime submodules of modules were introduced as a generalization of prime ideals of rings by J. Dauns \cite{Dau78}, and several algebraists carried out an intensive and systematic study of the spectrum of prime submodules (e.g. \cite{lu84}, \cite{mm92}, \cite{lu95}, \cite{mms97}, \cite{mms98}, \cite{lu99}, \cite{mms02}, \cite{lu07}). Here, quasi-prime submodules of $M$ are introduced as a generalization of prime submodules. We also investigate quasi-primeful modules and apply them to develop topological properties of $q\Spec(M)$, where $q\Spec(M)$ is the set of all quasi-prime submodules of $M$. The Zariski topology on the spectrum of prime ideals of a ring is one of the main tools in Algebraic Geometry. In the literature, there are many different generalizations of the Zariski topology of rings to modules (see \cite{mms97}, \cite{behI08}, \cite{behII08}, or \cite{lu99}). In this paper, we are going to study the developed Zariski topology, a generalization of the Zariski topology considered in \cite{lu99}, on $q\Spec(M)$, where $M$ is an $R$-module. As is well known, the Zariski topology has been defined on the set of all prime submodules of a module; here, we consider the developed Zariski topology on the set of all quasi-prime submodules of a module. Throughout this paper, all rings are commutative with identity and all modules are unital. For a submodule $N$ of an $R$-module $M$, $(N :_R M)$ denotes the ideal $\{r\in R \mid rM\subseteq N\}$ and the annihilator of $M$, denoted by $\Ann_R(M)$, is the ideal $(\textbf{0}:_R M)$. $M$ is called faithful if $\Ann(M)=(0)$. If there is no ambiguity we write $(N : M)$ (resp. $\Ann(M)$) instead of $(N :_R M)$ (resp. $\Ann_R(M)$). A proper ideal $I$ of a ring $R$ is said to be \emph{quasi-prime} if for each pair of ideals $A$ and $B$ of $R$, $A \cap B \subseteq I$ yields either $A\subseteq I$ or $B \subseteq I$ (see \cite{az08}, \cite{Bou70} and \cite{hen02}). 
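As a small illustration of this definition (ours, not part of the original text), consider ideals of $\mathbb{Z}$, where $(a)\cap(b)=(\operatorname{lcm}(a,b))$; the condition can then be checked by brute force over generators. For instance $(4)$ is quasi-prime, while $(12)$ is not, since $(4)\cap(3)=(12)$ but neither $(4)$ nor $(3)$ is contained in $(12)$. This is consistent with the fact, recalled later in the text, that in a Dedekind domain the quasi-prime ideals are exactly the primary ones.

```python
from math import gcd

def is_quasi_prime_ideal(n, bound=60):
    """Finite check of the quasi-prime condition for the ideal (n) in Z.

    In Z, (a) intersect (b) = (lcm(a, b)), so (n) is quasi-prime iff
    n | lcm(a, b) implies n | a or n | b.  A False return exhibits a
    genuine counterexample; True only means that no counterexample was
    found for generators below `bound`.
    """
    for a in range(1, bound):
        for b in range(1, bound):
            lcm = a * b // gcd(a, b)
            if lcm % n == 0 and a % n != 0 and b % n != 0:
                return False
    return True

# Prime powers pass the check; ideals with two distinct prime factors fail.
print(is_quasi_prime_ideal(4), is_quasi_prime_ideal(9))   # True True
print(is_quasi_prime_ideal(6), is_quasi_prime_ideal(12))  # False False
```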
It is easy to see that every prime ideal is a quasi-prime ideal. Also, every quasi-prime ideal is irreducible (an ideal $I$ of a commutative ring $R$ is said to be \emph{irreducible} if $I$ is not the intersection of two ideals of $R$ that properly contain it). A submodule $N$ of an $R$-module $M$ is said to be \emph{prime} if $N\neq M$ and whenever $rm \in N$ (where $r\in R$ and $m \in M$), then $r\in (N : M)$ or $m \in N$. If $N$ is prime, then the ideal $\p = (N : M)$ is a prime ideal of $R$. In these circumstances, $N$ is said to be \emph{$\p$-prime} (see \cite{lu84}). A submodule $Q$ of an $R$-module $M$ is said to be \emph{primary} if $Q\neq M$ and if $rm \in Q$, where $r \in R$ and $m \in M$, implies that either $m \in Q$ or $r \in \q=\sqrt{(Q : M)}$. If $Q$ is primary, then $(Q : M)$ is a primary ideal of $R$. In this case we say that $Q$ is \emph{$\q$-primary}, where $\q = \sqrt{(Q : M)}$ is a prime ideal of $R$. The set of all prime submodules of an $R$-module $M$ is called the \emph{prime spectrum} of $M$ and denoted by $\Spec(M)$. Similarly, the collection of all $\p$-prime submodules of an $R$-module $M$ is designated by $\Spec_{\p}(M)$ for any $\p\in \Spec(R)$. We remark that $\Spec(\textbf{0}) = \emptyset$ and that $\Spec(M)$ may be empty for some nonzero module $M$. For example, $\mathbb{Z}(p^{\infty})$ as a $\mathbb{Z}$-module has no prime submodule for any prime integer $p$ (see \cite{lu95}). Such a module is said to be \emph{primeless}. An $R$-module $M$ is called \emph{primeful} if either $M =(\textbf{0})$ or $M\neq (\textbf{0})$ and the map $\Phi : \Spec(M)\rightarrow \Spec(R/\Ann(M))$ defined by $\Phi(P) = (P : M)/\Ann(M)$ for every $P\in \Spec(M)$, is surjective (see \cite{lu07}). The set of all maximal submodules of an $R$-module $M$ is denoted by $\Max(M)$. The \emph{Jacobson radical} $\Rad(M)$ of a module $M$ is the intersection of all its maximal submodules. $\Rad(M) = M$ when $M$ has no maximal submodules. 
By $N\leq M$ we mean that $N$ is a submodule of $M$. Let $\p$ be a prime ideal of $R$, and $N\leq M$. By the \emph{saturation of $N$ with respect to $\p$}, we mean the contraction of $N_{\p}$ in $M$; we designate it by $S_{\p}(N)$ and we say $N$ is \emph{saturated with respect to} $\p$ if $S_{\p}(N)=N$ (see \cite{lu03}). An $R$-module $M$ is called a \emph{multiplication} module if every submodule $N$ of $M$ is of the form $IM$ for some ideal $I$ of $R$. For any submodule $N$ of an $R$-module $M$ we define $V^M(N)$ to be the set of all prime submodules of $M$ containing $N$. The \emph{radical} of $N$ is defined to be the intersection of all prime submodules of $M$ containing $N$ and denoted by $\rad_M(N)$ or briefly $\rad(N)$. $\rad_M(N) = M$ when $M$ has no prime submodule containing $N$. In particular, $\rad(\textbf{0}_M)$ is the intersection of all prime submodules of $M$. If $V^M(N)$ has at least one minimal member with respect to the inclusion, then every such minimal member is called a \emph{minimal prime submodule of $N$} or a \emph{prime submodule minimal over $N$}. A minimal prime submodule of $(\textbf{0})$ is called a \emph{minimal prime submodule of $M$}. A quasi-prime submodule $N$ of an $R$-module $M$ is called \emph{minimal quasi-prime} if, for any quasi-prime submodule $K$ of $M$ with $K\subseteq N$, we have $K=N$. An $R$-module $M$ is said to be \emph{semiprimitive} (resp. reduced) if the intersection of all maximal (resp. prime) submodules of $M$ is equal to zero. A submodule $N$ of an $R$-module $M$ is said to be \emph{quasi-semiprime} if it is an intersection of quasi-prime submodules. We recall that an $R$-module $M$ is \emph{co-semisimple} in case every submodule of $M$ is the intersection of maximal submodules (see \cite[p.122]{ful92}). Every proper submodule of a co-semisimple module is a quasi-semiprime submodule. In Section $2$, we obtain some properties of quasi-prime submodules. 
In that section the relations between quasi-prime submodules of a module $M$ and quasi-prime submodules of localizations of $M$ are studied. We also investigate the quasi-primeful modules and apply them to develop topological properties of $q\Spec(M)$. We show in Theorem~\ref{mapa} that an $R$-module $M$ is quasi-primeful whenever $R$ is a $PID$ and $M$ is finitely generated, or $R$ is Laskerian and $M$ is a locally free $R$-module. We study some main properties of quasi-primeful modules in Proposition \ref{paracor}, and the quasi-prime-embedding modules are studied in Theorem \ref{freet0}. It is shown that an $R$-module $M$ is top in the case that $R$ is a one-dimensional Noetherian domain and either $M$ is weak multiplication or for every prime ideal $\p\in \Spec(R)$, $|\Spec_{\p}(M)|\leq 1$ and $S_{(0)}(\textbf{0})\subseteq \rad(\textbf{0})$. In Section $3$, we introduce a topology on the set of quasi-prime submodules in such a way that the Zariski topology (see \cite{lu99}) is a subspace of this topology, and some related properties are given. $R$-modules whose developed Zariski topology is $T_0$, irreducible, or Noetherian are also studied in Section 3. \section{SOME PROPERTIES OF QUASI-PRIME SUBMODULES} In this section we introduce the notion of quasi-prime submodule and find some properties of it. We also introduce the notions of quasi-primeful and quasi-prime-embedding modules and we use them in the next section. \begin{defn} A proper submodule $N$ of an $R$-module $M$ is called \emph{quasi-prime} if $(N:_R M)$ is a quasi-prime ideal of $R$. \end{defn} We define the \emph{quasi-prime spectrum} of an $R$-module $M$ to be the set of all quasi-prime submodules of $M$ and denote it by $q\Spec^R(M)$. If there is no ambiguity we write only $q\Spec(M)$ instead of $q\Spec^R(M)$. For any $I\in q\Spec(R)$, the collection of all quasi-prime submodules $N$ of $M$ with $(N:M)=I$ is designated by $q\Spec_I(M)$. 
We say that $R$ is a \emph{serial ring} if the set of all ideals of $R$ is linearly ordered. Recall that a ring $R$ is said to be \emph{arithmetical} if, for any maximal ideal $\p$ of $R$, $R_{\p}$ is a serial ring (see \cite{Jen66}). Recall that a module $M$ is said to be a \emph{Laskerian module} if every proper submodule of $M$ has a primary decomposition. We know that every Noetherian module is Laskerian. \begin{rem}\label{6} (See \cite{az08}, \cite{hen02} and \cite{Jen66}) Let $I$ be an ideal in a ring $R$ and $S$ be a multiplicatively closed subset of $R$. Then \begin{enumerate} \item If $I$ is quasi-prime, then $I$ is irreducible; \item If $R$ is a Laskerian ring, then every quasi-prime ideal is a primary ideal; \item If $I$ is a prime ideal, then $I$ is quasi-prime; \item Every proper ideal of a serial ring is quasi-prime; \item If $IR_S$ is a quasi-prime ideal of $R_S$, then $IR_S \cap R$ is a quasi-prime ideal of $R$; \item If $I$ is a quasi-prime and primary ideal of $R$ such that $I \cap S =\emptyset$, then $IR_S$ is a quasi-prime ideal of $R_S$; \item If $R$ is an arithmetical ring, $I$ is irreducible if and only if $I$ is quasi-prime; \item In an arithmetical ring $R$ any primary ideal is irreducible; \item If $R$ is a Dedekind domain, then $I$ is quasi-prime if and only if $I$ is a primary ideal. \end{enumerate} \end{rem} \begin{rem}\label{remaux} Let $M$ be an $R$-module. \begin{enumerate} \item By \cite[Proposition 4]{lu84}, every maximal submodule of an $R$-module $M$ is prime and by Remark~\ref{6}, every prime submodule of $M$ is a quasi-prime submodule. Therefore, $\Max(M)\subseteq \Spec(M)\subseteq q\Spec(M)$. So, $q\Spec(M)\neq\emptyset$ if $M$ is not primeless. \item Consider $M=\mathbb{Z}\oplus \mathbb{Z}$ as a $\mathbb{Z}$-module and let $N=(2,0)\mathbb{Z}$ be the submodule of $M$ generated by $(2, 0) \in M$. Then $(N:M)=(0)\in \Spec(\mathbb{Z})$, i.e., $N\in q\Spec(M)$ though $N$ is not a $(0)$-prime submodule of $M$. 
Thus in general, a quasi-prime submodule need not be a prime submodule, i.e., $\Spec(M)\neq q\Spec(M)$. \item As another example, we consider the faithful torsion $\mathbb{Z}$-module $M=\bigoplus_{p} \mathbb{Z}/{p\mathbb{Z}}$, where $p$ runs through the set of all prime integers. Let $N=(\textbf{0})$ and $\p=(0)$. Then $(N:M)=(\textbf{0}:M)=\Ann(M)=(0)$. Hence, $N\in q\Spec(M)$. However, $N$ is not a prime submodule by \cite[Result 2]{lu03}, because $S_{\p}(N)=S_{(0)}(\textbf{0})=M$. \end{enumerate} \end{rem} An $R$-module $M$ is called a \emph{fully prime module} if every proper submodule is a prime submodule. In \cite[Proposition 1.10]{bkk04}, the authors give several equivalent conditions for an $R$-module $M$ to be fully prime, for example, $M$ is a fully prime $R$-module if and only if $\Ann(M)$ is a maximal ideal, i.e., if and only if $M$ is a homogeneous semisimple module (i.e., a direct sum of isomorphic simple $R$-modules). \begin{lem}\label{lemaux}\label{lemaux2} Let $J\in q\Spec(R)$, $\p\in \Spec(R)$, $I$ be a proper ideal of $R$ and $M$ be an $R$-module with submodule $N$. Let $S$ be a multiplicatively closed subset of $R$. \begin{enumerate} \item If $N\in q\Spec_J(M)$, then $(N:M)M\in q\Spec_J(M)$; \item If $\{N_\lambda\}_{\lambda\in \Lambda}$ is a family of quasi-prime submodules with $(N_\lambda:M)=J$ for each $\lambda\in \Lambda$, then $\cap_{\lambda\in \Lambda}N_\lambda\in q\Spec_J(M)$; \item If $M$ is a fully prime module, then every proper submodule of $M$ is quasi-prime. In particular, every proper subspace of a vector space over a field is quasi-prime; \item If $R$ is a serial ring, then every proper submodule of $M$ is quasi-prime;\label{n1} \item Let $N$ be a quasi-prime submodule of the $R_S$-module $M_S$. Then $N\cap M$ is a quasi-prime submodule of $M$. So, $\{N\cap M \,|\; N\in q\Spec(M_S)\}\subseteq q\Spec(M)$;\label{n2} \item Let $R$ be Laskerian and $M$ be a finitely generated $R$-module. 
If $N$ is a quasi-prime submodule of $M$ and $\sqrt{(N:M)}\cap S=\emptyset$, then $N_S$ is a quasi-prime submodule of $M_S$;\label{n3} \item Let $R$ be an arithmetical ring. Then every primary submodule of $M$ is quasi-prime; \item Let $R$ be an arithmetical ring. If $\p\in V^R(I)$, then $S_{\p}(I)$ is a quasi-prime ideal of $R$. Moreover, if $R$ is Laskerian, then $S_{\p}(I)$ is primary and $\p$ is a minimal prime ideal over $I$;\label{n4} \item Let $R$ be an arithmetical ring. Let $N$ be a submodule of $M$ and $\p\in Supp(M/N)$. Then $S_{\p}(N)$ is a quasi-prime submodule of $M$. Therefore, every proper submodule $N$ that is saturated with respect to $\p$ is a quasi-prime submodule of $M$; \item Let $R$ be an arithmetical ring and let $M$ be a finitely generated $R$-module. If $N$ is a quasi-prime submodule of $M$ and $\p\in V^R(N:M)$, then $N_{\p}$ is a quasi-prime submodule of $M_{\p}$. \end{enumerate} \end{lem} \begin{proof} \begin{enumerate} \item[(1)-(3)] are clear. \item[(4)] Every proper ideal of $R$ is quasi-prime by Remark~\ref{6}. \item[(5)] One can obtain that $((N\cap M):_R M)=(N:_{R_S} M_S)\cap R$. Now, let $I:=(N:_{R_S} M_S)\cap R$. Then $IR_S=(N:_{R_S} M_S)$ is a quasi-prime ideal of $R_S$ by assumption. By Remark~\ref{6}, $I$ is a quasi-prime ideal of $R$, so $N\cap M$ is a quasi-prime submodule of $M$. \item[(6)] By assumption, $(N:_R M)$ is a quasi-prime ideal and since $R$ is Laskerian, $(N:_R M)$ is primary. By Remark~\ref{6} and \cite[p. 152, Proposition 8]{nor68}, $(N_S:_{R_S} M_S)=(N:_R M)R_S$ is a quasi-prime ideal of $R_S$. So, $N_S$ is a quasi-prime submodule of $M_S$. \item[(7)] Let $N$ be a primary submodule of $M$. Then $(N:_R M)$ is a primary ideal of $R$, so it is quasi-prime by Remark~\ref{6}. Hence, $N\in q\Spec(M)$. \item[(8)] $IR_{\p}$ is a proper ideal of $R_{\p}$. But $R_{\p}$ is a serial ring. Thus by Remark~\ref{6}, $IR_{\p}$ is a quasi-prime ideal of $R_{\p}$ and therefore $S_{\p}(I)=IR_{\p} \cap R$ is quasi-prime by Remark~\ref{6}. 
If $R$ is Laskerian, then by Remark~\ref{6}, $S_{\p}(I)$ is primary. Let $\q$ be a prime ideal of $R$ such that $I\subseteq \q \subseteq \p$. Then $$S_\p(I)\subseteq S_\p(\q) \subseteq S_\p(\p).$$ By definition, $S_\p(\q)=\q$ and $S_\p(\p)=\p$. Since $S_\p(I)$ is a $\p$-primary ideal of $R$, we have $$\p=\sqrt{S_\p(I)}\subseteq \sqrt{S_\p(\q)} \subseteq \sqrt{S_\p(\p)}=\p.$$ Therefore, $\q=\p$ and $\p$ is a minimal prime ideal over $I$. \item[(9)] Since $\p\in Supp(M/N)$, $N_{\p}\neq M_{\p}$. By assumption $R_{\p}$ is a serial ring. By part~(\ref{n1}), $N_{\p}$ is a quasi-prime submodule of $M_\p$. By part~(\ref{n2}), $S_{\p}(N)=N_{\p}\cap M$ is a quasi-prime submodule of $M$. The last assertion follows from \cite[Result 2]{lu03}. \item[(10)] We have $(N_\p : M_\p) = (N :M)_\p\subseteq \p R_\p$ and $R_\p$ is a serial ring. So, $N_\p$ is a quasi-prime submodule of $M_\p$. \end{enumerate} \end{proof} It is shown in \cite[Proposition 2.1]{az03} that $R$ is a field if every proper submodule of $M$ is a prime submodule of $M$ and $S_{(0)}(\textbf{0})\neq M$. In the following, we give an example showing that this is not the case for quasi-prime submodules. \begin{eg}\label{egzpi} (1) Every proper submodule of the $\mathbb{Z}$-module $M=\mathbb{Z}(p^\infty)$ is a quasi-prime submodule, where $p$ is a prime integer. Indeed, $(L:_{\mathbb{Z}}M)=(0)$ for every proper submodule $L$ of $M$ (see \cite[p.~3745]{lu95}). (2) Let $R$ be an integral domain which is not a field and $K$ be the field of quotients of $R$. Then every proper submodule of $K$ is a quasi-prime submodule. Since $xK=K$ for every nonzero element $x\in R$, $(N:K)=(0)$ for every proper submodule $N$ of $K$. \end{eg} \begin{thm}\label{thm2.16} Let $M$ be a finitely generated $R$-module and let $I$ be a primary quasi-prime ideal of $R$. If $S$ is a multiplicatively closed subset of $R$ such that $I\cap S=\emptyset$, then the map $N \mapsto N_S$ is a surjection from $q\Spec_I(M)$ to $q\Spec_{IR_S}(M_S)$. 
\end{thm} \begin{proof} Let $N\in q\Spec_I(M)$. Since $M$ is finitely generated and $I\cap S=\emptyset$ we have $IR_S=(N:_R M)R_S=(N_S:_{R_S} M_S)\neq R_S$. By Remark~\ref{6}, $IR_S$ is a quasi-prime ideal of $R_S$. Therefore, $N_S$ is a quasi-prime submodule of $M_S$. Let $L$ be a quasi-prime submodule of $M_S$ with $(L:_{R_S}M_S)=IR_S$. By Lemma~\ref{lemaux}(\ref{n2}), $L\cap M$ is a quasi-prime submodule of $M$. Moreover, using that $I$ is primary we have $$I=IR_S\cap R=(L:_{R_S} M_S)\cap R=((L\cap M):_R M).$$ So, $L\cap M\in q\Spec_I(M)$, and the map is surjective. \end{proof} \begin{cor} Let $M$ be a finitely generated $R$-module and $\p\in \Spec(R)$. \begin{enumerate} \item Let $I$ be a $\p$-primary quasi-prime ideal of $R$. Then the map $N \mapsto N_\p$ is a surjection from $q\Spec_I(M)$ to $q\Spec_{IR_{\p}}(M_{\p})$. \item The map $N \mapsto N_\p$ is a surjection from $q\Spec_{\p}(M)$ to $q\Spec_{\p R_{\p}}(M_{\p})=\Spec_{\p R_{\p}}(M_{\p})$. \item Let $N$ be a quasi-prime submodule of $M$ with $(N:M)=\p$. Then $S_{\p}(N)$ is a prime submodule minimal over $N$ and any other $\p$-prime submodule of $M$ containing $N$, must contain $S_{\p}(N)$. \end{enumerate} \end{cor} \begin{proof} (1) and (2) follow from Theorem~\ref{thm2.16}. To establish (3), note that by part~(2), $N_{\p}$ is a $\p R_{\p}$-prime submodule of $M_{\p}$ and by \cite[Proposition~1]{lu95}, $S_{\p}(N)=N_{\p}\cap M$ is a $\p$-prime submodule of $M$. Now the result follows from \cite[Result~3]{lu03}. \end{proof} \begin{defn} Let $M$ be an $R$-module. For a submodule $N$ of $M$ we define \begin{eqnarray*} D^M(N) &=& \{ L\in q\Spec(M)\mid (L:M)\supseteq (N:M)\}, \\ \Omega^M(N) &=& \{L\in q\Spec(M)\mid L\supseteq N\} . \end{eqnarray*} \end{defn} If there is no ambiguity we write $D(N)$ (resp. $\Omega (N)$) instead of $D^M(N)$ (resp. $\Omega^M(N)$). \begin{lem} Let $M$ be an $R$-module with $ q\Spec(M)=\emptyset$. Then $\p M = M$ for every maximal ideal $\p$ of $R$. 
On the other hand, if $IM = M$ for every $I\in D^R(\Ann(M))$, then $q\Spec(M)=\emptyset$. \end{lem} \begin{defn} When $ q\Spec(M) \neq\emptyset$, the map $\psi : q\Spec(M)\rightarrow q\Spec(R/\Ann(M))$ defined by $ \psi(L) = (L : M)/\Ann(M)$ for every $L\in q\Spec(M)$, will be called the natural map of $ q\Spec(M)$. An $R$-module $M$ is called \emph{quasi-primeful} if either $M =(\textbf{0})$ or $M\neq (\textbf{0})$ and has a surjective natural map. \end{defn} \begin{eg}\label{paralleleg} Let $\Sigma:=q\Spec(\mathbb{Z})\setminus \{(0)\}$. Consider the $\mathbb{Z}$-module $M=\bigoplus_{I\in \Sigma} \mathbb{Z}/{I}$. We will show that $M$ is a quasi-primeful $\mathbb{Z}$-module. Note that $(\textbf{0}:M)=\Ann(M)=(0)$. So, $(\textbf{0})\in q\Spec_{(0)}(M)$. On the other hand, for each nonzero quasi-prime ideal $I$ of $\mathbb{Z}$, we have $(IM:M)=I\in q\Spec(\mathbb{Z})$. This implies that $IM\in q\Spec_I(M)$. We conclude that $M$ is a quasi-primeful $\mathbb{Z}$-module. \end{eg} Let $Y$ be a subset of $ q\Spec(M)$ for an $R$-module $M$. We will denote the intersection of all elements in $Y$ by $\Im(Y)$. \begin{prop}\label{free} Let $F$ be a free $R$-module and $I$ be a quasi-prime ideal of $R$. Then \begin{enumerate} \item $IF$ is a quasi-prime submodule, i.e., $F$ is quasi-primeful; \item $IF=\Im(q\Spec_I(F))$; \item If $F$ has primary decompositions for its submodules, then $I$ is primary. \end{enumerate} \end{prop} \begin{proof} (1) Since $F$ is free we have $I=(IF:F)$, so that $IF$ is a quasi-prime submodule. (2) This is clear by (1). For (3), let $\cap_{i=1}^n Q_i$ be a primary decomposition of $IF$, where each $Q_i$ is a $\p_i$-primary submodule of $F$. Then $I=(IF:F)=\cap_{i=1}^n (Q_i:F)$. Since $I$ is quasi-prime, $I=(Q_j:F)$ for some $1\leq j\leq n$. Hence, $I$ is primary since $Q_j$ is a primary submodule. \end{proof} \begin{lem}\label{directsum} Let $M$, $M_1$, $M_2$ be $R$-modules such that $M = M_1\oplus M_2$ and $I\in D^R(\Ann(M))$. 
If $N\in q\Spec_I(M_1)$ (resp. $N\in q\Spec_I(M_2)$), then $N\oplus M_2\in q\Spec_I(M)$ (resp. $M_1\oplus N\in q\Spec_I(M)$). In particular, every direct sum of a finite number of quasi-primeful $R$-modules is quasi-primeful over $R$. \end{lem} \begin{proof} This is straightforward and we omit it. \end{proof} \begin{thm}\label{mapa} Let $M$ be an $R$-module. Then $M$ is quasi-primeful in each of the following cases: \begin{enumerate} \item $R$ is a $PID$ and $M$ is finitely generated; \item $R$ is a Dedekind domain and $M$ is faithfully flat; \item $R$ is Laskerian and $M$ is locally free. \end{enumerate} \end{thm} \begin{proof} (1) Let $N$ be a cyclic submodule of $M$ and $I\in D^R(\Ann(N))$. Then $N=R/\Ann(m)$ for some $m\in N$ and $(I/\Ann(N):N)=I$. Hence, $N$ is quasi-primeful. It is well known that a finitely generated module over a $PID$ is a finite direct sum of cyclic submodules. Hence, in the light of Lemma~\ref{directsum}, $M$ is quasi-primeful. (2) Let $J\in q\Spec(R)$. Since $M$ is faithfully flat, $JM\neq M$ and by Remark~\ref{6}, $J$ is primary. So, $JM$ is a primary submodule by \cite[Theorem 3]{lu84}, and $(JM:M)=J$ is a quasi-prime ideal of $R$, i.e., $JM$ is quasi-prime. (3) Let $I\in D(\Ann(M))$. Since $R$ is Laskerian, $\p:=\sqrt{I}$ is a prime ideal of $R$ and $IR_{\p}$ is a quasi-prime ideal of $R_{\p}$ by Remark~\ref{6}(6). Since $M_{\p}$ is a free $R_{\p}$-module, there exists a quasi-prime submodule $N$ of $M_{\p}$ such that $(N:_{R_{\p}}M_{\p})=IR_{\p}$ by Proposition~\ref{free}. Now, $(N\cap M:M)=IR_{\p}\cap R=I$ by Lemma~\ref{lemaux}. This implies that $M$ is quasi-primeful. \end{proof} We note that not every quasi-primeful module is finitely generated. For example, every (finite or infinite dimensional) vector space is quasi-primeful. \begin{rem}\label{multi}(See \cite[Theorem 3.1]{zb88}) Let $M$ be a faithful multiplication module over $R$. Then $M$ is finitely generated if and only if $\m M\neq M$ for every maximal ideal $\m$ of $R$. 
\end{rem} \begin{prop}\label{pva}\label{paracor} Let $M$ be a nonzero quasi-primeful $R$-module. \begin{enumerate} \item Let $I$ be a radical ideal of $R$. Then $(IM : M )= I$ if and only if $\Ann(M)\subseteq I$; \item $\p M\in q\Spec(M)$ for every $\p\in V(\Ann(M))$; \item $\p M\in \Spec_{\p}(M)$ for every $\p\in V(\Ann(M))\cap \Max(R)$; \item If $\dim(R)=0$, then $M$ is primeful; \item If $M$ is multiplication, then $M$ is finitely generated. \end{enumerate} \end{prop} \begin{proof} (1) The necessity is clear. For sufficiency, we note that $\Ann(M)\subseteq I=\cap_i~\p_i$, where $\p_i$ runs through $V^R (I)$, since $I$ is a radical ideal. On the other hand, $M$ is quasi-primeful and $\p_i\in D(\Ann(M))$, so there exists a quasi-prime submodule $L_i$ such that $(L_i:M)=\p_i$. Now, we obtain that $$I \subseteq (IM : M) = ((\cap_i ~\p_i)M : M)\subseteq \cap_i~(\p_iM : M)\subseteq \cap_i~(L_i: M) = \cap_i~ \p_i = I.$$ Thus $(IM : M) = I$. (2) and (3) follow from part~(1). For (4), let $\p\in V^R(\Ann(M))$. Since $\dim(R)=0$, $\p$ is maximal, so by part~(3), $\p M\neq M$ and by \cite[Result 3]{lu07}, $M$ is primeful. (5) Since $M$ is a faithful multiplication module over $R/\Ann(M) = \bar{R}$ and $\bar{\m}M\neq M$ for every $\bar{\m}\in \Max(\bar{R})$ by (3), $M$ is finitely generated over $\bar{R}$ by Remark~\ref{multi}. Hence, $M$ is finitely generated over $R$. \end{proof} \begin{cor} Let $M$ be an $R$-module. \begin{enumerate} \item Let $M$ be a quasi-primeful $R$-module. If $I$ is an ideal of $R$ contained in the Jacobson radical $\Rad(R)$ such that $IM = M$, then $M = (\textbf{0})$. \item Let $R$ be a $PID$ and $M$ be torsion-free. Then $M$ is quasi-primeful if $pM\neq M$ for every irreducible element $p\in R$. \item If $M$ is faithful quasi-primeful, then $M$ is flat if and only if $M$ is faithfully flat. \item If $M$ is projective and $R$ is Laskerian, then $M$ is quasi-primeful. \end{enumerate} \end{cor} \begin{proof} (1) Suppose that $M\neq(\textbf{0})$. Then $\Ann(M)\neq R$.
If $\m$ is any maximal ideal containing $\Ann(M)$, then $I \subseteq \Rad(R)\subseteq \m$ and $IM = M = \m M$ whence $(\m M : M) = R \neq \m$, a contradiction to Proposition~\ref{pva}. (2) If $pM\neq M$ for every irreducible element $p\in R$, then $M$ is faithfully flat and by Theorem~\ref{mapa}, $M$ is quasi-primeful. (3) The sufficiency is clear. Suppose that $M$ is flat. By Proposition~\ref{pva}, for every $\p\in \Max(R)\subseteq D(0)$, $\p M\neq M$. This implies that $M$ is faithfully flat. (4) Since every projective module is locally free, by Theorem~\ref{mapa}, $M$ is quasi-primeful. \end{proof} \begin{eg} The $\mathbb{Z}$-module $\mathbb{Q}$ is flat and faithful, but not faithfully flat. So, $\mathbb{Q}$ is not quasi-primeful. \end{eg} We next exhibit an elementary class of modules which are not quasi-primeful. If $R$ is a domain, then an $R$-module $M$ is \emph{divisible} if $M = rM$ for all nonzero elements $r\in R$. We note that every injective module is divisible. \begin{prop}\label{divi} Let $R$ be a domain which is not a field. Then every nonzero divisible $R$-module is not quasi-primeful. \end{prop} \begin{proof} Since $M$ is a nonzero divisible module over the domain $R$, we have $\Ann(M)=(0)$, and there exists a nonzero prime ideal $\p$ of $R$ because $R$ is not a field. Hence $\p\in V^R(\Ann(M))$ and $\p M = M$. Therefore, $M$ is not quasi-primeful by Proposition~\ref{paracor}. \end{proof} \begin{prop} Let $R$ be a domain over which every module is quasi-primeful. Then $R$ is a field. \end{prop} \begin{proof} Suppose that $R$ is not a field. Then its field $K$ of quotients is a nonzero divisible $R$-module. Hence, $K$ is not quasi-primeful over $R$ by Proposition~\ref{divi}, which contradicts the assumption on $R$. \end{proof} An $R$-module $M$ is called \emph{weak multiplication} if $\Spec(M) =\emptyset$ or for every prime submodule $N$ of $M$, we have $N = IM$, where $I$ is an ideal of $R$.
One can easily show that if $M$ is a weak multiplication module, then $N = (N : M)M$ for every prime submodule $N$ of $M$ (\cite{Abu95} and \cite{az03}). As is seen in \cite{Abu95}, $\mathbb{Q}$ is a weak multiplication $\mathbb{Z}$-module which is not a multiplication module. \begin{defn} An $R$-module $M$ is called \textit{quasi-prime-embedding} if the natural map $\psi : q\Spec(M)\rightarrow q\Spec(R/\Ann(M))$ is injective. \end{defn} We will show that every cyclic module is quasi-prime-embedding (Corollary~\ref{maincor}). Thus any ring $R$, as a module over itself, is quasi-prime-embedding. \begin{prop}\label{t0inj} The following statements are equivalent for any $R$-module $M$: \begin{enumerate} \item $M$ is quasi-prime-embedding; \item If $D (L) = D (N)$, then $L = N$, for any $L, N\in q\Spec(M)$; \item $|q\Spec_I(M)| \leq 1$ for every $I\in q\Spec(R)$. \end{enumerate} \end{prop} \begin{proof} $(1)\Rightarrow (2)$ Let $D(L) = D(N)$. Then $(L : M) = (N : M )$. Now by (1), $L = N$. $(2)\Rightarrow (3)$ Suppose that $L, N\in q\Spec_I(M)$ for some $I\in q\Spec(R)$. Hence $(L : M) = (N : M )= I$ and so, $D(L) = D(N)$. Thus, $L = N$ by (2). $(3)\Rightarrow (1)$ Let $\bar{I}:=\psi(L)=\psi(N)$. Then $I=(L : M) = (N : M )$. By (3), $L = N$, and so $\psi$ is injective. \end{proof} \begin{cor}\label{maincor} Consider the following statements for an $R$-module $M$: \begin{enumerate} \item $M$ is multiplication; \item $M$ is quasi-prime-embedding; \item $M$ is weak multiplication; \item $|\Spec_{\p}(M)|\leq 1$ for every prime ideal $\p$ of $R$; \item $M/\p M$ is cyclic for every maximal ideal $\p$ of $R$. \end{enumerate} Then $(1)\Rightarrow (2)\Rightarrow (3)\Rightarrow (4)\Rightarrow (5)$. Further, if $M$ is finitely generated, then $(5)\Rightarrow (1)$. \end{cor} \begin{proof} $(1)\Rightarrow (2)$ Let $D(N)=D(L)$ for $N$, $L\in q\Spec(M)$. Then $(N:M)=(L:M)$ and since $M$ is multiplication, $N=L$. Therefore, (2) follows from Proposition~\ref{t0inj}.
$(2)\Rightarrow (3)$ Let $P$ be a $\p$-prime submodule of $M$. By Lemma~\ref{lemaux}, $(P:M)M\in q\Spec_{\p}(M)$. Combining this fact with Proposition~\ref{t0inj}, we obtain that $P=(P:M)M$. This yields that $M$ is weak multiplication. $(3)\Rightarrow (4)$ The case $\Spec_{\p}(M)=\emptyset$ is trivially true. Let $P$, $Q\in \Spec_{\p}(M)$ for some prime ideal $\p$ of $R$. Then $(P:M)=(Q:M)$. Therefore $P=(P:M)M=(Q:M)M=Q$. The implication $(4)\Rightarrow (5)$ and the last statement are true due to \cite[Theorem~3.5]{mms97}. \end{proof} An $R$-module $M$ is called \emph{locally cyclic} if $M_{\p}$ is a cyclic module over the local ring $R_{\p}$ for every prime ideal $\p$ of $R$. Multiplication modules are locally cyclic (see \cite[Theorem 2.2]{zb88}). \begin{thm}\label{freet0} Let $M$ be an $R$-module and let $S$ be a multiplicatively closed subset of $R$. \begin{enumerate} \item If $M$ is Laskerian quasi-prime-embedding, then every quasi-prime submodule of $M$ is primary (see \cite[Theorem 2.1]{az08}). \item Let $R$ be a serial ring. Then $M$ is multiplication if and only if $M$ is quasi-prime-embedding.\label{serial} \item If $M$ is quasi-prime-embedding, then $S^{-1}M$ is also a quasi-prime-embedding $S^{-1}R$-module.\label{t0local} \item If $M$ is free, then $M$ is quasi-prime-embedding if and only if $M$ is cyclic.\label{t0fr} \item If $M$ is projective quasi-prime-embedding, then $M$ is locally cyclic. \item If $R$ is an arithmetical ring and $M$ is quasi-prime-embedding, then $M$ is locally cyclic.\label{arith} \item Let $R$ be a semi-local arithmetical ring. Then $M$ is cyclic if and only if $M$ is quasi-prime-embedding. \item A finitely generated module $M$ is locally cyclic if and only if $M$ is multiplication if and only if $M$ is quasi-prime-embedding. \item Let $R$ be a Dedekind domain and $M$ be a non-faithful quasi-prime-embedding $R$-module. Then $M$ is cyclic.
\end{enumerate} \end{thm} \begin{proof} (1) Let $P$ be a quasi-prime submodule of $M$ and $\bigcap_{i=1}^m N_i$ be a primary decomposition for $P$. Since $P$ is quasi-prime, $$(N_j:M)\subseteq (P:M)=\bigcap_{i=1}^m (N_i:M)\subseteq (N_j:M)$$ for some $1\leq j\leq m$. Hence, $N_j$ is a quasi-prime submodule and by Proposition~\ref{t0inj}, $P=N_j$.\\ (2) The necessity follows from Corollary~\ref{maincor}. For sufficiency, let $N$ be a proper submodule of $M$. By Lemma~\ref{lemaux}, $N$ and $(N:M)M$ are quasi-prime submodules of $M$. Therefore, $N=(N:M)M$ by Proposition~\ref{t0inj}, and so $M$ is multiplication.\\ (3) Use Lemma~\ref{lemaux} and Proposition~\ref{t0inj}.\\ (4) If $M$ is cyclic, then $M$ is quasi-prime-embedding by Corollary~\ref{maincor}. Conversely, assume that $M$ is quasi-prime-embedding and $M$ is not cyclic. Hence, $M=\oplus_{i\in I} R$, where $|I|>1$. Let $\p\in q\Spec(R)$ and $\alpha, \beta$ be two distinct elements of $I$. It is easy to see that $$N=\p\oplus(\oplus_{\substack{i\in I\\i\neq \alpha}} R) \quad \text{ and }\quad L=\p\oplus(\oplus_{\substack{i\in I\\i\neq \beta}} R)$$ are two distinct quasi-prime submodules of $M$ with $(N:M)=(L:M)=\p$. By Proposition~\ref{t0inj}, $N=L$, a contradiction.\\ (5) Let $\p\in \Spec(R)$. Then by (\ref{t0local}), $M_{\p}$ is quasi-prime-embedding. On the other hand, $M_{\p}$ is a free $R_{\p}$-module. Hence, $M_{\p}$ is a cyclic $R_{\p}$-module by (\ref{t0fr}).\\ (6) For each $\p\in \Spec(R)$, $R_{\p}$ is a serial ring by \cite[Theorem~1]{Jen66}, and $M_{\p}$ is quasi-prime-embedding by (\ref{t0local}). By (\ref{serial}), $M_{\p}$ is a multiplication $R_{\p}$-module. Therefore, $M_{\p}$ is cyclic, since $R_{\p}$ is a quasi-local ring.\\ (7) Let $\m_1, \ldots , \m_t$ be the maximal ideals of $R$. By (\ref{arith}), $M_{\m_i}$ is a cyclic $R_{\m_i}$-module for each $i$. Hence, $M$ is cyclic by \cite[Lemma 3]{Bar81}.
The converse follows from Corollary~\ref{maincor}.\\ (8) Use \cite[Proposition 5]{Bar81} and Corollary~\ref{maincor}.\\ (9) By assumption there exist only finitely many prime (maximal) ideals containing $\Ann(M)$. So, by (\ref{arith}) and \cite[Lemma 3]{Bar81}, $M$ is cyclic. \end{proof} A submodule $S$ of an $R$-module $M$ will be called \emph{semiprime} if $S$ is an intersection of prime submodules. A prime submodule $K$ of $M$ is said to be \emph{extraordinary} if whenever $N$ and $L$ are semiprime submodules of $M$ with $N\cap L \subseteq K$, then $N\subseteq K$ or $L\subseteq K$. An $R$-module $M$ is said to be a \emph{top module} if every prime submodule of $M$ is extraordinary. Every multiplication or locally cyclic module is a top module (see \cite{mms97}). Corollary~\ref{maincor} and Theorem~\ref{freet0} are of particular interest to us because of their close relationship with top modules. We now examine the relations between parts (1)--(4) of Corollary~\ref{maincor} and top modules. By \cite[Theorem~3.5]{mms97}, every multiplication module is top. So we consider part (2) of Corollary~\ref{maincor}. By Theorem~\ref{freet0}, every projective quasi-prime-embedding module and every quasi-prime-embedding module over an arithmetical ring is locally cyclic, and hence top by \cite[Theorem~4.1]{mms97}. In the next theorem we will show the relationship between parts (3) and (4) of Corollary~\ref{maincor} and top modules. \begin{thm}\label{maintop} Let $R$ be a one-dimensional Noetherian domain and let $M$ be a nonzero $R$-module. Then $M$ is a top module in each of the following cases: \begin{enumerate} \item $M$ is weak multiplication. \item For every prime ideal $\p\in \Spec(R)$, $|\Spec_{\p}(M)|\leq 1$ and $S_{(0)}(\textbf{0})\subseteq \rad(\textbf{0})$. \end{enumerate} \end{thm} \begin{proof} \begin{enumerate} \item Let $P$ be a $\p$-prime submodule of $M$ and let $N$ and $L$ be non-zero semiprime submodules of $M$ such that $N \cap L \subseteq P$.
It is enough to show that $N \subseteq P$ or $L\subseteq P$. If $(N:M)$ or $(L:M)$ $\not\subseteq (P:M)$, then $L\subseteq P$ or $N\subseteq P$, respectively, by \cite[Lemma~2]{lu89}. Hence, we need only consider the case where $(L:M)\subseteq (P:M)$ and $(N:M)\subseteq (P:M)$. Now, we are going to show that if $N\not\subseteq P$, then $L \subseteq P$. For that, choose $x \in N \backslash P$. Then $x\not\in L$, since otherwise $x\in N\cap L\subseteq P$. If $(L:x)=(0)$, then $x+L \not\in S_{(0)}(\textbf{0}_{M/L})$, so $S_{(0)}(\textbf{0}_{M/L})\neq M/L$. Since $M$ is weak multiplication, it follows that $M/L$ is also a weak multiplication module. But every weak multiplication module over an integral domain is either torsion or torsion-free (see \cite[Proposition 3]{az03}). Hence $M/L$ is a torsion-free $R$-module. On the other hand, we have $(L:M) \subseteq (L:x)=(0)$. Thus $L \in \Spec_{(0)}M$ by \cite[Theorem 1]{lu84}. Therefore $L=(0)M=(\textbf{0}) \subseteq P$ as desired. Now let $(L:x)\neq (0)$ and $L=\bigcap_{\lambda \in \Lambda}P_{\lambda}$, where $P_{\lambda}$ are $\p_\lambda$-prime submodules of $M$ for each $\lambda \in \Lambda$. By assumption $P_{\lambda}=\p_{\lambda}M$. This implies that $$(L:x)=(\bigcap_{\lambda\in \Lambda}\p_{\lambda}M:x)=\bigcap_{\lambda\in \Lambda}(\p_{\lambda}M:x).$$ Let $\Lambda'$ be the subset of $\Lambda$ consisting of those $\lambda$ for which $x\not\in \p_{\lambda}M$. Since $x\not\in L$, we have $\Lambda'\neq\emptyset$. Now by \cite[Lemma~2.12]{mms02} and since $\dim(R)=1$, $$(0)\neq(L:x)=\bigcap_{\lambda\in \Lambda'}(\p_{\lambda}M:x)=\bigcap_{\lambda\in \Lambda'}\p_{\lambda}\subseteq(P:M).$$ Therefore, $(L:x)$ is a nonzero ideal of $R$, and so it is contained in only finitely many prime ideals by \cite[Proposition 9.1]{ati69}. Thus, $\Lambda'$ is a finite set. It follows that there exists $\lambda \in\Lambda'$ such that $\p_{\lambda}\subseteq \p$. This yields $L\subseteq \p_{\lambda}M\subseteq \p M=P$ as desired. \item If $S_{(0)}(\textbf{0})=M$, then $\rad(\textbf{0})=M$, i.e., $\Spec(M)=\emptyset$, and so we are done.
Therefore, we assume that $S_{(0)}(\textbf{0})\neq M$. In this case $S_{(0)}(\textbf{0})$ is a $(0)$-prime submodule of $M$ by \cite[Lemma 4.5]{lu03}. We are going to show that every prime submodule of $M$ is extraordinary. Let $P$ be a prime submodule of $M$ and let $N$ and $L$ be two nonzero semiprime submodules of $M$ such that $N\cap L \subseteq P$. In view of the above arguments, we take $x\in N\setminus P$. If $(L:x)=(0)$, then $(L:M)=(0)$ and by \cite[Result 1]{lu03}, $$S_{(0)}(\textbf{0}_{M/L})=S_{(0)}(\textbf{0})/L\subseteq \rad(\textbf{0})/L=(\textbf{0}).$$ Therefore, $M/L$ is a torsion-free $R$-module and $L$ is a $(0)$-prime submodule of $M$ by \cite[Theorem 1]{lu84}. By the assumption of this part, $L=S_{(0)}(\textbf{0})\subseteq \rad(\textbf{0})\subseteq P$. Let $(L:x)\neq(0)$ and let $\{P_{\lambda}\}_{\lambda\in \Lambda}$ be a collection of $\p_{\lambda}$-prime submodules of $M$ such that $L=\bigcap_{\lambda\in \Lambda}P_{\lambda}$. If $\p_k=(0)$ for some $k\in \Lambda$, then $(P_k:M)=(S_{(0)}(\textbf{0}):M)=(0)$. Hence, $L\subseteq P_k=S_{(0)}(\textbf{0})\subseteq \rad(\textbf{0})\subseteq P$. Therefore, we may assume that $\p_\lambda\neq (0)$ for each $\lambda \in \Lambda$. Since $\dim(R)=1$, we have $\p_{\lambda}=(\p_{\lambda}M:M)=(P_{\lambda}:M)$. Therefore, $\p_{\lambda}M$ is a $\p_{\lambda}$-prime submodule of $M$ by \cite[Proposition 2]{lu84}. By the assumption of this part, $P_{\lambda}=\p_{\lambda}M$. This implies that $$(L:x)=(\bigcap_{\lambda\in \Lambda}\p_{\lambda}M:x)=\bigcap_{\lambda\in \Lambda}(\p_{\lambda}M:x).$$ Let $\Lambda'$ be the subset of $\Lambda$ consisting of those $\lambda$ for which $x\not\in \p_{\lambda}M$. Since $x\not\in L$, we have $\Lambda'\neq\emptyset$.
Now, by \cite[Lemma~2.12]{mms02}, we have $$(0)\neq(L:x)=\bigcap_{\lambda\in \Lambda'}(\p_{\lambda}M:x)=\bigcap_{\lambda\in \Lambda'}\p_{\lambda}\subseteq(P:M).$$ By \cite[Proposition 9.1]{ati69}, $(L:x)$ is contained in only finitely many prime ideals, i.e., $\Lambda'$ is finite. So, there exists some $\lambda\in \Lambda'$ such that $\p_{\lambda}\subseteq (P:M)$. Therefore, $L\subseteq P$. \end{enumerate} \end{proof} The next example shows that part~(2) of Theorem~\ref{maintop} is not a consequence of part~(1). \begin{eg} Consider the $\mathbb{Z}$-module $M=\mathbb{Z}(p^{\infty})\oplus \mathbb{Z}$. It is easy to see that for every prime ideal $\p\in \Spec(\mathbb{Z})$, $|\Spec_{\p}(M)|\leq 1$ and $S_{(0)}(\textbf{0})= \rad(\textbf{0})$. By Theorem~\ref{maintop}, $M$ is a top module. We note that $M$ is not weak multiplication. \end{eg} \section{SOME TOPOLOGICAL PROPERTIES OF $ q\Spec(M)$} Let $M$ be an $R$-module. Then for submodules $N$, $L$ and $N_i$ of $M$ we have \begin{enumerate} \item $D(\textbf{0})= q\Spec(M)$ and $D(M) =\emptyset$, \item $\bigcap_{i\in I}D (N_i) = D (\sum_{i\in I}(N_i : M)M)$, \item $D (N)\cup D (L) = D (N \cap L)$. \end{enumerate} Now, we put $$\zeta(M)=\{ \, D(N) \mid N\leq M \, \}.$$ From (1), (2) and (3) above, it is evident that for any module $M$ there exists a topology, $\tau$ say, on $ q\Spec(M)$ having $\zeta(M)$ as the family of all closed sets. The topology $\tau$ is called the \emph{developed Zariski topology on} $ q\Spec(M)$. For the remainder of this paper, for every ideal $I\in D(\Ann(M))$, $\overline{R}$ and $\overline{I}$ will denote $R/\Ann(M)$ and $I/\Ann(M)$, respectively. Let $Y$ be a subset of $ q\Spec(M)$ for an $R$-module $M$. We will denote the intersection of all elements in $Y$ by $\Im(Y )$ and the closure of $Y$ in $ q\Spec(M)$ with respect to the developed Zariski topology by $Cl(Y)$. The proof of the next lemma is easy.
\begin{lem}\label{lemtop} Let $I$ be a proper ideal of $R$ and $M$ be an $R$-module with submodules $N$ and $L$. Then we have \begin{enumerate} \item If $(N : M) = (L : M)$, then $D(N)=D(L)$. The converse is also true if both $N$ and $L$ are quasi-prime submodules of $M$; \item $D(N)=\bigcup_{I\in D^R(N:M)} q\Spec_I(M)$; \item $D(N)=D((N:M)M)=\Omega^M((N:M)M)$; \item Let $Y$ be a subset of $q\Spec(M)$. Then $Y\subseteq D(N)$ if and only if $(N:M)\subseteq (\Im(Y):M)$. \end{enumerate} \end{lem} \begin{prop}\label{cont}\label{opcl} Let $M$ be an $R$-module and $\psi : q\Spec(M)\rightarrow q\Spec(R/\Ann(M))$ be the natural map. \begin{enumerate} \item The natural map $\psi$ is continuous with respect to the developed Zariski topology. \item If $M$ is quasi-primeful, then $\psi$ is both closed and open; more precisely, for every submodule $N$ of $M$, $ \psi(D^M(N)) =D^{\overline{R}}(\overline{(N : M)})$ and $ \psi( q\Spec(M)-D^M(N)) = q\Spec(\bar{R})-D^{\overline{R}}(\overline{(N : M)})$. \item $\psi$ is bijective if and only if it is a homeomorphism. \end{enumerate} \end{prop} \begin{proof} (1) Let $I$ be an ideal of $R$ containing $\Ann(M)$ and let $L\in \psi^{-1}(D^{\overline{R}}(\bar{I}))$. There exists some $\bar{J}\in D^{\overline{R}}(\bar{I})$ such that $ \psi(L)=\bar{J}$. Hence, $J=(L:M)\supseteq I$ and $L\in D^M (IM)$. Now, let $K\in D^M (IM)$. Then $(K:M)\supseteq(IM:M)\supseteq I$, and so $K\in \psi^{-1}(D^{\overline{R}}(\bar{I}))$. Consequently, $\psi^{-1}(D^{\overline{R}}(\bar{I})) = D^M(IM)$, i.e., $\psi$ is continuous. (2) By part~(1), $\psi$ is a continuous map such that $\psi^{-1}(D^{\overline{R}}(\bar{I})) = D^M (IM)$ for every ideal $I$ of $R$ containing $\Ann(M)$. Hence, for every submodule $N$ of $M$, $\psi^{-1}(D^{\overline{R}}(\overline{(N : M)})) = D^M((N : M)M) = D^M (N)$. Since the natural map $\psi$ is surjective, $ \psi(D^M(N)) =\psi\circ \psi^{-1}(D^{\overline{R}}(\overline{(N : M)})) = D^{\overline{R}}(\overline{(N : M)})$.
Similarly, $ \psi( q\Spec(M)-D^M(N)) = q\Spec(\bar{R})-D^{\overline{R}}(\overline{(N : M)})$. (3) This follows from (1) and (2). \end{proof} \begin{thm} Let $M$ be a quasi-primeful $R$-module. Then the following statements are equivalent: \begin{enumerate} \item $ q\Spec(M)$ is connected; \item $ q\Spec(\bar{R})$ is connected; \item The ring $\bar{R}$ contains no idempotent other than $\bar{0}$ and $\bar{1}$. \end{enumerate} \noindent Consequently, if $R$ is a quasi-local ring, then both $q\Spec(M)$ and $ q\Spec(\bar{R})$ are connected. \end{thm} \begin{proof} $(1) \Rightarrow (2)$ follows from the fact that $\psi$ is surjective and continuous. For $(2) \Rightarrow (1)$, we assume that $ q\Spec(\bar{R})$ is connected. If $ q\Spec(M)$ is disconnected, then $ q\Spec(M)$ must contain a non-empty proper subset $Y$ that is both open and closed. Accordingly, $ \psi(Y)$ is a non-empty subset of $ q\Spec(\bar{R})$ that is both open and closed by Proposition \ref{opcl}. To complete the proof, it suffices to show that $ \psi(Y)$ is a proper subset of $ q\Spec(\bar{R})$ so that $ q\Spec(\bar{R})$ will be disconnected. Since $Y$ is open, $Y = q\Spec(M)- D^M(N)$ for some $N\leq M$ whence $ \psi(Y) = q\Spec(\bar{R})-D^{\overline{R}}(\overline{(N : M)})$ by Proposition \ref{opcl} again. Therefore, if $ \psi(Y) = q\Spec(\bar{R})$, then $D^{\overline{R}}(\overline{(N : M)})=\emptyset$, and so $\overline{(N : M)}=\bar{R}$, i.e., $N = M$. It follows that $Y = q\Spec(M)-D^M(N) = q\Spec(M)-D^M(M)= q\Spec(M)$, which is impossible. Thus $ \psi(Y)$ is a proper subset of $ q\Spec(\bar{R})$. For $(2) \Leftrightarrow (3)$, it suffices to show that $q\Spec(R)$ is disconnected if and only if $R$ has an idempotent $e \neq 0 ,1$. Suppose that $e \neq 0 ,1$ is an idempotent in $R$. Hence $R=Re\oplus R(1-e)$. It follows that $q\Spec(R)=(q\Spec(R)\setminus D^R(Re))\cup (q\Spec(R)\setminus D^R(R(1-e)))$ and $\emptyset=(q\Spec(R)\setminus D^R(Re))\cap (q\Spec(R)\setminus D^R(R(1-e)))$.
This implies that $q\Spec(R)$ is disconnected. Now, we assume that $q\Spec(R)$ is disconnected. Thus $q\Spec(R)=D^R(I)\cup D^R(J)$, where $D^R(I)$ and $D^R(J)$ are disjoint non-empty closed sets for some ideals $I$ and $J$ of $R$. We have that $q\Spec(R)=D^R(I\cap J)$ and so, $I\cap J\subseteq \Im(q\Spec(R))$. Also, $\emptyset=D^R(I)\cap D^R(J)=D^R(I+ J)$. This implies that $I+J=R$. There exist $a\in I$ and $b\in J$ such that $a+b=1$. On the other hand, $$ab\in IJ\subseteq I\cap J\subseteq \Im(q\Spec(R))\subseteq\sqrt{(0)}.$$ So, $(ab)^n=0$ for some $n\in \mathbb{N}$. We have $1=(a+b)^n=a^n+b^n+abx$ for some $x\in R$. Since $abx\in \sqrt{(0)}\subseteq \Rad(R)$, $a^n+b^n$ is a unit in $R$. Let $u$ be the inverse of $a^n+b^n$. Note that $ua^nb^n=0$. Thus $$ua^n=ua^n(u(a^n+b^n))=u^2a^{2n}+u^2a^nb^n=(ua^n)^2.$$ Similarly, $ub^n=(ub^n)^2$. If $ua^n=0$, then $a^n=0$, and so $1=b(b^{n-1}+ax)\in J$, which is a contradiction because $D^R(J)\neq \emptyset$. Consequently, $ua^n$ and $ub^n$ are nonzero. On the other hand, if $ua^n=ub^n=1$, then $1=u(a^n+b^n)=ua^n+ub^n=1+1$, which is a contradiction. We conclude that $ua^n$ or $ub^n$ is an idempotent different from $0$ and $1$. \end{proof} \begin{prop}\label{Cl}\label{Cl2}\label{t1} Let $M$ be an $R$-module, $Y\subseteq q\Spec(M)$ and let $L\in q\Spec_I(M)$. \begin{enumerate} \item $D(\Im(Y))=Cl(Y)$. In particular, $Cl(\{L\})=D (L)$; \item Let $M$ be a semiprimitive (resp. reduced) $R$-module and $\Max(M)$ (resp. $\Spec(M)$) be a non-empty connected subspace of $q\Spec(M)$. Then $q\Spec(M)$ is connected; \item If $(\textbf{0})\in Y$, then $Y$ is dense in $q\Spec(M)$. \item The set $\{L\}$ is closed in $ q\Spec(M)$ if and only if \begin{enumerate} \item $I$ is a maximal element in $\{(N:M) | N\in q\Spec(M)\}$, and \item $q\Spec_I(M)=\{L\}$. \end{enumerate} \item If $\{L\}$ is closed in $q\Spec(M)$, then $L$ is a maximal element of $q\Spec(M)$ and $q\Spec_I(M)=\{L\}$. \item $M$ is quasi-prime-embedding if and only if $q\Spec(M)$ is a $T_0$-space.
\item $q\Spec(M)$ is a $T_1$-space if and only if $ q\Spec(M)$ is a $T_0$-space and for every element $L\in q\Spec(M)$, $(L:M)$ is a maximal element in $\{(N:M)~|~N\in q\Spec(M)\}.$ \item If $q\Spec(M)$ is a $T_1$-space, then $q\Spec(M)$ is a $T_0$-space and every quasi-prime submodule is a maximal element of $q\Spec(M)$. The converse is also true when $M$ is finitely generated. \item Let $(\textbf{0})\in q\Spec(M)$. Then $q\Spec(M)$ is a $T_1$-space if and only if $(\textbf{0})$ is the only quasi-prime submodule of $M$. \end{enumerate} \end{prop} \begin{proof} \begin{enumerate} \item Clearly, $Y\subseteq D(\Im(Y ))$. Next, let $D(N)$ be any closed subset of $q\Spec(M)$ containing $Y$. Then $(L : M) \supseteq (N : M)$ for every $L\in Y$ so that $(\Im(Y ) : M)\supseteq (N : M)$. Hence, for every $Q\in D(\Im(Y ))$, $(Q : M)\supseteq (\Im(Y ) : M) \supseteq (N : M)$, namely $D (\Im(Y )) \subseteq D (N)$. This proves that $D (\Im(Y ))$ is the smallest closed subset of $ q\Spec(M)$ containing $Y$, hence $D (\Im(Y )) =Cl(Y )$. \item Let $M$ be reduced. Then by (1), we have $Cl(\Spec(M))=D(\Im(\Spec(M)))=D(\textbf{0})=q\Spec(M)$. Therefore, $q\Spec(M)$ is connected by \cite[p.150, Theorem 23.4]{mk99}. A similar proof works for semiprimitive modules. \item This is clear by (1). \item Suppose that $\{L\}$ is closed. Then $\{L\}=D(L)$ by (1). Let $N\in q\Spec(M)$ such that $(L:M)\subseteq (N:M)$. Hence, $N\in D(L)= \{L\}$, and so $q\Spec_I(M)=\{L\}$, where $I=(L:M)$. Conversely, assume that (a) and (b) hold. Let $N\in Cl(\{L\})$. Hence, $(N:M)\supseteq (L:M)$ by~(1). By (a), $(N:M)=(L:M)$. So, $L=N$ by (b). This yields $Cl(\{L\})=\{L\}$. \item Let $P\in q\Spec(M)$ such that $L\subseteq P$. Then $(L:M)\subseteq (P:M)$, i.e., $P\in D(L)=Cl(\{L\})=\{L\}$. Hence, $P=L$, and so $L$ is a maximal element of $q\Spec(M)$. \item We recall that a topological space is $T_0$ if and only if the closures of distinct points are distinct.
Now, the result follows from part~(1) and Proposition~\ref{t0inj}. \item We recall that a topological space is $T_1$ if and only if every singleton subset is closed. The result follows from (4), (5) and (6). \item Since $q\Spec(M)$ is a $T_1$-space, it is a $T_0$-space and every singleton subset of it is closed. Hence, every quasi-prime submodule is a maximal element of $q\Spec(M)$ by (5). For the converse, suppose that $M$ is finitely generated. Then every quasi-prime submodule, being a maximal element of $q\Spec(M)$ and contained in a maximal submodule, is a maximal submodule of $M$. Let $N\in q\Spec(M)$ such that $N\in Cl(\{L\})=D(L)$. Since $L$ is maximal, $(L:M)=(N:M)$. By Proposition~\ref{t0inj}, $N=L$. Hence, every singleton subset of $q\Spec(M)$ is closed. So, $q\Spec(M)$ is a $T_1$-space. \item Use part (8). \end{enumerate} \end{proof} \begin{eg} Consider the $\mathbb{Z}$-module $M=\bigoplus_{p} \mathbb{Z}/{p\mathbb{Z}}$, where $p$ runs through the set of all prime integers. We will show that $q\Spec(M)$ is not a $T_1$-space. Note that $(\textbf{0}:M)=\Ann(M)=(0)$. Hence, $(\textbf{0})\in q\Spec(M)$. On the other hand, for each quasi-prime ideal $I$ of $\mathbb{Z}$, we have $(IM:M)=\sqrt{I}\in q\Spec(\mathbb{Z})$. So, $q\Spec(M)$ is infinite, and since $(\textbf{0})\in q\Spec(M)$, $q\Spec(M)$ is not a $T_1$-space by Proposition~\ref{Cl}. \end{eg} \begin{rem}\label{remmax} Let $M$ be a finitely generated (or co-semisimple) $R$-module. Since every quasi-prime submodule is contained in a maximal submodule, $q\Spec(M)$ is a $T_1$-space if and only if $q\Spec(M)$ is a $T_0$-space and $q\Spec(M)=\Max(M)$. Since $q\Spec(R)$ is always a $T_0$-space (see \cite[Theorem~4.1]{az08}), we have that $q\Spec(R)$ is a $T_1$-space if and only if $q\Spec(R)=\Max(R)$. If $R$ is absolutely flat, then by \cite[Theorem~2.1]{az08}, $q\Spec(R)=\Spec(R)=\Max(R)$. Therefore, $q\Spec(R)$ is a $T_1$-space. It is clear that if $M$ is free, then $q\Spec(M)$ is a $T_1$-space if and only if $M$ is isomorphic to $R$ and $q\Spec(R)$ is a $T_1$-space. \end{rem} \begin{thm}\label{t1multi} Let $M$ be a finitely generated $R$-module.
The following statements are equivalent: \begin{enumerate} \item $q\Spec(M)$ is a $T_1$-space. \item $M$ is a multiplication module and $q\Spec(M)=\Max(M)$. \end{enumerate} \end{thm} \begin{proof} Use Corollary~\ref{maincor}, Remark~\ref{remmax} and Proposition \ref{Cl}(6). \end{proof} \begin{cor} Let $M$ be an $R$-module. \begin{enumerate} \item Let $R$ be an integral domain. If $q\Spec(R)$ is a $T_1$-space, then $R$ is a field. \item If $M$ is Noetherian and $q\Spec(M)$ is a $T_1$-space, then $M$ is Artinian cyclic. \end{enumerate} \end{cor} \begin{proof} (1) By Remark~\ref{remmax}, we have $q\Spec(R)=\Max(R)$. But $(0)\in q\Spec(R)$ by assumption. Hence, $R$ is a field. (2) By Theorem~\ref{t1multi}, $M$ is multiplication and every prime submodule of $M$ is maximal. By \cite[Theorem 4.9]{beh06}, $M$ is Artinian. The result follows from \cite[Corollary 2.9]{zb88}. \end{proof} A topological space $X$ is said to be \emph{irreducible} if $X \neq \emptyset$ and if every pair of non-empty open sets in $X$ intersect, or equivalently if every non-empty open set is dense in $X$. Equivalently, a topological space $X$ is irreducible if for any decomposition $X=A_1\cup A_2$ into closed subsets $A_1$ and $A_2$ of $X$, we have $A_1 = X$ or $A_2 = X$. A subset $Y$ of $X$ is irreducible if it is irreducible as a subspace of $X$. An irreducible component of a topological space $X$ is a maximal irreducible subset of $X$. Both a singleton subset of $q\Spec(M)$ and its closure are irreducible. Now, applying (1) of Proposition \ref{Cl2}, we obtain the following. \begin{cor}\label{cor4} $D(L)$ is an irreducible closed subset of $ q\Spec(M)$ for every quasi-prime submodule $L$ of $M$. \end{cor} \begin{thm}\label{irrsub} Let $M$ be an $R$-module and $Y \subseteq q\Spec(M)$. Then $\Im(Y)$ is a quasi-prime submodule of $M$ if and only if $Y$ is an irreducible space. \end{thm} \begin{proof} Let $\Im(Y)$ be a quasi-prime submodule of $M$.
Let $Y\subseteq Y_1\cup Y_2$, where $Y_1$ and $Y_2$ are two closed subsets of $q\Spec(M)$. Then there are submodules $N$ and $L$ of $M$ such that $Y_1=D(N)$ and $Y_2=D(L)$. Hence, $Y\subseteq D(N)\cup D(L)=D(N\cap L)$. By Lemma~\ref{lemtop}, $((N\cap L):M)\subseteq (\Im(Y):M)$. Since $(\Im(Y):M)$ is a quasi-prime ideal, either $(N:M)\subseteq (\Im(Y):M)$ or $(L:M)\subseteq (\Im(Y):M)$. By Lemma~\ref{lemtop}, either $Y\subseteq D(N)=Y_1$ or $Y\subseteq D(L)=Y_2$. This yields that $Y$ is irreducible. Conversely, assume that $Y$ is an irreducible space. Let $I$ and $J$ be two ideals of $R$ such that $I\cap J \subseteq (\Im(Y):M)$. Suppose for contradiction that $I\not \subseteq (\Im(Y):M)$ and $J\not\subseteq (\Im(Y):M)$. Then $(IM:M)\not \subseteq (\Im(Y):M)$ and $(JM:M)\not \subseteq (\Im(Y):M)$. By Lemma~\ref{lemtop}, $Y\not \subseteq D(IM)$, $Y\not \subseteq D(JM)$. Let $P\in Y$. Then $(P:M)\supseteq (\Im(Y):M)\supseteq I\cap J$. Since $(P:M)$ is a quasi-prime ideal, either $I\subseteq (P:M)$ or $J\subseteq (P:M)$; hence either $IM\subseteq (P:M)M$ or $JM\subseteq (P:M)M$. So, by Lemma~\ref{lemtop}, either $D(P)\subseteq D(IM)$ or $D(P)\subseteq D(JM)$. Therefore, $Y\subseteq D(IM)\cup D(JM)$, which contradicts the irreducibility of $Y$. \end{proof} \begin{eg} Consider $M=\mathbb{Z}/p\mathbb{Z}\oplus \mathbb{Z}$ as a $\mathbb{Z}$-module, where $p$ is a prime integer. It is easy to see that $L=\mathbb{Z}/p\mathbb{Z}\oplus (0)$ and $N=(\bar{0})\oplus p\mathbb{Z}$ are prime submodules of $M$. We have $\Im(q\Spec(M))\subseteq L\cap N=(\textbf{0})$. Hence, $(\Im(q\Spec(M)):M)=((\textbf{0}):M)=(0)$ is a quasi-prime ideal of $\mathbb{Z}$. This implies that $\Im(q\Spec(M))$ is a quasi-prime submodule of $M$. By Theorem \ref{irrsub}, $q\Spec(M)$ is an irreducible space. \end{eg} \begin{cor} Let $M$ be an $R$-module and $N\leq M$. \begin{enumerate} \item $V^M(N)$ is irreducible if and only if $\rad(N)$ is a quasi-prime submodule.\label{p3} \item If $N$ is a $\p$-primary submodule of $M$ where $\p\in \Max(R)$, then $V^M(N)$ is irreducible. \item Let $R$ be a quasi-local ring.
Then $\Max(M)$ is irreducible. \item The quasi-prime spectrum of every faithful reduced module over an integral domain is irreducible. \end{enumerate} \end{cor} \begin{proof} (1) Since $\rad(N)=\Im(V^M(N))$, the result follows immediately from Theorem~\ref{irrsub}. (2) Use part~(\ref{p3}) and \cite[Corollary 5.7]{lu03}. (3) Let $\m$ be the unique maximal ideal of $R$. By \cite[p.63, Proposition 4]{lu84}, $(H:M)=\m$ for each $H\in \Max(M)$. By Lemma~\ref{lemaux}(2), $\bigcap_{H\in \Max(M)} H=\Im(\Max(M))$ is a quasi-prime submodule. By Theorem \ref{irrsub}, $\Max(M)$ is irreducible. (4) Since $M$ is reduced, $(\Im(q\Spec(M)):M)\subseteq(\Im(\Spec(M)):M)=(\bigcap_{P\in \Spec(M)} P:M)=((\textbf{0}):M)=(0)\in \Spec(R)$. The result follows from Theorem \ref{irrsub}. \end{proof} \begin{eg}\label{egs} \begin{enumerate} \item Let $M=\mathbb{Z} \oplus \mathbb{Z}(p^{\infty})$ be a $\mathbb{Z}$-module. Then by Theorem~\ref{irrsub}, $\Spec(M)$ is an irreducible space because $\Im(\Spec(M))=(0) \oplus \mathbb{Z}(p^{\infty})$ is a prime submodule of $M$. \item Let $M=\mathbb{Q} \oplus \mathbb{Z}/p\mathbb{Z}$ be a $\mathbb{Z}$-module. By Theorem \ref{irrsub}, $\Max(M)$ is an irreducible subset of $ q\Spec(M)$ because $\Rad(M)=\mathbb{Q} \oplus (0)$. \end{enumerate} \end{eg} \begin{cor}\label{zero} Let $M$ be an $R$-module such that $(\textbf{0})\in q\Spec(M)$. Then $q\Spec(M)$ is an irreducible space. In particular, if $R$ is an integral domain and $M$ is a torsion-free $R$-module, then $q\Spec(M)$ is an irreducible space. Moreover, $q\Spec(R)$ is an irreducible space if $R$ is an integral domain. \end{cor} \begin{proof} Use Theorem~\ref{irrsub} and \cite[Lemma~4.5]{lu03}. \end{proof} \begin{eg} Consider the faithful $\mathbb{Z}$-module $M=\bigoplus_{p} \mathbb{Z}/{p\mathbb{Z}}$, where $p$ runs through the set of all prime integers. Then by Corollary \ref{zero}, $\Spec(M)$ is an irreducible space. \end{eg} Let $Y$ be a closed subset of a topological space.
An element $y\in Y$ is called a \emph{generic point} of $Y$ if $Y=Cl(\{y\})$. In Proposition \ref{Cl2} (1), we have seen that every element $L$ of $ q\Spec(M)$ is a generic point of the irreducible closed subset $D(L)$ of $ q\Spec(M)$. Note that a generic point of a closed subset $Y$ of a topological space is unique if the topological space is a $T_{0}$-space. \begin{thm}\label{irclsub}\label{cores}\label{irrmin} Let $M$ be an $R$-module and $Y \subseteq q\Spec(M)$. \begin{enumerate} \item Then $Y$ is an irreducible closed subset of $ q\Spec(M)$ if and only if $Y = D^M(L)$ for some $L \in q\Spec(M)$. Thus every irreducible closed subset of $ q\Spec(M)$ has a generic point. \item If $M$ is quasi-prime-embedding, then the correspondence $D^M(L)\mapsto L$ is a bijection of the set of irreducible components of $ q\Spec(M)$ onto the set of minimal elements of $ q\Spec(M)$ with respect to inclusion. \item Let $M$ be a quasi-primeful $R$-module. Then the set of all irreducible components of $ q\Spec(M)$ is of the form $$T=\{ D^M(IM) \mid \text{$I$ is a minimal element of $D^R(\Ann(M))$ w.r.t inclusion} \}.$$ \item Let $R$ be an arithmetical Laskerian ring and $M$ be a nonzero quasi-primeful $R$-module. Then $q\Spec(M)$ has finitely many irreducible components. \end{enumerate} \end{thm} \begin{proof} \begin{enumerate} \item It is clear $Y=D(L)$ is an irreducible closed subset of $ q\Spec(M)$ for any $L\in q\Spec(M)$ by Corollary~\ref{cor4}. Conversely, if $Y$ is an irreducible closed subset of $ q\Spec(M)$, then $Y=D(N)$ for some $ N\leq M$ and $L:=\Im (Y)=\Im(D(N))\in q\Spec(M)$ by Theorem~\ref{irrsub}. Hence, $Y=D(N)=D(\Im(D(N)))=D(L)$ as desired. \item Let $Y$ be an irreducible component of $ q\Spec(M)$. Since each irreducible component of $ q\Spec(M)$ is a maximal element of the set $\{D(N) \mid N\in q\Spec(M)\}$ by (1), we have $Y=D(L)$ for some $L\in q\Spec(M)$. 
Obviously, $L$ is a minimal element of $ q\Spec(M)$, for if $T\in q\Spec(M)$ with $T\subseteq L$, then $D(L)\subseteq D(T)$. So $L=T$ due to the maximality of $D(L)$ and Proposition~\ref{t0inj}. Let $L$ be a minimal element of $ q\Spec(M)$ with $D(L)\subseteq D(N)$ for some $N\in q\Spec(M)$. Then $L\in D(N)$ whence $(N:M)M\subseteq L$. By Lemma~\ref{lemaux}, $(N:M)M$ belongs to $q\Spec(M)$. Hence, $L=(N:M)M$ due to the minimality of $L$. By Lemma~\ref{lemtop}, $D(N)=D((N:M)M)=D(L)$. This implies that $D(L)$ is an irreducible component of $ q\Spec(M)$, as desired. \item Let $Y$ be an irreducible component of $ q\Spec(M)$. By part~(1), $Y=D^M(L)$ for some $L\in q\Spec(M)$. Hence, $Y=D^M(L)=D^M((L:M)M)$ by Lemma~\ref{lemtop}. So, we have $l:=(L:M)\in D^R(\Ann(M))$. We must show that $l$ is a minimal element of $D^R(\Ann(M))$ w.r.t inclusion. To see this let $q\in D^R(\Ann(M))$ and $q\subseteq l$. Then $q/\Ann(M)\in q\Spec(R/\Ann(M))$, and there exists an element $Q\in q\Spec(M)$ such that $(Q:M)=q$ because $M$ is quasi-primeful. So, $Y=D^M(L)\subseteq D^M(Q)$. Hence, $Y=D^M(L)=D^M(Q)$ due to the maximality of $D^M(L)$. By Proposition~\ref{Cl2}, we have that $l=q$. Conversely, let $Y\in T$. Then there exists a minimal element $I$ in $D^R(\Ann(M))$ such that $Y=D^M(IM)$. Since $M$ is quasi-primeful, there exists an element $N\in q\Spec(M)$ such that $(N:M)=I$. So, $Y=D^M(IM)=D^M((N:M)M)=D^M(N)$, and so $Y$ is irreducible by part~(1). Suppose that $Y=D^M(N)\subseteq D^M(Q)$, where $Q$ is an element of $ q\Spec(M)$. Since $N\in D^M(Q)$ and $I$ is minimal, it follows that $(N:M)=(Q:M)$. Now, by Lemma~\ref{lemtop}, $$Y=D^M(N)=D^M((N:M)M)=D^M((Q:M)M)=D^M(Q).$$ \item By assumption, the set of quasi-prime ideals are exactly the set of primary ideals (see Remark \ref{6}). 
If $I$ is a minimal element of $D^R(\Ann(M))$ and $\Ann(M) = \cap_{i=1}^n Q_i$ is a minimal primary decomposition of $\Ann(M)$, then $Q_i \subseteq I$ for some $1\leq i \leq n$ (since $I$ is quasi-prime and $\cap_{i=1}^n Q_i\subseteq I$). By minimality of $I$, we get $I = Q_i$. Therefore, the irreducible components of $q\Spec(M)$ are of the form $D^M(Q_iM)$, by part (3). \end{enumerate} \end{proof} We introduce a base for the developed Zariski topology on $q\Spec(M)$ for any $R$-module $M$. For each $a\in R$, we define $\Gamma_M(a)= q\Spec(M)-D(aM)$. Then every $\Gamma_M(a)$ is an open set of $ q\Spec(M)$, $\Gamma_M(0) =\emptyset$, and $\Gamma_M(1)=q\Spec(M)$. \begin{prop}\label{base} For any $R$-module $M$, the set $B =\{ \Gamma_M(a)\mid a\in R\}$ forms a base for the developed Zariski topology on $ q\Spec(M)$. \end{prop} \begin{proof} We may assume that $ q\Spec(M)\neq\emptyset$. Let $U$ be any open subset in $ q\Spec(M)$. There exists a submodule $N$ of $M$ such that \begin{eqnarray*} U &=& q\Spec(M)-D(N)=q\Spec(M)-D((N:M)M) \\ &=& q\Spec(M)-D(\sum_{a_i\in(N:M)}a_iM) \\ &=& q\Spec(M)-D(\sum_{a_i\in(N:M)}(a_iM:M)M) \\ &=& q\Spec(M)-\bigcap_{a_i\in(N:M)} D(a_iM) \\ &=& \bigcup_{a_i\in(N:M)} \Gamma_M(a_i). \end{eqnarray*} \end{proof} \begin{prop}\label{XD} Let $M$ be an $R$-module, $a\in R$ and $\psi : q\Spec(M)\rightarrow q\Spec(R/\Ann(M))$ be the natural map of $ q\Spec(M)$. \begin{enumerate} \item $\psi^{-1}(\Gamma_{\bar{R}}(\bar{a}))=\Gamma_M(a)$; \item $\psi(\Gamma_M(a))\subseteq \Gamma_{\bar{R}}(\bar{a})$. If $M$ is quasi-primeful, then $\psi(\Gamma_M(a))= \Gamma_{\bar{R}}(\bar{a})$; \item If $M$ is quasi-primeful, then $q\Spec(M)$ is a compact space. \item If $M$ is a finitely generated multiplication module, then $q\Spec(M)$ is compact.
\end{enumerate} \end{prop} \begin{proof} \begin{enumerate} \item By Proposition \ref{cont}, we have \begin{eqnarray*} \psi^{-1}(\Gamma_{\bar{R}}(\bar{a})) &=& \psi^{-1}(q\Spec(\bar{R})-D(\bar{a}\bar{R})) \\ &=& q\Spec(M)- \psi^{-1}(D(\bar{a}\bar{R})) \\ &=& q\Spec(M)-D(aM)=\Gamma_M(a). \end{eqnarray*} \item This follows from (1). \item By Proposition~\ref{base}, the set $B =\{ \Gamma_M(a)\mid a\in R\}$ is a base for the developed Zariski topology on $q\Spec(M)$. For any open cover of $q\Spec(M)$, there is a family $\{a_{\lambda}\in R | \lambda \in \Lambda\}$ of elements of $R$ such that $q\Spec(M)=\bigcup_{\lambda\in\Lambda} \Gamma_M(a_\lambda)$ and for each $\lambda \in \Lambda$, there is an open set in the covering containing $\Gamma_M(a_\lambda)$. By part (2), \begin{eqnarray*} q\Spec(\bar{R}) &=& \Gamma_{\bar{R}}(\bar{1}) \\ &=& \psi(\Gamma_M(1)) \\ &=& \psi(q\Spec(M))\subseteq \bigcup_{\lambda\in\Lambda} \psi(\Gamma_M(a_\lambda))\\ &=& \bigcup_{\lambda\in\Lambda} \Gamma_{\bar{R}}(\bar{a}_\lambda). \end{eqnarray*} By \cite[Theorem 4.1]{az08}, $q\Spec(\bar{R})$ is compact, hence there exists a finite subset $\Lambda '$ of $\Lambda$ such that $q\Spec(\bar{R})\subseteq \bigcup_{\lambda\in\Lambda '} \Gamma_{\bar{R}}(\bar{a}_\lambda)$. By part (1), \begin{eqnarray*} q\Spec(M) &=& \Gamma_M(1) \\ &=& \psi^{-1}(\Gamma_{\bar{R}}(\bar{1}))\\ &=& \psi^{-1}(q\Spec(\bar{R}))\subseteq \bigcup_{\lambda\in\Lambda '} \psi^{-1}(\Gamma_{\bar{R}}(\bar{a}_\lambda))\\ &=&\bigcup_{\lambda\in\Lambda '} \Gamma_M(a_\lambda). \end{eqnarray*} \item Let $\{D^M(N_{\lambda})\}_{\lambda \in \Lambda}$ be an arbitrary family of closed subsets of $ q\Spec(M)$, where $N_{\lambda}\leq M$ for each $\lambda \in \Lambda$ such that $\bigcap_{\lambda \in \Lambda} D^M(N_{\lambda})=\emptyset$. Hence, we have $D^M(\sum_{\lambda\in \Lambda}(N_{\lambda} : M)M)=\emptyset$. 
Since $M$ is multiplication, $\Omega^M(\sum_{\lambda\in \Lambda}(N_{\lambda} : M)M)=\emptyset$, so $M=\sum_{\lambda\in \Lambda}(N_{\lambda} : M)M$. Since $M$ is finitely generated, there exists a finite subset $\Lambda '$ of $\Lambda$ such that $M=\sum_{\lambda\in \Lambda'}(N_{\lambda} : M)M$. This completes the proof. \end{enumerate} \end{proof} A topological space $X$ is said to be \emph{Noetherian} if the open subsets of $X$ satisfy the ascending chain condition. Since closed subsets are complements of open subsets, it comes to the same thing to say that the closed subsets of $X$ satisfy the descending chain condition. \begin{thm}\label{thm2.1}\label{accring} Let $M$ be an $R$-module. \begin{enumerate} \item If $M$ satisfies $ACC$ on quasi-semiprime submodules, then $ q\Spec(M)$ is a Noetherian topological space. In particular, the quasi-prime spectrum of every Noetherian module is a Noetherian topological space (see \cite[Theorem~4.2]{az08}). \item If for every submodule $N$ of $M$ there exists a finitely generated submodule $L$ of $N$ such that $\Im(\Omega^M(N))=\Im(\Omega^M(L))$, then $ q\Spec(M)$ is a Noetherian topological space. \item If $R$ satisfies $ACC$ on quasi-semiprime ideals, then $ q\Spec(M)$ is a Noetherian topological space. In particular, for every module $M$ over a Noetherian ring, $ q\Spec(M)$ is a Noetherian topological space. \end{enumerate} \end{thm} \begin{proof} \begin{enumerate} \item Let $D(N_{1})\supseteq D(N_{2})\supseteq \cdots $ be a descending chain of closed subsets of $ q\Spec(M)$. We have an ascending chain of quasi-semiprime submodules of $M$, $\Im(D(N_{1}))\subseteq \Im(D(N_{2}))\subseteq \cdots $, which is stationary by assumption. So, there exists a positive integer $k$ such that $\Im(D(N_{k}))= \Im(D(N_{k+i}))$, for each $i=1,2, \ldots$ \,. By Proposition \ref{Cl}, $D(N_{k})= D(N_{k+i})$, and so $ q\Spec(M)$ is a Noetherian topological space.
\item Let $N_1\subseteq N_2\subseteq N_3\subseteq \cdots$ be an ascending chain of quasi-semiprime submodules of $M$, and let $N=\cup_iN_i$. By assumption, there exists a finitely generated submodule $L$ of $N$ such that $\bigcap_{P\in \Omega^M(N)}P =\bigcap_{Q\in \Omega^M(L)}Q$. Hence there exists a positive integer $n$ such that $L\subseteq N_n$. Then $$\bigcap_{P\in \Omega^M(N)}P =\bigcap_{Q\in \Omega^M(L)}Q\subseteq N_n \subseteq N \subseteq \bigcap_{P\in \Omega^M(N)}P,$$ so that $N_n = N_{n+1} = N_{n+2} =\cdots$. Hence, $M$ satisfies $ACC$ on quasi-semiprime submodules. By (1), $q\Spec(M)$ is a Noetherian topological space. \item Let $D(N_{1})\supseteq D(N_{2})\supseteq \cdots $ be a descending chain of closed subsets of $ q\Spec(M)$. By assumption, there exists a positive integer $k$ such that $(\Im(D(N_{k})):M)M= (\Im(D(N_{k+i})):M)M$, for each $i=1,2, \ldots$ \,. By Lemma~\ref{lemtop}, $D(\Im(D(N_{k})))= D(\Im(D(N_{k+i})))$. By Proposition~\ref{Cl}, $D(N_{k})= D(N_{k+i})$, and so $ q\Spec(M)$ is a Noetherian space. \end{enumerate} \end{proof} \begin{rem}\label{remnoecom} Let $X$ be a Noetherian topological space. Then every subspace of $X$ is compact. In particular, $X$ is compact (see \cite[p. 79, Ex. 5]{ati69}). \end{rem} As a consequence of Remark \ref{remnoecom}, we have \begin{cor}\label{cor2.5} For an $R$-module $M$, $ q\Spec(M)$ is a compact space in each of the following cases. \begin{enumerate} \item $M$ satisfies $ACC$ on quasi-semiprime submodules; \item $R$ satisfies $ACC$ on quasi-semiprime ideals. \end{enumerate} \end{cor} For example, the quasi-prime spectrum of every $\mathbb{Z}$-module is a compact space. \begin{prop}\label{fin}\label{finimini} Let $M$ be a quasi-prime-embedding\, $R$-module. If $ q\Spec(M)$ is a Noetherian space, then \begin{enumerate} \item Every ascending chain of quasi-prime submodules of $M$ is stationary; \item $ q\Spec(M)$ has finitely many minimal elements.
In particular, every multiplication module over a Noetherian ring has finitely many minimal quasi-prime submodules. \end{enumerate} \end{prop} \begin{proof} (1) Let $N_1\subseteq N_2 \subseteq \cdots$ be an ascending chain of quasi-prime submodules of $M$. Then $D(N_1)\supseteq D(N_2) \supseteq \cdots$ is a descending chain of closed subsets of $ q\Spec(M)$, which is stationary by assumption. There exists an integer $k\in \mathbb{N}$ such that $D(N_k)=D(N_{k+i})$ for each $i\in \mathbb{N}$. By Proposition~\ref{t0inj}, we have $N_k=N_{k+i}$ for each $i\in \mathbb{N}$. This completes the proof. (2) Since every Noetherian topological space has finitely many irreducible components, the result follows from Theorem \ref{cores}(2). For the last statement, use Corollary~\ref{maincor} and Theorem~\ref{accring}. \end{proof} We recall that if $X$ is a finite space, then $X$ is a $T_1$-space if and only if $X$ is the discrete space. We also recall that a topological space is called Hausdorff if any two distinct points possess disjoint neighborhoods. So, we have the following corollary. \begin{cor} Let $R$ be a Noetherian ring and $M$ be a finitely generated $R$-module. Then the following statements are equivalent: \begin{enumerate} \item $q\Spec(M)$ is a Hausdorff space; \item $q\Spec(M)$ is a $T_1$-space; \item $q\Spec(M)$ is a discrete space; \item $M$ is a multiplication module and $q\Spec(M)=\Max(M)$. \end{enumerate} \end{cor} \begin{proof} $(1)\Rightarrow(2)$ and $(3)\Rightarrow(1)$ are clear. $(2)\Leftrightarrow(4)$ follows from Theorem~\ref{t1multi}. $(2)\Rightarrow(3)$ By Proposition~\ref{finimini}, $M$ has finitely many minimal quasi-prime submodules. By Theorem~\ref{t1multi}, $q\Spec(M)$ is finite. Therefore, $q\Spec(M)$ is a discrete space. \end{proof} \section*{ACKNOWLEDGMENTS} The authors sincerely thank the referee for the useful suggestions and comments.
\providecommand{\bysame}{\leavevmode\hbox to3em{\hrulefill}\thinspace} \providecommand{\MR}{\relax\ifhmode\unskip\space\fi MR } \providecommand{\MRhref}[2]{\href{http://www.ams.org/mathscinet-getitem?mr=#1}{#2}} \providecommand{\href}[2]{#2}
Q: Caching list of id's of django queryset, and reusing that list for another Viewset

Let's use these 4 simple models for example. A city can have multiple shops, a shop can have multiple products, and a product can have multiple images.

models.py:

    class City(models.Model):
        name = models.CharField(max_length=300)

    class Shop(models.Model):
        name = models.CharField(max_length=300)
        city = models.ForeignKey(City, related_name='related_city', on_delete=models.CASCADE)

    class Product(models.Model):
        name = models.CharField(max_length=300)
        description = models.CharField(max_length=5000)
        shop = models.ForeignKey(Shop, related_name='related_shop', on_delete=models.CASCADE)

    class Image(models.Model):
        image = models.ImageField(null=True)
        product = models.ForeignKey(Product, related_name='related_product', on_delete=models.CASCADE)

For an eCommerce website, users will be writing keywords and I filter on the product names to get the matching results. I also want to fetch together the related data of shops, cities and images, relevant to the products I will be showing. To achieve that I am using .select_related() to retrieve the other objects from the foreign keys as well.

Now, my question is, what is the best way to send that to the client? One way is to make a single serializer that groups all data from all 4 tables in a single JSON. That JSON will look like 1NF, since it will have many repetitions: for example, there will be a new row for every image and every shop that the product can be found in, and the 10,000-character-long description will be repeated for each row, so this is not such a good idea. More specifically the fields are: (product_id, product_name, product_description, product_image_filepath, product_in_shop_id, shop_in_city_id)

The second approach will use Django queryset caching, which I have no experience with at all, and maybe you can give me advice on how to make it efficient.
The second way would be to get Product.objects.filter(by input keywords).select_related().all(), cache this list of id's of products, and return this queryset of the viewset. Then, from the client's side, I make another GET request, just for the images, and I don't know how to reuse the list of id's of products, that I queried earlier, in the previous viewset / queryset. How do I fetch only the images that I need, for products that matched the user input keywords, such that I don't need to query the id's of products again? How will the code look like exactly, for caching this in one viewset, and reusing it again in another viewset? Then I also need one more GET request to get the list of shops and cities, where the product is available, so it will be fantastic if I can reuse the list of id's I got from the first queryset to fetch the products. Is the second approach a good idea? If yes, how to implement it exactly? Or should I stick with the first approach, which I am sceptical is the right way to do this. I am using PostgreSQL database, and I will probably end up using ElasticSearch for my search engine, to quickly find the matching keywords.
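The ID-caching pattern described in the second approach can be sketched framework-agnostically. In this minimal sketch, plain dicts stand in for the ORM tables and the cache backend, and all names (`PRODUCTS`, `IMAGES`, `CACHE`, the two functions) are hypothetical; in real Django you would use `django.core.cache` for the cache and `Image.objects.filter(product_id__in=ids)` for the second query:

```python
from typing import Dict, List

# Toy "tables" standing in for the Product and Image models.
PRODUCTS = {1: {"name": "red mug"}, 2: {"name": "blue mug"}, 3: {"name": "lamp"}}
IMAGES = {
    10: {"product_id": 1, "path": "a.jpg"},
    11: {"product_id": 2, "path": "b.jpg"},
    12: {"product_id": 3, "path": "c.jpg"},
}

CACHE: Dict[str, List[int]] = {}  # stands in for Redis/memcached/django cache

def search_products(keyword: str) -> List[int]:
    """First viewset: filter products and cache the matched ids under the query."""
    ids = [pid for pid, p in PRODUCTS.items() if keyword in p["name"]]
    CACHE[f"search:{keyword}"] = ids  # in Django: cache.set(key, ids, timeout=300)
    return ids

def images_for_search(keyword: str) -> List[str]:
    """Second viewset: reuse the cached ids instead of re-running the search."""
    ids = CACHE.get(f"search:{keyword}")
    if ids is None:                   # cache miss (expired or never set) -> recompute
        ids = search_products(keyword)
    # in Django: Image.objects.filter(product_id__in=ids)
    return [img["path"] for img in IMAGES.values() if img["product_id"] in ids]
```

The same cached ID list can feed the third request (shops and cities) with another `__in` filter, so the keyword search itself runs only once per cache lifetime.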
Search results from the CJM digital archive, keyword "contraction" (2 results):

1. CJM 2013 (vol 67 pp. 132). Clouâtre, Raphaël, "Unitary Equivalence and Similarity to Jordan Models for Weak Contractions of Class $C_0$". We obtain results on the unitary equivalence of weak contractions of class $C_0$ to their Jordan models under an assumption on their commutants. In particular, our work addresses the case of arbitrary finite multiplicity. The main tool is the theory of boundary representations due to Arveson. We also generalize and improve previously known results concerning unitary equivalence and similarity to Jordan models when the minimal function is a Blaschke product. Keywords: weak contractions, operators of class $C_0$, Jordan model, unitary equivalence. Categories: 47A45, 47L55.

2. CJM 2006 (vol 58 pp. 1291). Weimar-Woods, Evelyn, "The General Structure of $G$-Graded Contractions of Lie Algebras I. The Classification". We give the general structure of complex (resp., real) $G$-graded contractions of Lie algebras where $G$ is an arbitrary finite Abelian group. For this purpose, we introduce a number of concepts, such as pseudobasis, higher-order identities, and sign invariants. We characterize the equivalence classes of $G$-graded contractions by showing that our set of invariants (support, higher-order identities, and sign invariants) is complete, which yields a classification. Keywords: Lie algebras, graded contractions. Categories: 17B05, 17B70.
The early childhood programme of Reggio Emilia in Italy is acclaimed as one of the best education systems in the world, and this book offers the unique insight of Carlina Rinaldi, the former director of the municipal early childhood centres in Reggio Emilia and successor to Loris Malaguzzi, one of the twentieth century's leading pedagogical thinkers. Rinaldi has an enviable international reputation for her contribution to the Reggio approach and has given talks on the topic around the world. A collection of Rinaldi's most important works, this book is organized thematically with a full introduction contextualising each piece. It closes with an interview by series editors Peter Moss and Gunilla Dahlberg, looking at Rinaldi's current work and reflections on Reggio's past, present and future. Much of this material is previously unpublished and focuses on a number of questions:

- What were the ideas and legacy of Loris Malaguzzi?
- What is unique about Reggio Emilia?
- What are the issues in education today and what does it mean to be a teacher?
- How can educators most effectively make use of creativity?
\section{Introduction} Keyword spotting (KWS), or spoken term detection (STD), is a task to detect pre-defined keywords in a stream of audio. Specifically, as a typical application of KWS, wake-up word detection has become an indispensable function on various devices, in order to enable users to have a fully hands-free experience. A practical on-device KWS module must minimize the false rejection rate at a low false alarm rate to make it easy to use, while keeping the memory footprint, latency and computational cost as small as possible. As a classic solution, large vocabulary continuous speech recognition (LVCSR) based systems~\cite{motlicek2012improving, can2011lattice} are widely used in the KWS task. Although it is flexible to change keywords according to the user's requirements, the LVCSR based systems need to generate rich lattices, and high computational resources are required for keyword search. These systems are often designed to search large databases of audio content. Several recent attempts have been made to reduce the computational cost, e.g., using end-to-end based acoustic models~\cite{bai2016end, Rosenberg2017End}. But these models are still quite large, making them unsuitable for small-footprint, low-latency applications. Another classic technique for KWS is the keyword/filler hidden Markov model (HMM) approach~\cite{rohlicek1989continuous}, which remains strongly competitive to this day. HMMs are trained for keyword and non-keyword audio segments, respectively. At runtime, Viterbi decoding is used to search the best path in the decoding graph, which can be computationally expensive depending on the HMM topology. In these approaches, Gaussian mixture models (GMMs) were originally used to model the observed acoustic features, but with the advances in deep learning, deep neural networks (DNNs) have been recently adopted to substitute GMMs~\cite{Sz2005Comparison} with improved performances.
Some studies replaced the HMM by an RNN model trained with the connectionist temporal classification (CTC) criterion~\cite{hwang2015online} or by an attention-based model~\cite{he2017streaming}; however, these studies are still under the keyword/filler framework. As a small-footprint approach used by Google, Deep KWS~\cite{chen2014small} has drawn much attention recently. In this approach, a simple DNN is trained to predict the frame-level posteriors of sub-keyword targets and fillers. When a confidence score, produced by a posterior handling method, exceeds a threshold, a keyword is detected. With no HMM involved, this approach has been shown to outperform a keyword/filler HMM approach. In addition, this approach is highly attractive for running on devices with a small footprint and low latency, as the size of the DNN can be easily controlled and no graph-searching is involved. Later, feed-forward DNNs were substituted by more powerful networks like convolutional neural networks (CNNs)~\cite{sainath2015convolutional} and recurrent neural networks (RNNs)~\cite{sun2016max}, with expected improvements. It should be noted that, although the framework of Deep KWS is quite simple, it still needs a well-trained acoustic model to obtain frame-level alignments. In this paper, we aim to further simplify the pipelines of building a production-quality KWS. Specifically, we propose an attention-based end-to-end neural model for small-footprint keyword spotting. By saying \textit{end-to-end}, we mean that: (1) a simple model that directly outputs keyword detection; (2) no complicated searching involved; (3) no alignments needed beforehand to train the model. Our work is inspired by the recent success of attention models used in speech recognition~\cite{bahdanau2016end, chan2016listen, shanattention}, machine translation~\cite{bahdanau2014neural}, text summarization~\cite{rush2015neural} and speaker verification~\cite{chowdhury2017attention}.
It is intuitive to use an attention mechanism in KWS: humans are able to focus on a certain region of an audio stream with ``high resolution" (e.g., the listener's name) while perceiving the surrounding audio in ``low resolution", and then adjusting the focal point over time. Our end-to-end KWS model consists of an \textit{encoder} and an \textit{attention} mechanism. The encoder transforms the input signal into a high level representation using RNNs. Then the attention mechanism weights the encoder features and generates a fixed-length vector. Finally, by a linear transformation and a softmax function, the vector becomes a score used for keyword detection. In terms of end-to-end and small-footprint, the closest approach to ours is the one proposed by Kliegl \textit{et al.}~\cite{arik2017convolutional}, where a convolutional recurrent neural network (CRNN) architecture is used. However, the latency introduced by its long decoding window ($T$=1.5 secs) makes the system difficult to use in real applications. To improve our end-to-end approach, we further explore the encoder architectures, including LSTM~\cite{hochreiter1997long}, GRU~\cite{cho2014learning} and a CRNN inspired by~\cite{arik2017convolutional}. Experiments on real-world wake-up data show that our approach outperforms Deep KWS by a large margin. GRU is preferred over LSTM and the best performance is achieved by CRNN. To be more specific, with only $\sim$84K parameters, the CRNN-based attention model achieves 1.02\% false rejection rate (FRR) at 1.0 false alarm (FA) per hour. \section{Attention-based KWS} \subsection{End-to-end architecture} \begin{figure}[t] \centering \includegraphics[width=\linewidth]{end2end.pdf} \caption{Attention-based end-to-end model for KWS.}\vspace{-10pt} \label{fig:end2end} \end{figure} We propose to use an attention-based end-to-end model in small-footprint keyword spotting.
As depicted in Fig.~\ref{fig:end2end}, the end-to-end architecture consists of two major sub-modules: the encoder and the attention mechanism. The encoder results in a higher-level feature representation $\mathbf{h} = (h_1,...,h_T)$ from the input speech features $\mathbf{x} = (x_1,...,x_T)$: \begin{flalign} \mathbf{h}&=Encoder(\mathbf{x}). \label{eq1} \end{flalign} Specifically, the $Encoder$ is usually an RNN that can directly make use of speech contextual information. In our work, we explore different encoder structures, including GRU, LSTM and CRNN. The attention mechanism learns normalized weights $\alpha_t \in [0,1]$ from the feature representation: \begin{flalign} \alpha_{t}&=Attend(\bm{h}_t). \label{eq2} \end{flalign} Then we form a fixed-length vector $\bm{c}$ as the weighted average of the $Encoder$ outputs $\mathbf{h}$: \begin{flalign} \bm{c}&=\sum_{t=1}^T\alpha_{t}\bm{h}_t. \label{eq3} \end{flalign} Finally, we generate a probability distribution by a linear transformation and the softmax function: \begin{flalign} p(y)&=softmax(\bm{U}\bm{c}). \label{eq4} \end{flalign} where $\bm{U}$ is the linear transform and $y$ indicates whether the keyword is detected. \subsection{Attention mechanism} Similar to human listening attention, the attention mechanism in our model selects the speech parts which are more likely to contain the keyword while ignoring the unrelated parts. We investigate both average attention and soft attention. \textbf{Average attention}: The $Attend$ model does not have trainable parameters and $\alpha_t$ is set to the uniform average over the $T$ frames: \begin{flalign} \alpha_{t}&=\frac{1}{T}. \label{eq5} \end{flalign} \textbf{Soft attention}: This attention method is borrowed from speaker verification~\cite{chowdhury2017attention}. Compared with other attention layers, the shared-parameter non-linear attention is proven to be effective~\cite{chowdhury2017attention}. We first learn a scalar score $e_t$: \begin{flalign} e_{t}&=v^Ttanh(\bm{W}h_t+\bm{b}).
\label{eq6} \end{flalign} Then we compute the normalized weight $\alpha_t$ using these scalar scores: \begin{flalign} \alpha_t&=\frac{exp(e_t)}{\sum_{j=1}^Texp(e_j)}. \label{eq7} \end{flalign} \subsection{Decoding} \begin{figure}[t] \centering \includegraphics[width=\linewidth]{decoding.pdf} \caption{Sliding windows used in decoding.}\vspace{-10pt} \label{fig:decoding} \end{figure} As shown in Fig.~\ref{fig:end2end}, unlike some other approaches~\cite{chen2014small}, our end-to-end system outputs a confidence score directly without post-processing. Similar to the Deep KWS system, our system is triggered when $p(y=1)$ exceeds a preset threshold. During decoding, as in Fig.~\ref{fig:decoding}, the input is a sliding window of speech features, which has a preset length and contains the entire keyword. Meanwhile, a frame shift is employed. The small set of parameters in our system leads to small-footprint memory use. For a sliding window, we only need to feed one frame into the network for computation, since the remaining frames have already been computed in the previous sliding window. Therefore, our system has a low computational cost. \section{Experiments} \begin{table}[t] \caption{\label{tab:attention} {\it Performance comparison between Deep KWS and attention-based models with 2-64 network. FRR is at 1.0 false alarm (FA) per hour.}} \vspace{2mm} \footnotesize \centerline{ \begin{tabular}{ l c c } \hline \textbf{Model} & \textbf{FRR (\%)} & \textbf{Params (K)} \\ \hline \hline DNN KWS & $13.9$ & $62.5$ \\ \hline LSTM KWS & $7.10$ & $54.1$ \\ LSTM average attention & $4.43$ & $60.0$ \\ LSTM soft attention & $3.58$ & $64.3$ \\ \hline GRU KWS & $6.38$ & $44.8$ \\ GRU average attention & $3.22$ & $49.2$ \\ GRU soft attention & $\textbf{1.93}$ & $53.4$ \\ \hline \end{tabular} }\vspace{-10pt} \end{table} \begin{figure}[t] \centering \includegraphics[width=\linewidth]{attention.jpg} \caption{ROCs for Deep KWS vs.
Attention-based system with 2-64 network.}\vspace{-10pt} \label{fig:attention} \end{figure} \subsection{Datasets} We evaluated the proposed approach using real-world wake-up data collected from Mi AI Speaker\footnote{https://www.mi.com/aispeaker/}. The wake-up word is a four-syllable Mandarin Chinese term (``xiao-ai-tong-xue"). We collected $\sim$188.9K positive examples ($\sim$99.8h) and $\sim$1007.4K negative examples ($\sim$1581.8h) as the training set. The held-out validation set has $\sim$9.9K positive examples and $\sim$53.0K negative examples. The test data set has $\sim$28.8K positive examples ($\sim$15.2h) and $\sim$32.8K negative examples ($\sim$37h). Each audio frame was computed based on a 40-channel Mel-filterbank with 25ms windowing and 10ms frame shift. Then the filterbank feature was converted to per-channel energy normalized (PCEN)~\cite{wang2017trainable} Mel-spectrograms. \subsection{Baseline} We reimplemented the Deep KWS system~\cite{chen2014small} as the baseline, in which the network predicts the posteriors for the four Chinese syllables in the wake-up word and a filler. The ``filler'' here means any audio that does not contain the keyword. Specifically, we adopted three different networks, including DNN, LSTM and GRU. For a fair comparison, the network configuration was set to have a similar number of parameters to the proposed attention models. The feed-forward DNN model had 3 hidden layers and 64 hidden nodes per layer with rectified linear unit (ReLU) non-linearity. An input window with 15 left frames and 5 right frames was used. The LSTM and GRU models were built with 2 hidden layers and 64 hidden nodes per layer. For the GRU KWS model, the final GRU layer was followed by a fully connected layer with ReLU non-linearity. There were no stacked frames in the input for the LSTM and GRU models. The smoothing window for Deep KWS was set to 20 frames.
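The soft-attention pooling of Eqs. (2)-(3) and (6)-(7) above can be illustrated in a few lines of NumPy. This is a minimal sketch for intuition, not the authors' implementation; the random weights and the dimensions (a 189-frame window, 64-node encoder output) are chosen only to match the setup described in this paper:

```python
import numpy as np

def soft_attention(H, W, b, v):
    """Soft attention: e_t = v^T tanh(W h_t + b), alpha = softmax(e), c = sum_t alpha_t h_t."""
    e = np.tanh(H @ W + b) @ v  # one scalar score per frame, shape (T,)
    a = np.exp(e - e.max())     # numerically stable softmax
    a = a / a.sum()             # normalized weights alpha_t in [0, 1]
    return a @ H, a             # fixed-length vector c (shape (d,)) and the weights

rng = np.random.default_rng(0)
T, d = 189, 64                        # window length and encoder width
H = rng.standard_normal((T, d))       # stands in for the encoder outputs h_1..h_T
W = rng.standard_normal((d, d))       # shared attention parameters
b = np.zeros(d)
v = rng.standard_normal(d)
c, alpha = soft_attention(H, W, b, v)
```

Average attention (Eq. 5) is the special case $\alpha_t = 1/T$, i.e. `H.mean(axis=0)`; the final score of Eq. (4) is then a linear layer plus softmax applied to `c`.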
We also trained a TDNN-based acoustic model using $\sim$3000 hours of speech data to perform frame-level alignment before KWS model training. \subsection{Experimental Setup} In the neural network models, all the weight matrices were initialized with the normalized initialization~\cite{Glorot2010Understanding} and the bias vectors were initialized to 0. We used ADAM~\cite{kingma2014adam} as the optimizer, decaying the learning rate from 1e-3 to 1e-4 after convergence. Gradient norm clipping to 1 was applied, together with an L2 weight decay of 1e-5. Each positive training sample has a duration of 1.9 seconds, which ensures the entire wake-up word is included. Accordingly, in the attention models, the input window was set to $T=189$ frames to cover the length of the wake-up word. We randomly selected 189 contiguous frames from the negative example set to train the attention models. At runtime, the sliding window was set to 100 frames and the frame shift was set to 1. Performance was measured by the FRR at an operating threshold of 1.0 FA per hour and by plotting receiver operating characteristic (ROC) curves. \subsection{Impact of attention mechanism} From Table~\ref{tab:attention} and Fig.~\ref{fig:attention}, we can clearly see the superior performance of the attention models. With a similar number of parameters, the proposed attention models outperform the Deep KWS systems by a large margin. We also note that GRU is preferred over LSTM in both Deep KWS and the attention models. Not surprisingly, the soft attention-based model achieves the best performance. At 1.0 FA/hour, the GRU attention model reduces the FRR from 6.38\% (GRU Deep KWS) down to 1.93\%, a relative false rejection reduction of about 70\%. \begin{table}[t] \caption{\label{tab:RNN} {\it Performance of different encoder architectures with soft attention.
FRR is at 1.0 false alarm (FA) per hour.}} \vspace{2mm} \footnotesize \centerline{ \begin{tabular}{ c c c c c } \hline \textbf{Recurrent Unit} & \textbf{Layer} & \textbf{Node} & \textbf{FRR (\%)} & \textbf{Params (K)} \\ \hline \hline LSTM & $1$ & $64 $ & $4.36 $ & $31.2$ \\ LSTM & $2$ & $64 $ & $3.58 $ & $64.3$ \\ LSTM & $3$ & $64 $ & $3.05 $ & $97.3$ \\ LSTM & $1$ & $128$ & $2.99 $ & $103 $ \\ \hline GRU & $1$ & $64 $ & $3.22 $ & $28.7$ \\ GRU & $2$ & $64 $ & $1.93 $ & $53.4$ \\ GRU & $3$ & $64 $ & $1.99 $ & $78.2$ \\ GRU & $1$ & $128$ & $\textbf{1.49}$ & $77.5$ \\ \hline \end{tabular} }\vspace{-10pt} \end{table} \begin{table}[t] \caption{\label{tab:CRNN} {\it Performance of adding convolutional layers in the GRU (CRNN) attention-based model with soft attention. FRR is at 1.0 false alarm (FA) per hour.}} \vspace{2mm} \footnotesize \centerline{ \begin{tabular}{ c c c c c } \hline \textbf{Channel} & \textbf{Layer} & \textbf{Node} & \textbf{FRR (\%)} & \textbf{Params (K)} \\ \hline \hline $8 $ & $1$ & $64$ & $2.48$ & $52.5$ \\ $8 $ & $2$ & $64$ & $1.34$ & $77.3$ \\ $16$ & $1$ & $64$ & $\textbf{1.02}$ & $84.1$ \\ $16$ & $2$ & $64$ & $1.29$ & $109 $ \\ \hline \end{tabular} }\vspace{-10pt} \end{table} \subsection{Impact of encoder architecture} We further explored the impact of encoder architectures with soft attention. Results are summarized in Table~\ref{tab:RNN}, Fig.~\ref{fig:lstm-attention} and Fig.~\ref{fig:gru-attention}. From Table~\ref{tab:RNN}, we notice that larger models generally perform better than smaller ones (the 3-64 GRU, slightly worse than the 2-64 GRU, is the exception). Among the LSTM models, the 1-128 LSTM achieves the best performance, with an FRR of 2.99\% at 1.0 FA/hour. In Fig.~\ref{fig:lstm-attention}, the ROC curves of the 1-128 LSTM model and the 3-64 LSTM model overlap at low FA-per-hour values, suggesting that widening and deepening the LSTM network achieve a similar effect. However, as Fig.~\ref{fig:gru-attention} shows, the same conclusion does not hold for GRU.
The 1-128 GRU model presents a significant advantage over the 3-64 GRU model. In other words, increasing the number of nodes may be more effective than increasing the number of layers. Finally, the 1-128 GRU model achieves 1.49\% FRR at 1.0 FA/hour. \subsection{Adding convolutional layers} Inspired by \cite{arik2017convolutional}, we finally studied the impact of adding convolutional layers to the GRU attention-based model, as convolutional networks are often used to extract invariant features. For the CRNN attention-based model, we used a one-layer CNN with a $\frac{C(20\times5)}{1\times2}$ filter. We explored different numbers of output channels, and the results are summarized in Table~\ref{tab:CRNN} and Fig.~\ref{fig:crnn-attention}. From Table~\ref{tab:CRNN}, we can see that adding a convolutional layer further improves the performance. We achieve the lowest FRR of 1.02\% at 1.0 FA/hour with 84.1K parameters. Another observation is that the 16-channel models work better than the 8-channel models. By increasing the depth, the 8-2-64 model achieves a substantial gain over the 8-1-64 model, but no extra benefit is observed when deepening the 16-channel models. In summary, Fig.~\ref{fig:rnn-attention} plots the ROC curves for the best three systems. We can see that GRU and CRNN outperform LSTM by a large margin, and the best performance is achieved by CRNN.
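As a concrete sketch of the scoring in Eqs.~(6)-(7), the following numpy snippet pools encoder outputs with soft attention and maps the context vector to a keyword confidence. The dimensions, random stand-in encoder features, and the logistic output layer are illustrative assumptions, not the exact trained model.

```python
import numpy as np

def soft_attention_pool(h, W, b, v):
    """e_t = v . tanh(W h_t + b); alpha = softmax(e); c = sum_t alpha_t h_t."""
    e = np.tanh(h @ W.T + b) @ v           # (T,) scalar score per frame
    e = e - e.max()                        # stabilize the softmax
    alpha = np.exp(e) / np.exp(e).sum()    # normalized attention weights
    return alpha @ h, alpha                # context vector c and weights

rng = np.random.default_rng(1)
T, D, A = 100, 64, 64                      # frames, encoder dim, attention dim
h = rng.standard_normal((T, D))            # stand-in for GRU/CRNN encoder outputs
W = rng.standard_normal((A, D)) * 0.1
b = np.zeros(A)
v = rng.standard_normal(A) * 0.1

c, alpha = soft_attention_pool(h, W, b, v)
U = rng.standard_normal(D) * 0.1
p_keyword = 1.0 / (1.0 + np.exp(-(c @ U)))  # confidence p(y=1)
```

Average attention corresponds to replacing `alpha` with the constant vector `1/T`; the system triggers when `p_keyword` exceeds the preset threshold.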
\begin{figure}[t] \centering \includegraphics[width=\linewidth]{lstm-attention.jpg} \caption{ROCs for LSTM attention-based models with soft attention.}\vspace{-10pt} \label{fig:lstm-attention} \end{figure} \begin{figure}[t] \centering \includegraphics[width=\linewidth]{gru-attention.jpg} \caption{ROCs for GRU attention-based models with soft attention.}\vspace{-10pt} \label{fig:gru-attention} \end{figure} \begin{figure}[t] \centering \includegraphics[width=\linewidth]{crnn-attention.jpg} \caption{ROCs for CRNN attention-based models with soft attention.}\vspace{-10pt} \label{fig:crnn-attention} \end{figure} \begin{figure}[t] \centering \includegraphics[width=\linewidth]{rnn-attention.jpg} \caption{ROCs for different architectures with the soft attention-based model.}\vspace{-10pt} \label{fig:rnn-attention} \end{figure} \section{Conclusions} In this paper, we propose an attention-based end-to-end model for small-footprint keyword spotting. Compared with the Deep KWS system, the attention-based system achieves superior performance. Our system consists of two main sub-modules: the encoder and the attention mechanism. We explored encoder architectures including LSTM, GRU and CRNN. Experiments show that GRU is preferred over LSTM, and the best performance is achieved by CRNN. We also explored two attention mechanisms: average attention and soft attention. Our results show that soft attention performs better than average attention. With $\sim$84K parameters, our end-to-end system achieves an FRR of 1.02\% at 1.0 FA/hour. \section{Acknowledgements} The authors would like to thank Jingyong Hou for helpful comments and suggestions. \clearpage \bibliographystyle{IEEEtran}
\section*{Abstract} \begin{abstract} Effective gravity and gauge fields are emergent properties intrinsic to low-energy quasiparticles in topological semimetals. Here, taking two Dirac semimetals as examples, we demonstrate that applied lattice strain can generate warped spacetime, with fascinating analogues in astrophysics. Particularly, we study the possibility of simulating black-hole/white-hole event horizons and the gravitational lensing effect. Furthermore, we discover strain-induced topological phase transitions, both in the bulk materials and in their thin films. Especially in thin films, the transition between the quantum spin Hall and the trivial insulating phases can be achieved by a small strain, naturally leading to the proposition of a novel piezo-topological transistor device. Possible experimental realizations and an analogue of the Hawking radiation effect are discussed. Our result bridges multiple disciplines, revealing topological semimetals as a unique table-top platform for exploring interesting phenomena in astrophysics and general relativity; it also suggests realistic materials and methods to achieve controlled topological phase transitions with great potential for device applications. \end{abstract} \noindent {Keywords: Topological semimetal, Strain effects, Artificial gravity field, Topological phase transitions, Topological transistor device} \section*{Introduction} Relativity is a fundamental aspect for all elementary particles in the high energy regime. In condensed matter physics, however, the relevant energy scale we probe is much lower (compared with e.g. the electron rest mass), hence the electronic dynamics is usually considered non-relativistic. Nonetheless, due to interactions with the lattice and between electrons themselves, the electron properties in crystalline solids are strongly renormalized, and the resulting low-energy electron quasiparticles can behave drastically differently from free electrons.
Remarkably, in a class of recently discovered topological semimetal materials, the band structures feature nontrivial band-crossings close to the Fermi level, around which the low-energy quasiparticles become massless and resemble relativistic particles. For example, in so-called Weyl semimetals, the Fermi surface consists of isolated band-crossing points, each carrying a topological charge of $\pm 1$ corresponding to its chirality, and the low-energy quasiparticles mimic the Weyl fermions in high-energy physics~\cite{Wan2011,Murakami2007}. With further protection from crystalline symmetry, a pair of Weyl points can be stabilized at the same point (called the Dirac point) in the energy-momentum space, realizing the Dirac semimetal (DSM) phase~\cite{Young2012}. A number of 3D materials have been predicted to host Weyl/Dirac points~\cite{Burkov2011,Lu2013,Xu2014,Yang2014,Weng2015,Huang2015,Li2017}. Some of them, including the DSMs Na$_3$Bi and Cd$_3$As$_2$~\cite{Wang2012,Wang2013}, have been confirmed in recent experiments~\cite{Liu2014,Borisenko2014,Liu2014a,Jeon2014,Xu2015,Lv2015a,Shekhar2015,Yang2015}. These topological semimetals offer a versatile platform for simulating relativistic particles and their many fascinating phenomena~\cite{Nielsen1983,Son2012,Lundgren2014}. Indeed, the nontrivial topology of the band-crossing point dictates the emergence of Lorentz symmetry for the low-energy quasiparticles~\cite{Volovik2003,Horava2005,Zhaoyx2013,Cortijo2016}: the inverse quasiparticle propagator can be written in a general and manifestly covariant form \begin{equation}\label{Ginv} \mathcal{G}^{-1}=\sigma^\alpha e^\mu_\alpha (p_\mu-p_\mu^{(0)}), \end{equation} where $p_\mu$ is the covariant energy-momentum four-vector, $p_\mu^{(0)}$ corresponds to the location of the Weyl point, $\sigma^\alpha=(1,\bm\sigma)$ with $\bm \sigma$ the vector of Pauli matrices, and all material-specific model parameters are encoded in the tetrad field $e^\mu_\alpha$. 
Consequently, the relativistic spectrum follows the equation: \begin{equation}\label{Spec} g^{\mu\nu}(p_\mu-p_\mu^{(0)})(p_\nu-p_\nu^{(0)})=0, \end{equation} where $g^{\mu\nu}=\eta^{\alpha\beta} e^\mu_\alpha e^\nu_\beta$ ($\eta^{\alpha\beta}=\textup{diag}(-1,1,1,1)$) plays the role of an effective spacetime metric and $p_\mu^{(0)}$ acts as an effective $U(1)$ gauge potential, constituting the ``fermionic vacuum'' where the low-energy quasiparticles live. Here $g^{\mu\nu}$ and $p_\mu^{(0)}$ are fully determined by the material band structure, and their spatial and temporal variation may give rise to effective gravity and gauge fields, respectively. As a direct manifestation of the emergent relativistic symmetry, the effective spacetime metric $g^{\mu\nu}$ offers an intriguing possibility to probe effects from general relativity in solid-state systems. Such a possibility has not been explored so far. Meanwhile, topological semimetals represent a neighboring state to various topological quantum phases. In particular, DSMs have long been believed to be an ideal platform for the systematic study of topological phase transitions (TPTs). However, a realistic approach to achieve controllable and reversible TPTs is still missing. Such an approach is much desired because of its significance both in fundamental physics and in potential topological device applications. In this work, we show that both objectives mentioned above can be achieved via lattice strain in topological semimetals. As a notable advantage, solid-state systems admit easy tuning of their properties by strain. Taking two DSMs as concrete examples and with first-principles calculations, we demonstrate that the quasiparticle spectrum can be efficiently controlled by uniaxial strain. We show that an inhomogeneous strain profile can generate warped spacetime with fascinating analogues in astrophysics.
Particularly, we analyze the possibility to simulate black-hole/white-hole event horizons and the gravitational lensing effect. Furthermore, we find that a larger strain can completely change the fermionic vacuum, leading to a TPT between DSM and trivial insulator phases, during which the two Dirac points collide at the $\Gamma$ point and pair-annihilate. More importantly, for a quasi-2D DSM thin film, a small strain is sufficient to control a TPT between quantum spin Hall (QSH) and trivial insulator phases. This is regarded as a key to the realization of a topological transistor. The discovery here allows us to propose a novel piezo-topological transistor device. Thus our work not only establishes bridges between distinct disciplines such as condensed matter physics, astrophysics, and general relativity, but also reveals a realistic platform and methods to study the intriguing topological phase transitions and to achieve unprecedented device functionalities for applications. \section*{Results} \subsection{Strain tuning and TPT in bulk material.} In this work, we take the two DSMs Na$_3$Bi and Cd$_3$As$_2$ as concrete examples to demonstrate our general idea. These two materials represent the first two topological semimetals that have been confirmed by experiment~\cite{Liu2014,Borisenko2014,Liu2014a,Jeon2014}. We choose them as examples because they share a relatively simple low-energy bulk band structure. Both materials have a single pair of Dirac points located at $k_z=\pm k_D$ on the $k_z$-axis near the $\Gamma$ point of the Brillouin zone~\cite{Wang2012,Wang2013}. Each Dirac point is four-fold degenerate, consisting of two Weyl points of opposite chirality. The decoupling of the two Weyl points (hence the stability of the Dirac point) is protected by the $C_3$ ($C_4$) rotational symmetry of Na$_3$Bi (Cd$_3$As$_2$). The band ordering is inverted at the $\Gamma$ point, as it should be for the realization of band-crossing points.
In this work, we focus on the uniaxial strain along the crystalline $c$-axis ($z$-direction) (see Figure 1a), which preserves the corresponding rotational symmetry and hence the existence of the Dirac points. The low-energy effective model can be written for each Dirac point. For example, the quasiparticles around the Dirac point at $k_z=+k_D$ are described by two copies of the Weyl Hamiltonian~\cite{Wang2012,Wang2013}: \begin{equation}\label{Heff} \mathcal{H}_\pm=\pm v_\bot k_x\sigma_x +v_\bot k_y \sigma_y+v_z (k_z-k_D)\sigma_z +w (k_z-k_D), \end{equation} each with a definite chirality corresponding to the subscript of $\mathcal{H}$. Here $v_\bot$ and $v_z$ are the Fermi velocities in the $xy$-plane and along the $z$-axis respectively (we set $\hbar=1$), and the last term in Eq.(\ref{Heff}) tilts the spectrum along $k_z$. The model for the other Dirac point at $k_z=-k_D$ can be simply obtained from (\ref{Heff}) by a time reversal operation. Under moderate uniaxial strain, the form of model (\ref{Heff}) is preserved; only the model parameters change with strain. The parameters can be evaluated by fitting the first-principles band structure. Since the two materials show qualitatively similar behavior, in the following presentation our discussion will be mainly based on the results for Na$_3$Bi. (The first-principles calculation method and the results for Cd$_3$As$_2$ are presented in the Supplementary Information.) The band structures of Na$_3$Bi for several representative strains are shown in Figure 1b-e. Here the strain $\varepsilon=\frac{\ell-\ell_0}{\ell_0}$, where $\ell$ is the lattice parameter along the $c$-axis and $\ell_0$ is its equilibrium value. The values of the model parameters versus strain are plotted in Figure 2. Importantly, one observes that with compressive strain, $k_D$ decreases and approaches zero at a critical strain $\varepsilon_c\sim -6.2\%$.
This means that the locations of the two Dirac points are shifted towards the $\Gamma$ point by strain and collide with each other at $\varepsilon_c$. Beyond this point, the effective model (\ref{Heff}) is no longer valid. From the first-principles band structure (Figure 1c-e), one can see that the two Dirac points annihilate with each other and eventually a finite gap is opened in the spectrum, leading to a band insulator at large strains. In this process, the direct gap at the $\Gamma$ point, which indicates the strength of band inversion, shrinks and changes sign at $\varepsilon_c$, marking a reversal of band ordering around the $\Gamma$ point (from inverted ordering to normal ordering). Thus, the transition at $\varepsilon_c$ represents a TPT from a DSM phase to a trivial insulator phase (the change in Fermi surface topology means that it is also a Lifshitz transition). Before proceeding, we note that, first, the value of critical strain is correlated with the band inversion strength: Cd$_3$As$_2$ has a smaller inverted gap at the $\Gamma$ point than Na$_3$Bi, hence its critical strain ($\sim-1.3\%$) is also smaller, making the TPT comparatively easier to achieve. Second, the TPT completely changes the fermionic vacuum, from that of massless Dirac particles to massive particles with a finite excitation gap, which should be easily probed in spectroscopic or transport experiment. \subsection{Artificial gravity field and astrophysical analogues.} Now let's consider the regime of small strains, for which the system lies within the DSM phase away from the TPT, such that the quasiparticles are well-described by model (\ref{Heff}). Artificial fields are generated when we allow spacetime variation of the applied strain. Here we focus on static strain profiles and require that the strain is slowly-varying on the scale of lattice constant, i.e. 
$|\nabla \varepsilon|\ll \ell_0^{-1}$, such that the strain effect can be captured by a local Hamiltonian with spatially-dependent parameters (as in Eq.(\ref{Heff})), forming a smooth background where the quasiparticles move around. Below, in discussing the quasiparticle propagation in the presence of a nontrivial spacetime metric, we adopt a quasi-classical description, which further requires that the strain be slowly-varying compared with the Fermi wavelength ($|\nabla \varepsilon|\ll \lambda_F^{-1}$). The effective spacetime metric can be obtained by a direct comparison of model (\ref{Heff}) with Eqs.(\ref{Ginv}) and (\ref{Spec}). Then the coordinate differential, which characterizes the spacetime geometry, can be obtained as \begin{equation} ds^2=g_{\mu\nu}dx^\mu dx^\nu=-dt^2+\frac{1}{v_\bot^2}(dx^2+dy^2)+\frac{1}{v_z^2}(dz-wdt)^2. \end{equation} Interestingly, one observes that the tilt parameter $w$ mixes the space and time components. Its effect is like viewing an untilted spectrum (with $w=0$) in a moving reference frame with speed $w$. Indeed, in retrospect, one realizes that the tilt term in Eq.(\ref{Heff}) is just the Doppler shift when (Galilean) transformed to the moving frame. With inhomogeneous static strain, the parameters $v_\bot$, $v_z$ and $w$ become functions of the spatial coordinates. In the following, we consider two simple example configurations. In the first example, we focus on the quasiparticle motion along the $z$-direction, assuming the parameters depend only on $z$ and ignoring the $x$ and $y$ coordinates. With a general coordinate transformation $\bar{t}=t+\int^z w(z')dz'/[v_z^2(z')-w^2(z')]$, the effective metric can be written as \begin{equation}\label{1D} ds^2=-\left[1-\left(\frac{w}{v_z}\right)^2\right]d\bar{t}^2+\frac{1}{v_z^2}\frac{dz^2}{\left[1-\left(\frac{w}{v_z}\right)^2\right]}.
\end{equation} One notes that this metric shares the same form as the radial part of the familiar Schwarzschild metric for a spherical gravitating source~\cite{Volovik2003,Cheng2010}. From the analogy, one can directly obtain the effective gravitational potential $\Phi(z)=-\frac{1}{2}(\frac{w}{v_z})^2$ (here defined as dimensionless, i.e. in units of $v_z^2(\infty)$) and the corresponding gravitational field $-\frac{d}{dz}\Phi \hat{z}$. The analogy can be made more precise if we design the strain profile such that $\Phi(z)\propto -\frac{1}{z}$ (which can be done in a certain region excluding the $z=0$ singularity), simulating the gravity of an object located at $z=0$ with a mass of $-z\Phi(z)/G$, where $G$ is Newton's constant. The Schwarzschild metric has a coordinate singularity at the so-called Schwarzschild radius corresponding to an event horizon, where the space-like and time-like coordinates switch roles~\cite{Cheng2010}. One naturally speculates about the possibility of similar physics here. In Eq.(\ref{1D}), this occurs where the value of $|w/v_z|$ crosses 1. The underlying physics can be easily understood as in Figure 3a. Assume that $v_z,w>0$, and that $w/v_z>1$ ($<1$) for $z<z_h$ ($z>z_h$), with the two regions denoted A (B). Then from the quasiparticle spectrum, in region B we have both left- and right-propagating modes; whereas in region A, since the tilt $w$ dominates over the Fermi velocity $v_z$, the spectrum is tipped over and, as a result, only the right-propagating modes exist (see Figure 3a top panel). This means that any quasiparticle in region A must cross the point $z=z_h$ and be emitted into region B. Therefore this point represents a white-hole horizon for the quasiparticles. In addition, due to time reversal, the quasiparticles at the other Dirac point observe $z=z_h$ as a black-hole horizon: a particle that crosses the horizon from region B to region A cannot get back (see Figure 3a bottom panel).
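As an illustration, the horizon condition $|w/v_z|=1$ can be located numerically for a smooth tilt profile. The functional form and the numbers below are hypothetical placeholders for demonstration, not values fitted to Na$_3$Bi or Cd$_3$As$_2$:

```python
import numpy as np

# Hypothetical strain-induced profiles along z (arbitrary units):
# constant v_z, and a tilt w(z) decaying from super- to sub-critical.
z = np.linspace(-10.0, 10.0, 2001)
v_z = np.ones_like(z)                 # Fermi velocity along z
w = 1.5 / (1.0 + np.exp(z))           # tilt: w -> 1.5 for z << 0, -> 0 for z >> 0

ratio = w / v_z
# Horizon z_h: where |w/v_z| crosses 1 (here, w/v_z = 1 at z = ln(1/2)).
i = np.argmin(np.abs(ratio - 1.0))
z_h = z[i]
# Region A (z < z_h): tipped-over cone, co-propagating modes only;
# region B (z > z_h): both propagation directions exist.
```

For this profile the quasiparticles of one Dirac point see $z_h$ as a white-hole horizon and, by time reversal, those of the other Dirac point see it as a black-hole horizon.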
For the two DSM materials considered here, we do not find the case with $|w/v_z|>1$ (at least for the uniaxial strain considered here). In fact, this case corresponds to so-called type-II Weyl/Dirac points, which have recently been predicted in several materials~\cite{Soluyanov2015,Xu2015b,Ruan2016,Chang2016}. Hence, according to our analysis, an event horizon can in principle be realized at the boundary between type-I and type-II regions. It should be noted that for type-II materials, the Fermi surface geometry completely changes, typically involving multiple electron and hole pockets. The above discussion holds only for the quasiparticles close to the Weyl/Dirac point; quasiparticles away from the point will not necessarily perceive the event horizon. Nevertheless, the horizon in such a case still has physical meaning as the phase boundary separating two fermionic vacua with different topology of the spectrum: with a Fermi point on one side and with a Fermi surface on the other side. In the second example, we focus on the quasiparticle motion in the $xy$-plane (ignoring the $z$ dimension) and consider the analogue of the gravitational lensing effect. Then, in terms of polar coordinates, we find that \begin{equation} ds^2=-\left[1-\left(\frac{w}{v_z}\right)^2\right]dt^2+\frac{1}{v_\bot^2}(d\rho^2+\rho^2 d\phi^2). \end{equation} Following the standard procedure~\cite{Cheng2010}, one finds the quasiparticle effective ``speed of light'' viewed by a remote observer at $\rho=\infty$, \begin{equation} c(\bm \rho)=v_\bot\sqrt{1-\left(\frac{w}{v_z}\right)^2}. \end{equation} Then the propagation of the quasiparticle can be conveniently described by an effective vacuum index of refraction $n(\bm \rho)=c(\rho=\infty)/c(\bm\rho)$, naturally leading to the bending of particle trajectories (corresponding to geodesics in the warped spacetime) under inhomogeneous strains, as in geometric optics. One notes that here the same $n$ holds for both Dirac points.
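A small numeric sketch of this lensing analogue follows; the potential strength $a$ and the closest-approach radius are dimensionless numbers made up for illustration:

```python
import numpy as np

def refractive_index(w_over_vz, v_perp, v_perp_inf, w_over_vz_inf):
    """n(rho) = c(inf)/c(rho), with c = v_perp * sqrt(1 - (w/v_z)^2)."""
    c_inf = v_perp_inf * np.sqrt(1.0 - w_over_vz_inf**2)
    c_loc = v_perp * np.sqrt(1.0 - w_over_vz**2)
    return c_inf / c_loc

# Weak-field analogue: design the strain so that n(rho) = 1/(1 + 2*Phi(rho))
# with Phi(rho) = -a/rho (hypothetical dimensionless "GM").
a = 0.01
rho_min = 2.0                       # closest approach ~ impact parameter
delta_phi = 4.0 * a / rho_min       # deflection: delta_phi ≈ -4*Phi(rho_min)
```

A strained region with $n>1$ bends trajectories toward the source (gravitating); $n<1$ corresponds to the anti-gravitating case where trajectories are repelled.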
The variation of $n$ versus strain is plotted in Figure 3b. To simulate the real gravitational lensing in astronomy, one can design a strain profile (using Figure 3b), such that $n(\rho)=[1+2\Phi(\rho)]^{-1}$ with $\Phi(\rho) \propto -\frac{1}{\rho}$ in a region excluding the singularity at $\rho=0$. Assuming the quasiparticle trajectory lies in this region and the variation of $n$ is small and smooth, the classical gravitational lensing result directly applies~\cite{Cheng2010}: the deflection angle $\delta \phi$ of a quasiparticle trajectory (see Figure 3c) with the closest approaching distance $\rho_{min}$ ($\approx$ the impact parameter) is given by \begin{equation} \delta\phi\approx -4\Phi(\rho_{min}). \end{equation} Interestingly, since one can control both the vacuum state at $\rho=\infty$ and the sign of applied strain, it is possible to realize $n(\rho)<1$, which corresponds to an anti-gravitating source, for which the quasiparticle trajectories are repelled from the source (Figure 3d). \subsection{TPT in thin film and piezo-topological transistor.} The TPT in the bulk typically requires a large strain (which is still within the linear elastic regime for the two materials, as shown in Supplementary Information). In the following, we show that a related TPT is more readily achievable in a DSM thin film by small strains. First of all, one notes that in the bulk band structure of both DSMs, the gap is inverted in-between the two Dirac points, i.e., considering a 2D slice of the bulk Brillouin zone perpendicular to the $k_z$-axis, its gap is inverted if the slice lies in-between the two Dirac points, and is non-inverted otherwise. For a Na$_3$Bi (or Cd$_3$As$_2$) thin film confined in $z$-direction~\cite{Hellerstedt2016,Moll2016}, the electron motion along $z$ is quantized into discrete quantum well levels, forming quantum well subbands in the spectrum~\cite{Xiao2015,Pan2015}. The resulting system generally becomes semiconducting. 
In the quantum well approximation~\cite{Liu2010}, each subband corresponds to a 2D slice of the original 3D band structure with an effective wave-vector $(\tilde{k}_z)_m=m\pi/L$, where the integer $m(=1,2,\cdots)$ labels the subbands and $L$ is the film thickness. Thus the gap of the $m$th subband is inverted (non-inverted) if $(\tilde{k}_z)_m<k_D$ ($>k_D$). Each inverted subband contributes a nontrivial 2D $\mathbb{Z}_2$-invariant~\cite{Shen2012}, so that the quasi-2D thin film becomes a QSH insulator if there is an odd number of inverted subbands, and is a trivial insulator if this number is even. This mechanism has been revealed in the topological phase oscillation versus $L$~\cite{Wang2013,Xiao2015}. Now, under uniaxial strain, both $k_D$ and $(\tilde{k}_z)_m$ will change. Consider the case when the $m$th subband has the smallest gap and $(\tilde{k}_z)_m<k_D$. With compressive strain, $(\tilde{k}_z)_m$ increases, whereas $k_D$ decreases. So there must exist a critical strain $\tilde{\varepsilon}_c$ where $(\tilde{k}_z)_m$ and $k_D$ cross each other (see Figure 4a). In the process, the subband gap closes and re-opens with a switch of band ordering. This changes the $\mathbb{Z}_2$ character of the whole system by 1, leading to a TPT between the QSH insulator and trivial insulator phases, as illustrated in Figure 4b. Since this TPT does not require $k_D$ to vanish as in the bulk case, the required strain can be much smaller (in the example shown in Figure 4, the critical strain is $\sim-1\%$). The QSH state possesses topologically-protected gapless edge channels, in which the carriers can transport without back-scattering~\cite{Hasan2010,Qi2011}. This leads to the proposition of a so-called topological transistor based on QSH channel materials, which is expected to have the advantages of fast operating speed, low heat dissipation, and low power consumption~\cite{Qian2014}.
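The parity-switch mechanism can be illustrated with a toy calculation: count the subbands with $(\tilde{k}_z)_m = m\pi/L$ below $k_D$ and take the parity. All numbers here ($k_D$ at zero strain, the critical strain, the film thickness, the linear $k_D(\varepsilon)$ model) are hypothetical placeholders, not fitted material values.

```python
import numpy as np

def n_inverted_subbands(strain, k_D0=0.10, eps_c=-0.062, L0=100.0):
    """Count subbands m with (k_z)_m = m*pi/L below the Dirac point k_D.
    Toy model: k_D shrinks linearly to zero at the critical strain eps_c,
    while the film thickness follows the strain, L = L0*(1 + strain).
    Units: Angstrom (k in 1/Angstrom)."""
    k_D = k_D0 * (1.0 - strain / eps_c)       # -> 0 at strain = eps_c
    L = L0 * (1.0 + strain)
    return int(np.floor(k_D * L / np.pi))

# Z2 index of the film = parity of the number of inverted subbands:
z2_unstrained = n_inverted_subbands(0.0) % 2    # odd  -> QSH insulator
z2_strained = n_inverted_subbands(-0.01) % 2    # even -> trivial insulator
```

In this toy model a compressive strain of only $-1\%$ flips the parity, mirroring the film TPT described above.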
So far, the QSH state has been confirmed only in a few quantum well structures~\cite{Hasan2010,Qi2011}, and how to reliably control the TPT (hence the switch between on and off states) remains a challenge for designing a topological transistor~\cite{Pan2015}. Our discovery here points to a promising novel device---a piezo-topological transistor. As illustrated in Figure 5, this device has a DSM thin film as the channel. Suppose the thin film without strain is in the trivial insulator phase, for which the transistor is in the off state (Figure 5a). By applying a small strain, the layer can be driven into the QSH phase, with current conduction through topological edge channels, corresponding to the on state (Figure 5b). Like previous proposals, this device enjoys advantages such as low dissipation and robust operation; in addition, the sensitivity to strain makes it promising for electromechanical sensing applications. \section*{Discussion} The emergence of a relativistic spectrum and Lorentz invariance at low energy is a general feature dictated by the Fermi point topology. Here we take the two specific materials as examples, but the underlying physics is general and applies to other topological semimetals as well. A similar relativistic spectrum was previously discussed for the superfluid $^3$He-A phase, where analogues of black holes were suggested by controlling vortices and the background superfluid flow~\cite{Unruh1981,Volovik2003,Volovik2016}. In a recent experiment, an artificial black hole for acoustic waves was simulated in an accelerating atomic Bose-Einstein condensate at sub-Kelvin temperature~\cite{Steinhauer2016}. In comparison, the solid-state system studied here can work at room temperature and permits much easier control by static means such as strain or external fields, hence the predicted effects should be more readily observable.
Many experimental techniques have been developed for engineering strain, such as nanoindentation, micro-compression testing, and the use of profiled or stretchable substrates for thin-film structures. Specifically, regarding the two examples with inhomogeneous strain profiles discussed here, the first one may be realized with a setup similar to the micro-pillar compression test~\cite{Jiang2010}, and the strain variation along the vertical direction could be achieved either by engineering the geometric shape of the pillar or by applying a profiled lateral constriction to the side of the pillar. For the second example, an $xy$-dependent strain profile may be achieved with a setup similar to nanoindentation, e.g. by using a hard tip with an engineered shape to press onto the Dirac semimetal slab. We notice that this kind of setup has recently been utilized to study the pressure-induced superconductivity in Cd$_3$As$_2$~\cite{Wang2016a}. Furthermore, we point out that the lattice strain would generally be inhomogeneous in real strained samples, and more complicated strain profiles could be naturally realized. Given that current technology can map out strain distributions with nanometer spatial resolution and with a precision of $\sim0.1\%$~\cite{Hytch2008}, as long as the strain is slowly-varying, one can compute the effective metric and study the quasiparticle propagation under artificial gravity using the ideas presented here. We have mentioned that the spatial and/or temporal variation of the Weyl/Dirac point location will give rise to effective gauge fields. Here, $k_D$ in Eq.(\ref{Heff}) acts as the $z$-component of a vector potential; its spatial variation generally leads to a pseudo-magnetic field in the $xy$-plane. For example, for a strain varying along the $x$-direction, the induced pseudo-magnetic field is along the $y$-direction, with a magnitude given by $B=\frac{1}{e}\frac{dk_D}{d\varepsilon}\frac{d\varepsilon}{dx}$.
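A rough numeric evaluation of this expression (restoring $\hbar$, which is set to 1 in the text) follows; the slope $dk_D/d\varepsilon$ is an assumed value, chosen only to be consistent with the field strength the text quotes for a 3\% strain variation over 10 nm in Na$_3$Bi:

```python
# SI constants
hbar = 1.054571817e-34   # J*s
e = 1.602176634e-19      # C (elementary charge)

# Assumed slope of the Dirac-point position with strain (illustrative):
dkD_deps = 0.84e10       # 1/m per unit strain (~0.84 1/Angstrom)
deps_dx = 0.03 / 10e-9   # 3% strain variation over 10 nm

# Pseudo-magnetic field B = (hbar/e) * (dk_D/d eps) * (d eps/dx), in tesla
B = (hbar / e) * dkD_deps * deps_dx
```

With these numbers $B$ comes out in the mid-teens of tesla, matching the order of magnitude stated in the text; the field points in opposite directions for the two Dirac points, as time reversal requires.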
For a strain variation of $3\%$ over 10 nm, the field strength can be up to 16.5 T for Na$_3$Bi. It should be noted that the field direction is reversed for the other Dirac point, as required by the time reversal symmetry. Similar pseudo-magnetic field has been studied particularly in strained graphene~\cite{Levy2010}, and recently also discussed in topological semimetals~\cite{Cortijo2015,Pikulin2016}. Our treatment of the artificial gravity here is within a quasi-classical approach, which leaves out possible analogues of interesting quantum mechanical effects in curved spacetime. For example, Hawking radiation~\cite{Hawking1974}, the radiation of particles from the black-hole horizon stimulated by quantum vacuum fluctuation, may find analogue effects here. Assuming the presence of a black-hole horizon in a topological semimetal setting (as in Figure 3a), the quantum tunneling process as illustrated in Figure 6 can be interpreted as the creation of a pair of fermions due to quantum fluctuation: a quasiparticle outside the horizon and a quasihole inside the horizon. The quasiparticle escapes from the horizon, resembling the Hawking radiation. With the metric in Eq.(\ref{1D}) and by direct analogy~\cite{Hawking1974,Volovik2003}, one can find the associated Hawking temperature $T_\mathrm{H}=\frac{\hbar}{2\pi k_B}|\frac{d}{dz}(v_z-w)|_{z_h}$ for the radiation. Nevertheless, an accurate study of these quantum effects and the methods to probe them is beyond the scope of this work and deserves a future investigation. Finally, we note that a similar semimetal-to-insulator TPT was predicted in bulk Na$_3$Bi$_{1-x}$Sb$_x$ and Cd$_3$[As$_{1-x}$P$_x$]$_2$ alloys by tuning the substitute concentrations~\cite{Narayan2014}. In comparison, the strain approach here has the obvious advantage of allowing a reversible and continuous TPT to be achieved. 
For applications, the TPT in the thin film is particularly appealing because of the dissipationless, spin-filtered transport associated with the QSH edge channels. By controlling the spatial strain profile, one can envision making a thin film with patterned QSH regions and trivial insulating regions, forming a topological circuit with designed 1D helical spin channels. This would provide an ideal platform for integrating various topological devices and functionalities to achieve unprecedented performance. \begin{addendum} \item [Acknowledgements] The authors thank D.L. Deng for helpful discussions. This work was supported by the MOST Project of China (Nos 2014CB920903 and 2016YFA0300603), the National Natural Science Foundation of China (Grant Nos 11574029, 11225418, 11374009, 61574123, and 21373184), National Key Basic Research Program of China (2012CB825700), and Singapore MOE Academic Research Fund Tier 1 (SUTD-T1-2015004) and Tier 2 (MOE2015-T2-2-144). \item [Author Contributions] S.G., G.-B.L., and Y.H.L. performed the first-principles calculation and the data analysis. Z.-M.Y., Y.L., L.D., and S.A.Y. performed the analytical modeling and calculation. Y.Y. and S.A.Y. supervised the work. All authors contributed to the discussion and reviewed the manuscript. \item [Competing Interests] The authors declare no competing financial interests. \item [Correspondence] Correspondence should be addressed to Yugui Yao or Shengyuan A. Yang. \item [Additional Information] Supplementary information (including the details of the first-principles method, the DFT results for Cd$_3$As$_2$, mechanical properties of the two DSMs, and the low-energy effective model) is available in the online version of the paper. \end{addendum}
\section{Introduction} Let $S$ be a sequence of elements in a finite group $G$ of order $n$, written multiplicatively. We say that $S$ \emph{represents} $G$ if every element of $G$ can be expressed as the (ordered) product of a subsequence of~$S$. Ideally, we want~$S$ to be short, say of length $k=d\log_2 n$ for some constant $d$ known as the \emph{density} of $S$. In order for $S$ to represent $G$, we clearly require $d\ge 1$, and for sufficiently large~$n$, any $d>1$ suffices. More precisely, Babai and Erd\H{o}s~\cite{babai-erdos} show that for all \[ k \ge \log_2 n + \log_2 \log n + 2 \] there exists a sequence $S$ of length $k$ that represents~$G$. Their proof is non-constructive, but, in the case that $G$ is abelian, Erd\H{o}s and R\'{e}nyi~\cite{erdos-renyi} show that a randomly chosen sequence of length \[ k = \log_2 n + \log_2 \log n + \omega_n \] represents $G$ with probability approaching $1$ as $n\to\infty$, provided that $\omega_n\to\infty$. The randomness assumption is necessary, since it takes much larger values of $k$ to ensure that \emph{every} sequence of length $k$ represents $G$; see~\cite{eggleton-erdos,white}. In related work, Impagliazzo and Naor prove that for a random sequence~$S$ of density $d>1$, the distribution of subsequence products almost surely converges to the uniform distribution on $G$ as $n$ goes to infinity~\cite[Proposition~4.1]{impagliazzo-naor}. This result allows us to bound the complexity of our algorithm for almost all~$S$ with $d > 4$. \medskip Given a sequence $S$ that represents $G$ (or a large subset of $G$), we wish to find an explicit representation of a given group element $z$ as the product of a subsequence of $S$; we call this a \emph{short product representation} of~$z$. In the special case that $G$ is abelian and the elements of $S$ are distinct, this is the \emph{subset sum problem} in a finite group.
Variations of this problem and its decision version have long been of interest to many fields: complexity theory~\cite{karp}, cryptography~\cite{merkle-hellman}, additive number theory~\cite{babai-erdos}, Cayley graph theory~\cite{alon-milman}, and information theory~\cite{alon-barak-manber}, to name just a few. As a computational framework, we work with a generic group $G$ whose elements are uniquely identified, and assume that all group operations are performed by a black box that can also provide random group elements; see~\cite[Chapter~1]{sutherland:thesis} for a formal model. Time complexity is measured by counting group operations (calls to the black box), and for space complexity we count the number of group elements that are simultaneously stored. In most practical applications, these metrics are within a polylogarithmic factor of the usual bit complexity. Working in this model ensures that our algorithms apply to any finite group for which a suitable black box can be constructed. It also means that finding short product representations is provably hard. Indeed, the discrete logarithm problem in a cyclic group of prime order has a lower bound of $\Omega(\sqrt{n})$ in the generic group model~\cite{shoup}, and is easily reduced to finding short product representations. In the particular group $G=\mathbb Z/n\mathbb Z$, we note that finding short product representations is easier for non-generic algorithms: the problem can be lifted to $k$ subset sum problems in $\mathbb Z$, which for suitable inputs can be solved with a time and space complexity of $O(n^{0.3113})$ via~\cite{howgravegraham-joux}, beating the $\Omega(\sqrt{n})$ generic lower bound noted above. This is not so surprising, since working with integers is often easier than working in generic groups; for instance, the discrete logarithm problem in $\mathbb Z$ corresponds to integer division and can be solved in quasi-linear time. 
\medskip A standard technique for solving subset sum problems in generic groups uses a baby-step giant-step approach, which can also be used to find short product representations (Section~\ref{sec:BSGS}). This typically involves $O(2^{k/2})$ group operations and storage for $O(2^{k/2})$ group elements. The space bound can be improved to $O(2^{k/4})$ via a method of Schroeppel and Shamir~\cite{schroeppel-shamir}. Here, we give a Pollard-$\rho$ type algorithm~\cite{pollard} for finding short product representations in a finite group (Section~\ref{sec:pollard}). It only needs to store $O(1)$ group elements, and, assuming $S$ is a random sequence of density $d>4$, we prove that its expected running time is $O(\sqrt{n}\log{n})$ group operations; alternatively, by dedicating $O(n^\epsilon)$ space to precomputations, the time complexity can be reduced to $O(\sqrt{n})$ (Section~\ref{sec:analysis}). We also consider two applications: representing elements of the class group of an imaginary quadratic number field as short products of prime ideals with small norm (Section~\ref{sec:relations}), and finding an isogeny between two elliptic curves defined over a finite field (Section~\ref{sec:isogenies}). For the latter, our method combines the advantages of~\cite{galbraith} and~\cite{galbraith-hess-smart} in that it requires little memory and finds an isogeny that can subsequently be evaluated in polynomial time. In practice, our algorithm performs well so long as $d \ge 2$, and its low space complexity allows it to feasibly handle much larger problem instances than other generic methods (Section~\ref{sec:comput}). \section{Algorithms} Let $S$ be a sequence of length $k$ in a finite group $G$ of order $n$, let $z$ be an element of $G$, and let $\mathcal P(S)$ denote the set of all subsequences of~$S$. Our goal is to find a preimage of $z$ under the product map $\pi:\mathcal P(S)\to G$ that sends a subsequence of~$S$ to the (ordered) product of its elements. 
\subsection{Baby-step giant-step}\label{sec:BSGS} Let us first recall the baby-step giant-step method. We may express~$S=AB$ as the concatenation of two subsequences of roughly equal length. For any sequence $y=(y_1,\ldots,y_m)$, let $\mu(y) = (y_m^{-1},\ldots,y_1^{-1})$, so that $\pi(y)$ and $\pi(\mu(y))$ are inverses in~$G$. We then search for $x\in\mathcal P(A)$ (a baby step) and $y\in\mathcal P(B)$ (a giant step) which ``collide'' in the sense that $\pi(x) = \pi(z\mu(y))$, where $z\mu(y)$ denotes the sequence $(z,y_m^{-1},\ldots,y_1^{-1})$. \smallskip \begin{quote} \textsc{Baby-step giant-step Algorithm}\\ \textbf{\scshape Input:} A finite sequence $S$ in a group $G$ and a target $z\in\pi(\mathcal P(S))$.\\ \textbf{\scshape Output:} A subsequence of $S$ whose product is~$z$.\\ \begin{tabular}{rl} 1. & Express $S$ in the form $S=AB$ with $\#A\approx \#B$. \\ 2. & For each $x\in\mathcal P(A)$, store $(\pi(x),x)$ in a table indexed by $\pi(x)$. \\ 3. & For each $y\in\mathcal P(B)$: \\ 4. & \qquad Lookup $\pi(z\mu(y))$ in the table computed in Step~2. \\ 5. & \qquad If $\pi(z\mu(y))=\pi(x)$ is found then output $xy$, otherwise continue. \end{tabular} \end{quote} \smallskip The table constructed in Step~2 is typically implemented as a hash table, so that the cost of the lookup in Step~4 is negligible. Elements of $\mathcal P(A)$ and $\mathcal P(B)$ may be compactly represented by bit-strings of length $\lceil k/2\rceil = O(\log n)$, which is approximately the size of a single group element. If these bit-strings are enumerated in a suitable order, each step can be derived from the previous step using $O(1)$ group operations\footnote{With a Gray code, exactly one group operation is used per step, see~\cite{knuth-art4f2}.}. The algorithm then performs a total of $O(2^{k/2})$ group operations and has a space complexity of $O(2^{k/2})$ group elements. One can make a time-space trade off by varying the relative sizes of $A$ and $B$. 
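As a concrete illustration, here is a minimal sketch of this baby-step giant-step search in the additive group $\mathbb Z/n\mathbb Z$, where subsequence products become subset sums. The parameters match the toy example below ($n=127$, small powers of $3$ and $5$); the exhaustive enumeration via \texttt{combinations} stands in for the Gray-code enumeration described above, so this is a sketch of the logic rather than an optimized implementation.

```python
# Baby-step giant-step search for a short product representation in the
# additive group Z/nZ (so the "product" is a subset sum). The table of
# baby steps is a plain dict keyed by pi(x); parameters are illustrative.
from itertools import combinations

def bsgs_subset_sum(S, z, n):
    """Return a subsequence of S whose sum is z mod n, or None."""
    half = len(S) // 2
    A, B = S[:half], S[half:]
    table = {}                                  # baby steps over P(A)
    for r in range(len(A) + 1):
        for x in combinations(A, r):
            table.setdefault(sum(x) % n, x)
    for r in range(len(B) + 1):                 # giant steps over P(B)
        for y in combinations(B, r):
            x = table.get((z - sum(y)) % n)     # lookup pi(z mu(y))
            if x is not None:                   # () is a valid (empty) x
                return x + y
    return None

n, z = 127, 2
S = [3**i for i in range(1, 7)] + [5**i for i in range(1, 7)]
rep = bsgs_subset_sum(S, z, n)
assert rep is not None and sum(rep) % n == z
```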
This algorithm has the virtue of determinism, but its complexity $O(n^{d/2})$ is exponential in the density $d$ (as well as $\log n$). For $d > 1$, a randomized approach works better: select $\sqrt{n}$ baby steps $x\in\mathcal P(A)$ at random, then select random giant steps $y\in\mathcal P(B)$ until a collision $\pi(z\mu(y))=\pi(x)$ is found. Assuming that $\pi(x)$ and $\pi(z\mu(y))$ are uniformly distributed in $G$, we expect to use $\sqrt{n}$ giant steps. To reduce the cost of each step, one may partition $A$ and $B$ each into approximately $d$ subsequences $A_i$ and $B_i$ and precompute $\pi(x)$ for all $x\in\mathcal P(A_i)$, and $\pi(\mu(y))$ for all $y\in\mathcal P(B_i)$. This yields an expected running time of $O(\sqrt{n})$ group operations, using storage for $O(\sqrt{n})$ group elements, for any fixed $d$. \subsection{A low-memory algorithm}\label{sec:pollard} In order to use the Pollard-$\rho$ technique, we need a pseudo-random function $\phi$ on the disjoint union $\mathcal C=\AA\sqcup\mathcal B$, where $\AA=\mathcal P(A)$ and $\mathcal B$ is the set $\{z\mu(y):y\in\mathcal P(B)\}$. This map $\phi$ is required to preserve collisions, meaning that $\pi(x)=\pi(y)$ implies $\pi(\phi(x))=\pi(\phi(y))$. Given a hash function $\eta:G\to\mathcal C$, we may construct such a map as $\phi=\eta\circ\pi$. Under suitable assumptions (see Section~\ref{sec:analysis}), the Pollard-$\rho$ method can then be applied. \smallskip \begin{quote} \textsc{Pollard-$\rho$ Algorithm}\\ \textbf{\scshape Input:} A finite sequence $S$ in a group $G$ and a target $z\in \pi(\mathcal P(S))$.\\ \textbf{\scshape Output:} A subsequence of $S$ whose product is~$z$.\\ \begin{tabular}{rl} 1. & Pick a random element $w\in \mathcal C$ and a hash function $\eta:G\to \mathcal C$. \\ 2. & Find the least $i > 0$ and $j \ge 0$ such that $\phi^{(i+j)}(w)=\phi^{(j)}(w)$. \\ 3. & If $j=0$ then return to Step~1. \\ 4. & Let $s=\phi^{(i+j-1)}(w)$ and let $t=\phi^{(j-1)}(w)$. \\ 5. 
& If $\pi(s) \ne \pi(t)$ then return to Step~1. \\ 6. & If $s\in\AA$ and $t=z\mu(y)\in\mathcal B$ then output $sy$ and terminate. \\ 7. & If $t\in\AA$ and $s=z\mu(y)\in\mathcal B$ then output $ty$ and terminate. \\ 8. & Return to Step~1. \end{tabular} \end{quote} \smallskip Step~2 can be implemented with Floyd's algorithm~\cite[Exercise~3.1.6]{knuth-art2} using storage for just two elements of $\mathcal C$, which fits in the memory space of $O(1)$ group elements. More sophisticated collision-detection techniques can reduce the number of evaluations of $\phi$ while still storing $O(1)$ elements, see~\cite{brent,sedgewick,teske}. We prefer the method of \emph{distinguished points}, which facilitates a parallel implementation~\cite{vanoorschot-wiener}. \subsection{Toy example} Let $G=(\mathbb Z/n\mathbb Z,+)$ and define $S$ as the concatenation of the sequences $A=(3^i)$ and $B=(5^i)$ for $i\in\{1,\ldots,k/2\}$. We put $n=127$ and $k=12$, implying $d\approx 1.7$. With $\mathcal C=\AA\sqcup\mathcal B$ as above, we define $\eta:G\to \mathcal C$ via \[ x\longmapsto\left\{\begin{array}{cl} (A_i)_{\{i:b_i=1\}}&\text{when }b_0=1\\ z\mu\left((B_i)_{\{i:b_i=1\}}\right)&\text{when }b_0=0 \end{array}\right. \] where $\sum_{i=0}^{k/2} b_i2^i$ is the binary representation of $96x \bmod n$. 
Starting from $w=(2,-5^6,-5^3,-5^2,-5)$, the algorithm finds $i=4$ and $j=6$: \smallskip \begin{center} \tiny \begin{tikzpicture} \foreach \i/\s/\x/\y in { 0/{2,-5^6,-5^3,-5^2,-5}/0/0.85, 1/{3^3,3^5}/0/1.7, 2/{2,-5^5,-5^4}/2/1.7, 3/{2,-5^6,-5^5,-5^4,-5^2,-5}/5/1.7, 4/{3^2,3^4}/7.8/1.7, 5/{2,-5^5}/9.5/1.7, 6/{3,3^2,3^5}/10.5/0.85, 7/{2,-5^2,-5}/10/0, 8/{2,-5^6,-5^4,-5^2,-5}/7.3/0, 9/{3,3^2,3^3,3^5}/6.8/0.85} \node (G-\i) at (\x,\y) {$\left(\s\right)$}; \foreach \from/\to in {0/1,1/2,2/3,3/4,4/5,5/6,6/7,7/8,8/9,9/4} \draw[black,->] (G-\from) -- (G-\to); \end{tikzpicture} \normalsize \end{center} \smallskip \noindent The two preimages of $(3^2,3^4)$ yield the short product representation \[ 2\equiv 3+3^2+3^3+3^5+5+5^2+5^4+5^5+5^6\bmod 127. \] \section{Analysis}\label{sec:analysis} The Pollard-$\rho$ approach is motivated by the following observation: if $\phi:X\to X$ is a random function on a set $X$ of cardinality $n$, then the expected size of the orbit of any $x\in X$ under the action of $\phi$ is $\sqrt{\pi n/2}$ (see~\cite{sobol-random-sequences} for a rigorous proof). In our setting, $X$ is the set $\mathcal C$ and $\phi=\eta\circ\pi$. Alternatively, since $\phi$ preserves collisions, we may regard $X$ as the set $\pi(\mathcal C)\subset G$ and use $\varphi=\pi\circ \eta$. We shall take the latter view, since it simplifies our analysis. Typically the function $\varphi$ is not truly random, but under a suitable set of assumptions it may behave so. To rigorously analyze the complexity of our algorithm, we fix a real number $d>4$ and assume that: \begin{enumerate} \item the hash function $\eta:G\to \mathcal C$ is a random oracle; \item $S$ is a random sequence of density~$d$. \end{enumerate} For any finite set $U$, let $\mathbb U_U$ denote the uniform distribution on $U$, which assigns to each subset $X$ of $U$ the value $\#X/\#U$. 
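The toy search above can be made runnable. In the sketch below, $G=\mathbb Z/127\mathbb Z$ is written additively, with $A=(3^1,\ldots,3^6)$, $B=(5^1,\ldots,5^6)$ and $z=2$ as in the example; elements of $\mathcal C$ are encoded as a side marker plus a bitmask. Instead of the specific hash $x\mapsto 96x \bmod n$, the map $\eta$ is drawn as a random affine map on each restart (an illustrative simplification), so the orbit found differs from the one pictured, but any successful run yields a valid short product representation of $z$.

```python
# Pollard-rho search in the toy setting: G = Z/127Z written additively.
import random

n, z = 127, 2
A = [3**i for i in range(1, 7)]
B = [5**i for i in range(1, 7)]

def pi(c):
    # c = (side, mask) encodes an element of C: a subsequence of A, or
    # z mu(y) for a subsequence y of B.
    side, mask = c
    s = sum(e for i, e in enumerate(A if side == 'A' else B) if mask >> i & 1)
    return s % n if side == 'A' else (z - s) % n

def make_eta():
    # A random affine map standing in for the hash eta : G -> C.
    a, b = random.randrange(1, n), random.randrange(n)
    def eta(g):
        v = (a * g + b) % n                # 7 pseudo-random bits
        return ('A' if v & 1 else 'B', v >> 1)
    return eta

def floyd(phi, w):
    # Find distinct s, t with phi(s) == phi(t), or None if w lies on a cycle.
    tort, hare = phi(w), phi(phi(w))
    while tort != hare:
        tort, hare = phi(tort), phi(phi(hare))
    s, t = w, hare
    if s == t:
        return None                        # j = 0: restart (Step 3)
    while phi(s) != phi(t):                # stop one step before they merge
        s, t = phi(s), phi(t)
    return s, t

random.seed(1)
rep = None
for _ in range(10_000):                    # random restarts (Steps 1, 3, 5, 8)
    eta = make_eta()
    phi = lambda c: eta(pi(c))
    w = (random.choice('AB'), random.randrange(64))
    pair = floyd(phi, w)
    if pair is None:
        continue
    s, t = pair
    if pi(s) == pi(t) and s[0] != t[0]:    # Steps 5-7: a useful collision
        x, y = (s, t) if s[0] == 'A' else (t, s)
        rep = ([e for i, e in enumerate(A) if x[1] >> i & 1]
               + [e for i, e in enumerate(B) if y[1] >> i & 1])
        break

assert rep is not None and sum(rep) % n == z
```

Note that since $\pi(x)=\pi(z\mu(y))$ in the additive setting means $\sum x = z - \sum y \pmod n$, the concatenated list `rep` sums to $z$, as the final assertion checks.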
For any function $f:U\to V$, let $f_*\mathbb U_U$ denote the \emph{pushforward distribution} by $f$ of $\mathbb U_U$, which assigns to each subset~$Y$ of~$V$ the value \[ f_*\mathbb U_U(Y) = \frac{\#\{u\in U: f(u)\in Y\}}{\#U}. \] Assumption~(2) implies that $A$ and $B$ are both random sequences with density greater than~$2$. By~\cite[Proposition~4.1]{impagliazzo-naor}, this implies that \[ \operatorname{Prob}_A\left[ \left\|\pi_*\mathbb U_{\AA}-\mathbb U_{G}\right\|\geq n^{-c} \right]\leq n^{-c}, \] where $c=(d-2)/4 > 1/2$, and the \emph{variation distance} $\|\sigma-\tau\|$ between two distributions $\sigma$ and $\tau$ on $G$ is defined as the maximum value of $|\sigma(H)-\tau(H)|$ over all subsets $H$ of~$G$. Similarly, we have \[ \operatorname{Prob}_B\left[ \left\|\pi_*\mathbb U_{\mathcal B}-\mathbb U_{G}\right\|\geq n^{-c} \right]\leq n^{-c}. \] {}From now on we assume that $S$ is fixed and that $\pi_*\mathbb U_C$ is within variation distance $2n^{-c}$ of the uniform distribution on $G$; by the argument above, this happens with probability at least~$1-2n^{-c}$. Recall that a \emph{random oracle} $\eta:G\to \mathcal C$ is a random function drawn uniformly from $\mathcal C^G$, that is, each value $\eta(x)$ is drawn uniformly and independently from $\mathcal C$. Thus, for any $g\in G$, the distribution of $\pi(\eta(g))$ is $\pi_*\mathbb U_C$. It is then easy to verify that \[ \left\|(\eta\mapsto \pi\circ \eta)_*\mathbb U_{\mathcal C^G}-\mathbb U_{G^G}\right\|\leq 2n^{-c}. \] In other words, for a random oracle $\eta$, the function $\varphi=\pi\circ \eta$ is very close to being a random oracle (from $G$ to $G$) itself. Since $c>1/2$, we obtain, as in~\cite{pollard}, an $O(\sqrt{n})$ bound on the expectation of the least positive integer $i+j$ for which $\varphi^{(i+j)}(g)=\varphi^{(j)}(g)$, for any $g=\pi(w)\in G$. 
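The quasi-uniformity statement can be illustrated numerically: for $G=\mathbb Z/n\mathbb Z$, the pushforward distribution of subset sums of a random sequence $S$ can be computed exactly by dynamic programming, and its variation distance to the uniform distribution is already tiny for modest parameters. The parameters below are illustrative.

```python
# Exact distribution of subset sums of a random sequence S of density d
# in Z/nZ, and its variation distance to uniform (illustrating the
# Impagliazzo-Naor bound cited above). Parameters are illustrative.
import random
from math import log2

random.seed(0)
n, d = 127, 4
k = round(d * log2(n))                 # density d = k / log2(n)
S = [random.randrange(n) for _ in range(k)]

# counts[j] = number of subsequences of S whose sum is j (mod n)
counts = [1] + [0] * (n - 1)
for s in S:
    counts = [counts[j] + counts[(j - s) % n] for j in range(n)]

total = 2 ** k
tv = sum(abs(c / total - 1 / n) for c in counts) / 2
print(f"k = {k}, variation distance = {tv:.2e}")
```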
For $d > 2$, the probability that $\pi(s)\ne \pi(t)$ in Step~5 is $o(1)$, since $\mathcal C$ is then larger than~$G$ and collisions in the map $\varphi$ (and $\phi$) are more likely to be caused by collisions in $\pi$ than collisions in $\eta$. Having reached Step~6, we obtain a short product representation of~$z$ with probability $1/2$, since by results of~\cite{impagliazzo-naor} the value of $\pi(x)$ is independent of whether $x\in \AA$ or $x\in \mathcal B$. The expected running time is thus $O(k\sqrt{n})=O(\sqrt{n}\log n)$ group operations, and, as noted in Section~\ref{sec:pollard}, the space complexity is $O(1)$ group elements. We summarize our analysis with the following proposition. \begin{proposition}\label{prop:main} Let $S$ be a random sequence of constant density $d > 4$ and let $\eta:G\to \mathcal C$ be a random oracle. Then our Pollard-$\rho$ algorithm uses $O(\sqrt{n}\log n)$ expected group operations and storage for $O(1)$ group elements. \end{proposition} As in Section~\ref{sec:BSGS}, to speed up the evaluation of the product map $\pi$, one may partition $A$ and $B$ into subsequences $A_i$ and $B_i$ of length $m$ and precompute $\pi(\mathcal P(A_i))$ and $\pi(\mu(\mathcal P(B_i)))$. This requires storage for $O(k2^m/m)$ group elements and speeds up subsequent evaluations of $\pi$ by a factor of $m$. If we let $m=\epsilon\log_2 n$, for any $\epsilon>0$, we obtain the following corollary. \begin{corollary} Under the hypotheses of the proposition above, our Pollard-$\rho$ algorithm can be implemented to run in expected time $O(\sqrt{n})$ using $O(n^\epsilon)$ space. \end{corollary} In our analysis above, we use a random $S$ with $d > 4$ to prove that products of random elements of $\AA$ and $\mathcal B$ are quasi-uniformly distributed in $G$. If we directly assume that both $\pi_*\mathbb U_{\AA}$ and $\pi_*\mathbb U_{\mathcal B}$ are quasi-uniformly distributed, our analysis applies to all $d\ge 2$, and in practice we find this to be the case.
However, we note that this does not apply to $d<2$, for which we expect a running time of $O(n^{(4-d)/4}\log n)$, as discussed in Section~\ref{sec:comput}. \section{Applications} As a first application, let us consider the case where $G$ is the ideal class group of an order $\mathcal{O}$ in an imaginary quadratic field. We may assume \[ \mathcal{O}=\mathbb Z+\frac{D+\sqrt{D}}{2}\mathbb Z, \] where the \emph{discriminant} $D$ is a negative integer congruent to $0$ or $1$ modulo~$4$. Modulo principal ideals, the invertible ideals of $\mathcal{O}$ form a finite abelian group $\cl\mathcal{O}$ of cardinality $h$. The \emph{class number} $h$ varies with $D$, but is on average proportional to $\sqrt{|D|}$ (more precisely, $\log h \sim \frac{1}{2}\log|D|$ as $D\to -\infty$, by Siegel's theorem~\cite{siegel}). Computationally, invertible $\mathcal{O}$-ideals can be represented as binary quadratic forms, allowing group operations in $\cl\mathcal{O}$ to be computed in time $O(\log^{1+\epsilon}|D|)$, via~\cite{schonhage-fastforms}. \subsection{Prime ideals}\label{sec:Sk} Let $\ell_i$ denote the $i$\textsuperscript{th} smallest prime number for which there exists an invertible $\mathcal{O}$-ideal of norm $\ell_i$, and let $\alpha_i$ denote the unique such ideal that has nonnegative trace. For each positive integer $k$, let $S_k$ denote the sequence of (not necessarily distinct) ideal classes \[ S_k = ([\alpha_1],[\alpha_2],\ldots,[\alpha_k]). \] For algorithms that work with ideal class groups, $S_k$ is commonly used as a set of generators for $\cl\mathcal{O}$, and in practice $k$ can be made quite small, conjecturally $O(\log h)$. Proving such a claim is believed to be very difficult, but under the generalized Riemann hypothesis (GRH), Bach obtains the following result~\cite{bach-erh}. \begin{theorem}[Bach] Assume the GRH.
If $D$ is a fundamental\footnote{Meaning that either $D$ is square-free, or $D/4$ is an integer that is square-free modulo~$4$.} discriminant and $\ell_{k+1} > 6\log^2|D|$, then the set $S_k$ generates $\cl\mathcal{O}$. \end{theorem} Unfortunately, this says nothing about short product representations in $\cl\mathcal{O}$. Recently, a special case of~\cite[Corollary~1.3]{expander-grh} was considered in~\cite[Theorem~2.1]{quantum-iso} which still assumes the GRH but is more suited to our short product representation setting. Nevertheless, for our purpose here, we make the following stronger conjecture. \begin{conjecture} For every $d_0 >1$ there exist constants $c > 0$ and $D_0 < 0$ such that if $D \leq D_0$ and $S_k$ has density $d \geq d_0$ then \begin{enumerate} \item $\pi(\mathcal P(S_k))=G$, that is, $S_k$ represents $G$; \item $\left\|\pi_*\mathbb U_{\mathcal P(S_k)}-\mathbb U_G\right\|<h^{-c}$; \end{enumerate} where $G$ is the ideal class group $\cl\mathcal{O}$ and $h$ is its cardinality. \end{conjecture} In essence, these are heuristic analogs to the results of Erd\H{o}s and R\'{e}nyi, and of Impagliazzo and Naor, respectively, suggesting that the distribution of the classes~$[\alpha_i]$ resembles that of random elements uniformly drawn from $\cl\mathcal{O}$. Note that (1), although seemingly weaker, is only implied by (2) when $c>1$. Empirically, (1) is easily checked: for $d_0=2$ we have verified it using $D_0=-3$ for every imaginary quadratic order with discriminant $D\geq -10^8$, and for $10^4$ randomly chosen orders with $D$ logarithmically distributed over the interval $[-10^{16},-10^{8}]$ (see Figure~\ref{fig:hyp-rnd}). Although harder to test, (2) is more natural in our context, and practical computations support it as well. Even though we see no way to prove this conjecture, we assume its veracity as a useful heuristic. 
\begin{figure} \begin{center} \includegraphics[width=\textwidth]{hyp-rnd.pdf} \end{center} \caption{\small Dots plot the minimal $k$ such that $S_k$ satisfies conjecture~(1); gray dots for all discriminants $D\geq -10^8$ and black dots for ten thousand $D$ drawn at random according to a logarithmic distribution. The lines represent $k=d\log_2 h$ for $d=1,2$.} \label{fig:hyp-rnd} \end{figure} \subsection{Short relations}\label{sec:relations} In~\cite{hafner-mccurley}, Hafner and McCurley give a subexponential algorithm to find representatives of the form $\prod\alpha_i^{e_i}$ for arbitrary ideal classes of imaginary quadratic orders; the ideals $\alpha_i$ have subexponential norms, but the exponents $e_i$ can be as large as the class number~$h$. Asking for small exponents $e_i\in\{0,1\}$ means, in our terminology, writing elements $z\in G$ as short product representations on $S_k=([\alpha_i])$. Under the conjecture above, this can be achieved by our low-memory algorithm in $O(|D|^{1/4+\epsilon})$ expected time, using $k=O(\log h)$ ideals $\alpha_i$. We can even combine these approaches. If the target element $z$ is represented by an ideal of small norm, say $z=[\alpha_{k+1}]$, we get what we call a \emph{short relation} for~$\cl\mathcal{O}$. Conjecture (1) implies not only that the map that sends each vector $(e_1,\ldots,e_{k+1})\in\mathbb Z^{k+1}$ to the class of the ideal $\prod\alpha_i^{e_i}$ is surjective, but also that there exists a set of short relations generating its kernel lattice $\Lambda$. This gives a much better upper bound on the diameter of $\Lambda$ than was used by Hafner and McCurley, and their algorithm can be adapted to make use of this new bound and find, in subexponential time, representatives $\prod\alpha_i^{e_i}$ with ideals $\alpha_i$ of subexponential norm and exponents $e_i$ bounded by $O(\log|D|)$. See~\cite{bisson-grh} for details, or~\cite{quantum-iso} for an equivalent construction. 
\subsection{Short isogenies}\label{sec:isogenies} Now let us consider the problem of finding an isogeny between two ordinary elliptic curves $E_1$ and $E_2$ defined over a finite field $\mathbb F_q$. This problem is of particular interest to cryptography because the discrete logarithm problem can then be transported from $E_1$ to $E_2$. An isogeny between curves $E_1$ and $E_2$ exists precisely when $E_1$ and $E_2$ lie in the same \emph{isogeny class}. By a theorem of Tate, this occurs if and only if $\#E_1(\mathbb F_q)=\#E_2(\mathbb F_q)$, which can be determined in polynomial time using Schoof's algorithm~\cite{schoof-pointcounting}. The isogeny class of $E_1$ and $E_2$ can be partitioned according to the endomorphism rings of the curves it contains, each of which is isomorphic to an order $\mathcal{O}$ in an imaginary quadratic number field. Identifying isomorphic curves with their $j$-invariant, for each order $\mathcal{O}$ we define \[ \Ell\mathcal{O}=\left\{j(E) : \EndE\cong\mathcal{O}\right\}, \] where $E$ denotes an elliptic curve defined over $\mathbb F_q$. The set $\Ell\mathcal{O}$ to which a given curve belongs can be determined in subexponential time, under heuristic assumptions~\cite{bisson-sutherland}. An isogeny from $E_1$ to $E_2$ can always be decomposed into two isogenies, one that is essentially determined by $\End{E_1}$ and $\End{E_2}$ (and can be made completely explicit but may be difficult to compute), and another connecting curves that lie in the same set $\Ell\mathcal{O}$. We shall thus restrict ourselves to the problem of finding an isogeny between two elements of $\Ell\mathcal{O}$. The theory of complex multiplication states that $\Ell\mathcal{O}$ is a principal homogeneous space (a \emph{torsor}) for the class group $\cl\mathcal{O}$: each ideal $\alpha$ acts on $\Ell\mathcal{O}$ via an isogeny of degree $\operatorname{N}(\alpha)$, and this action factors through the class group. 
We may then identify each ideal class $[\alpha]$ with the image $[\alpha]j(E_i)$ of its action on $j(E_i)$. This allows us to effectively work in the group $\cl\mathcal{O}$ when computing isogenies from $E_i$. Galbraith addressed the search for an isogeny $E_1\to E_2$ using a baby-step giant-step approach in~\cite{galbraith}; a low-memory variant was later given in~\cite{galbraith-hess-smart}, which produces an exponentially long chain of low-degree isogenies. From that, a linearly long chain of isogenies of subexponential degree may be derived by smoothing the corresponding ideal in $\cl\mathcal{O}$ using variants of the method of Hafner and McCurley (for instance, those mentioned in Section~\ref{sec:relations}); alternatively, our low-memory algorithm can be used to derive a chain of low-degree isogenies with length linear in $\log|D|$ (assuming our conjecture), and we believe this is the most practical approach. However, let us describe how our method applies naturally to the torsor $\Ell\mathcal{O}$, and directly finds a short chain of low-degree isogenies from $E_1$ to $E_2$ using very little memory. \medskip Let $S_k=AB$ be such that conjecture (1) holds, where $A$ and $B$ are roughly equal in size, and define $\mathcal C=\AA\sqcup\mathcal B$ where $\AA=\mathcal P(A)$ and $\mathcal B=\mu(\mathcal P(B))$. We view each element of $\AA$ as a short chain of isogenies of small prime degree $\ell_i=\operatorname{N}(\alpha_i)$ that originates at $E_1$; similarly, we view elements of $\mathcal B$ as chains of isogenies originating at $E_2$. Now let $\pi:\mathcal C\to \Ell\mathcal{O}$ be the map that sends $x\in\AA$ (resp. $x\in\mathcal B$) to the element of $\Ell\mathcal{O}$ that is the codomain of the isogeny chain defined by $x$ and originating at $E_1$ (resp. $E_2$).
It suffices to find a collision between an element of $\AA$ and an element of $\mathcal B$ under the map $\pi$: this yields an isogeny chain from~$E_1$ and an isogeny chain from~$E_2$ that have the same codomain. Composing the first with the dual of the second gives an isogeny from $E_1$ to $E_2$. The iteration function $\phi$ on $\mathcal C$ can now be defined as the composition $\eta\circ\pi$ where~$\eta$ is a map from $\Ell\mathcal{O}$ to $\mathcal C$ that behaves like a random oracle. Using this formalism, our Pollard-$\rho$ algorithm can be applied directly, and under the conjecture it finds an isogeny in time $O(h^{1/2+\epsilon})$. In terms of space, it only needs to store $O(1)$ elements of $\cl\mathcal{O}$ and $\Ell\mathcal{O}$, which is $O(\log q)$ bits. However, in order to compute isogenies, modular polynomials $\Phi_\ell(X,Y)$ might be used, each of which requires $O(\ell^3\log\ell)$ bits. If we heuristically assume that $\ell_k = O(k\log k) = O(\log h\log\log h)$, the overall space complexity is then bounded by $O(\log^{3+\epsilon} h) = O(\log^{3+\epsilon} q)$ bits, which is polynomial in $\log q$. This can be improved to $O(\log^{2+\epsilon} q)$ bits by using the algorithm of~\cite{sutherland-point-counting} to directly compute $\Phi_\ell(j(E),Y)$ in a space-efficient manner. \section{Computations}\label{sec:comput} To test our generic low-memory algorithm for finding short product representations in a practical setting, we implemented black-boxes for three types of finite groups: \begin{enumerate} \item $G=E(\mathbb F_p)$, the elliptic curve $E:y^2=x^3+x+1$ over a finite field $\mathbb F_p$. \item $G=\cl\mathcal{O}$, where $\mathcal{O}$ is an order in an imaginary quadratic field.\footnote{We identify $\mathcal{O}$ by its discriminant $D$ and may write $\cl{D}$ instead of $\cl\mathcal{O}$.} \item $G=\gltwo{\mathbb F_p}$, the group of invertible $2\times 2$ matrices over $\mathbb F_p$. 
\end{enumerate} To simplify the implementation, we restricted to cases where $\mathbb F_p$ is a prime field. The groups $E(\mathbb F_p)$ are abelian groups, either cyclic or the product of two cyclic groups. The groups $\cl\mathcal{O}$ are also abelian, but may be highly non-cyclic (we specifically chose some examples with large $2$-rank), while the groups $\gltwo{\mathbb F_p}$ are non-abelian. For the groups $E(\mathbb F_p)$, we used the sequence of points $S=(P_1,\ldots,P_k)$ with $P_i=(x_i,y_i)$, where $x_i$ is the $i$\textsuperscript{th} smallest positive integer for which $x_i^3+x_i+1$ is a quadratic residue $y_i^2$ modulo $p$ with $y_i \le (p-1)/2$; our target $z$ was the point $P_{k+1}$. For the groups $\cl\mathcal{O}$, we used the sequence $S_k$ defined in Section~\ref{sec:Sk} with $z=[\alpha_{k+1}]$. For the groups $\gltwo{\mathbb F_p}$, we simply chose a sequence $S$ of length $k$ and a target element $z$ at random. Table~\ref{table:comput} lists performance data obtained by applying our Pollard-$\rho$ algorithm to various groups~$G$ and sequences $S$ of densities $d=k/\log_2 n$ ranging from just under~$2$ to slightly more than~$4$. Each row compares expected values with actual results that are averages over at least $10^3$ runs. The parameter $c$ counts the number of collisions $\phi^{(i+j)}(w)=\phi^{(j)}(w)$ that were needed for a run of the algorithm to obtain a short product representation. Typically $c$ is greater than $1$ because not every collision yields a short product representation. The parameter~$\rho_\text{tot}$ is the sum of $\rho=i+j$ over the $c$ collisions required, and represents a lower bound on the number of times the map $\phi$ was evaluated. With efficient collision detection, the actual number is very close to~$\rho_\text{tot}$ (using the method of distinguished points we were able to stay within $1\%$). 
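For concreteness, the construction of the point sequence $S$ used in the elliptic-curve tests can be sketched as follows, using a small illustrative prime $p\equiv 3\pmod 4$ (so that square roots modulo $p$ are simply $a^{(p+1)/4}$); the primes in the actual experiments are of course much larger.

```python
# Build S = (P_1, ..., P_k) on E: y^2 = x^3 + x + 1 over F_p as described:
# x_i is the i-th smallest positive integer with x^3 + x + 1 a square mod p,
# and y_i the square root lying in [0, (p-1)/2]. The prime p = 127 is an
# illustrative stand-in for the 20- to 40-bit primes of the experiments.
p, k = 127, 8                          # p = 3 (mod 4)

def sqrt_mod(a, p):
    """Square root of a mod p for p = 3 (mod 4), or None if a is a non-residue."""
    r = pow(a, (p + 1) // 4, p)
    return r if r * r % p == a % p else None

S, x = [], 0
while len(S) < k:
    x += 1
    y = sqrt_mod((x**3 + x + 1) % p, p)
    if y is not None:
        S.append((x, min(y, p - y)))   # pick the root y <= (p-1)/2
print(S)
```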
\begin{table}[p] \begin{center} \Small \begin{tabular}{@{}lrrrcrrcrr@{}} &&&&&\multicolumn{2}{c}{expected}&&\multicolumn{2}{c}{observed}\\ \cline{6-7}\cline{9-10} $G$&$\log_2 n$& $k$&$d$&&$c$&\hspace{24pt}$\rho_\text{tot}$&&$c$&\hspace{24pt}$\rho_\text{tot}$\\[3pt] \hline\\[-5pt] $E/\mathbb F_{2^{20}+7}$ & 20.00& 40& 2.00&& 3.00& 3144&& 3.00& 3162\\ && 60& 3.00&& 2.00& 2568&& 2.01& 2581\\ && 80& 4.00&& 2.00& 2567&& 2.01& 2565\\[1pt] $E/\mathbb F_{2^{24}+43}$ & 24.00& 48& 2.00&& 3.00& 12577&& 3.02& 12790\\ && 72& 3.00&& 2.00& 10269&& 2.03& 10381\\ && 96& 4.00&& 2.00& 10268&& 2.00& 10257\\[1pt] $E/\mathbb F_{2^{28}+3}$ & 28.00& 56& 2.00&& 3.00& 50300&& 2.95& 49371\\ && 84& 3.00&& 2.00& 41070&& 2.02& 41837\\ &&112& 4.00&& 2.00& 41069&& 1.98& 40508\\[1pt] $E/\mathbb F_{2^{32}+15}$ & 32.00& 64& 2.00&& 3.00& 201196&& 3.06& 205228\\ && 96& 3.00&& 2.00& 164276&& 1.96& 160626\\ &&128& 4.00&& 2.00& 164276&& 2.04& 169595\\[1pt] $E/\mathbb F_{2^{36}+31}$ & 36.00& 72& 2.00&& 3.00& 804776&& 2.95& 796781\\ &&108& 3.00&& 2.00& 657097&& 2.00& 655846\\ &&144& 4.00&& 2.00& 657097&& 1.98& 657097\\[1pt] $E/\mathbb F_{2^{40}+15}$ & 40.00& 80& 2.00&& 3.00& 3219106&& 2.90& 3120102\\ &&120& 3.00&& 2.00& 2628390&& 1.97& 2604591\\ &&160& 4.00&& 2.00& 2628390&& 2.06& 2682827\\[3pt] $\cl{1-2^{40}}$ & 19.07& 40& 2.10&& 2.52& 2088&& 2.44& 2082\\ && 60& 3.15&& 2.00& 1859&& 2.02& 1845\\ && 80& 4.20&& 2.00& 1858&& 2.01& 1863\\[1pt] $\cl{1-2^{48}}$ & 23.66& 48& 2.03&& 2.79& 10800&& 2.75& 10662\\ && 72& 3.04&& 2.00& 9140&& 1.97& 8938\\ && 96& 4.06&& 2.00& 9140&& 1.99& 9079\\[1pt] $\cl{1-2^{56}}$ & 27.54& 56& 2.03&& 2.73& 40976&& 2.69& 40512\\ && 84& 3.05&& 2.00& 35076&& 2.06& 36756\\ &&112& 4.07&& 2.00& 35076&& 1.98& 35342\\[1pt] $\cl{1-2^{64}}$ & 30.91& 64& 2.07&& 2.47& 125233&& 2.59& 131651\\ && 96& 3.11&& 2.00& 112671&& 1.98& 111706\\ &&128& 4.14&& 2.00& 112671&& 1.99& 111187\\[1pt] $\cl{1-2^{72}}$ & 35.38& 72& 2.04&& 2.65& 609616&& 2.60& 598222\\ &&108& 3.05&& 2.00& 529634&& 2.00& 534639\\ &&144& 
4.07&& 2.00& 529634&& 2.00& 532560\\[1pt] $\cl{1-2^{80}}$ & 39.59& 80& 2.02&& 2.76& 2680464&& 2.80& 2793750\\ &&120& 3.03&& 2.00& 2283831&& 2.01& 2318165\\ &&160& 4.04&& 2.00& 2283831&& 2.04& 2364724\\[3pt] $\gltwo{\mathbb F_{37}}$ & 20.80& 42& 2.02&& 2.87& 4053&& 2.84& 4063\\ && 62& 2.98&& 2.00& 3384&& 1.99& 3358\\ && 84& 4.04&& 2.00& 3384&& 1.97& 3388\\[1pt] $\gltwo{\mathbb F_{67}}$ & 24.24& 48& 1.98&& 3.18& 14087&& 3.08& 13804\\ && 72& 2.97&& 2.00& 11168&& 2.10& 11590\\ && 96& 3.96&& 2.00& 11167&& 2.01& 11167\\[1pt] $\gltwo{\mathbb F_{131}}$ & 28.12& 56& 1.99&& 3.09& 53251&& 3.03& 52070\\ && 84& 2.99&& 2.00& 42851&& 1.94& 42019\\ &&112& 3.98&& 2.00& 42851&& 1.98& 42146\\[1pt] $\gltwo{\mathbb F_{257}}$ & 32.02& 64& 2.00&& 3.01& 202769&& 3.03& 204827\\ && 96& 3.00&& 2.00& 165237&& 2.02& 165742\\ &&128& 4.00&& 2.00& 165237&& 2.00& 165619\\[1pt] $\gltwo{\mathbb F_{511}}$ & 36.10& 72& 1.99&& 3.07& 842191&& 3.18& 886141\\ &&108& 2.99&& 2.00& 679748&& 1.97& 668416\\ &&144& 3.99&& 2.00& 679747&& 2.04& 703877\\[1pt] $\gltwo{\mathbb F_{1031}}$ & 40.04& 80& 2.00&& 3.03& 3276128&& 2.99& 3243562\\ &&120& 3.00&& 2.00& 2663155&& 2.02& 2677122\\ &&160& 4.00&& 2.00& 2663154&& 2.08& 2708512\\[3pt] \end{tabular} \end{center} \caption{Comparison of expected vs. observed values on various groups.} \label{table:comput} \end{table} The expected values of $c$ and $\rho_\text{tot}$ listed in Table~\ref{table:comput} were computed under the heuristic assumption that $\eta:G\to\mathcal C$ and $\pi:\mathcal C\to G$ are both random functions. This implies that while iterating $\phi$ we are effectively performing simultaneous independent random walks on $G$ and $\mathcal C$. Let $X$ and $Y$ be independent random variables for the number of steps these walks take before reaching a collision, respectively. The probability that $\pi(s)=\pi(t)$ in Step~5 is $P(X \le Y)$, and the algorithm then proceeds to find a short product representation with probability $1/2$. 
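The probability $P(\pi(s)=\pi(t))$ in Step~5 can be checked against this model with a small Monte Carlo experiment. The sketch below is our illustration only (it is not part of the computations reported in this section): it replaces each random-function walk by i.i.d. uniform draws until the first repeated value, which has the same limiting collision-length distribution, and compares the observed frequency of $X\le Y$ with the heuristic value $1/(1+r)$, where $r=\#G/\#\mathcal C$.

```python
import random

def steps_to_repeat(n, rng):
    """Number of i.i.d. uniform draws from a set of size n until the
    first repeated value -- a stand-in for the number of steps a walk
    given by iterating a random function takes before its first
    collision (both have the same Rayleigh limiting distribution)."""
    seen = set()
    steps = 0
    while True:
        steps += 1
        x = rng.randrange(n)
        if x in seen:
            return steps
        seen.add(x)

def estimate_collision_probability(n_G, n_C, trials, seed=1):
    """Estimate P(X <= Y) for independent collision processes on sets
    of size n_G (modelling G) and n_C (modelling the auxiliary set C)."""
    rng = random.Random(seed)
    hits = sum(
        steps_to_repeat(n_G, rng) <= steps_to_repeat(n_C, rng)
        for _ in range(trials)
    )
    return hits / trials

# Toy sizes chosen so that r = #G/#C = 0.25; the heuristic then gives
# P(X <= Y) = 1/(1+r) = 0.8 and E[c] = 2(1+r) = 2.5.
n_G, n_C = 1000, 4000
r = n_G / n_C
p = estimate_collision_probability(n_G, n_C, 10000)
print(f"simulated P(X<=Y) = {p:.3f}   heuristic 1/(1+r) = {1/(1+r):.3f}")
print(f"implied  E[c]     = {2/p:.2f}   heuristic 2(1+r)  = {2*(1+r):.2f}")
```

With $10^4$ trials the simulated probability agrees with the heuristic to within sampling error, matching the close agreement between the expected and observed columns of Table~\ref{table:comput}.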
Using the probability density $u\exp(-u^2/2)du$ of $X/\sqrt{\#G}$ and $Y/\sqrt{\#\mathcal C}$, we find \[ \Exp{c} = 2/{P(X\le Y)} = 2(1+r), \] where $r=\#G/\#\mathcal C$. One may also compute \[ \Exp{\rho_\text{tot}} = \Exp{c}\Exp{\min(X,Y)} = \sqrt{2\pi n(1+r)}. \] For $d > 2$, we have $r\approx 0$ for large $n$, so that $\Exp{c}\approx 2$ and $\Exp{\rho_\text{tot}}\approx \sqrt{2\pi n}$. For $d=2$, we have $\Exp{c}=3$ and $\Exp{\rho_\text{tot}}=\sqrt{3\pi n}$ (when $k$ is even). For $d < 2$, the value of $\Exp{c}$ increases with $n$ and we have $\Exp{\rho_\text{tot}}=O(n^{(4-d)/4})$. \medskip In addition to the tests summarized in Table~\ref{table:comput}, we applied our low memory algorithm to some larger problems that would be quite difficult to address with the baby-step giant-step method. Our first large test used $G=E(\mathbb F_p)$ with $p=2^{80}+13$, which is a cyclic group of order $n=p+1+1475321552477$, and the sequence $S=(P_1,\ldots,P_{k})$ with points $P_i$ defined as above with $k=200$, which gives $d\approx 2.5$. Our target element was $z=P_{201}$ with $x$-coordinate $391$. The computation was run in parallel on $32$ cores (3.0~GHz AMD Phenom~II), using the distinguished points method.\footnote{In this parallel setting we may have collisions between two distinct walks (a $\lambda$-collision), or a single walk may collide with itself (a $\rho$-collision). Both types are useful.} The second collision yielded a short product representation after evaluating the map~$\phi$ a total of $1480862431620 \approx 1.35\sqrt{n}$ times. After precomputing $655360$ partial products (as discussed in Section~\ref{sec:analysis}), each evaluation of $\phi$ used $5$ group operations, compared to an average of $50$ without precomputation, and this required just $10$ megabytes of memory. The entire computation used approximately $140$~days of CPU time, and the elapsed time was about $4$~days. 
We obtained a short product representation for $z$ as the sum of $67$ points $P_i$ with $x$-coordinates less than $391$. In hexadecimal notation, the bit-string that identifies the corresponding subsequence of $S$ is: \begin{center} \texttt{542ab7d1f505bdaccdbeb6c2e92180d5f38a20493d60f031c1} \end{center} \medskip Our second large test used the group $G=\cl{1-2^{160}}$, which is isomorphic to \[ (\mathbb Z/2\mathbb Z)^{8} \times \mathbb Z/4\mathbb Z \times \mathbb Z/8\mathbb Z \times \mathbb Z/80894875660895214584\mathbb Z, \] see~\cite[Table B.4]{sutherland:thesis}. We used the sequence $S_k$ with $k=200$, and chose the target $z=[\alpha_{201}]$ with $\operatorname{N}(\alpha_{201})=2671$. We ran the computation in parallel on $48$ cores, and needed $3$ collisions to obtain a short product representation, which involved a total of $2856153808020\approx 3.51\sqrt{n}$ evaluations of $\phi$. As in the first test, we precomputed $655360$ partial products so that each evaluation of $\phi$ used $5$ group operations. Approximately $900$ days of CPU time were used (the group operation in $\cl{D}$ is slower than in the group $E(\mathbb F_p)$ used in our first example). We obtained a representative for the ideal class $z$ as the product of $106$ ideals with prime norms less than $2671$. The bit-string that encodes the corresponding subsequence of $S_k$ is: \begin{center} \texttt{5cf854598d6059f607c6f17b8fb56314e87314bee7df9164cd} \end{center} \section*{Acknowledgments} The authors are indebted to Andrew Shallue for his kind help and advice in putting our result in the context of subset sum problems, and to Steven Galbraith for his useful feedback on an early draft of this paper. \bibliographystyle{plain}
\section{Introduction} \label{s:intro} The interstellar medium (ISM) is pervaded by magnetic fields with an energy density comparable to that of the other ISM components, generated by small-scale and large-scale ($\alpha-\Omega$) dynamo processes and transported with the bulk motion of the interstellar plasma (Beck \cite{beck16}). Observational evidence suggests that magnetic fields in galaxies play an important role in regulating the ISM by confining cosmic-ray electrons (Berezinskii et al.~\cite{berezinskii90}), providing vertical support to the interstellar gas (Fletcher \& Shukurov~\cite{fletcher01}), and regulating angular momentum transfer in gas clouds that eventually collapse to form stars (Zweibel \& Heiles~\cite{zweibel97}). Studies of individual nearby galaxies provide data on the topology and strength of magnetic fields in various galactic environments. Revealing statistical relations between the magnetic field strength and the various observational parameters describing galaxies is a helpful tool for recognising, modelling, and understanding the impact of the physical processes involved in the formation and evolution of galactic magnetic fields. Such studies face objective obstacles, such as the difficulty of observing optically faint and radio-weak dwarf galaxies and distant protogalaxies. For example, a systematic study of low-mass galaxies in the Local Group revealed surprisingly little information on magnetic fields in these objects, as only three out of 12 dwarfs were detected in the radio domain (Chy\.zy et al.~\cite{chyzy11}). The results indicated that magnetic fields in dwarf galaxies are rather weak, with a mean total field strength of only $4\,\mu$G. Based on the radio-detected low-mass galaxies (from the Local Group as well as from outside it), a power-law relation between the magnetic field strength and the surface density of the star formation rate (SSFR) with an index of $0.30\pm 0.04$ was determined.
Some other relationships between magnetic fields and galaxy parameters were also found. It is not known to what extent the relationships obtained for low-mass galaxies remain valid for massive galaxies and starbursts. Recently, Tabatabaei et al. (\cite{taba16}) showed that the large-scale (ordered) magnetic field in a sample of 26 galaxies is proportional to their rotational speed. The enhanced field in this case could be due to gas compression and shearing flows in fast-rotating systems. In another work, Van Eck et al.~(\cite{vaneck15}) used 20 well-observed nearby galaxies to derive a statistically significant relation of the total magnetic field strength with the SSFR (with a power-law index $n=0.19\pm 0.03$) as well as with the density of molecular gas ($n=0.21\pm 0.04$). The magnetic pitch angle appeared to be associated with the total gas density, the star formation rate, and the strength of the axisymmetric component of the large-scale magnetic field. A steeper relation between the total field and the SSFR was found by Heesen et al. (\cite{heesen14}) for 17 galaxies, including two dwarfs. Similar studies for a much larger sample of galaxies of different types are much needed. In order to investigate the importance of various correlations between observed galaxy parameters, Disney et al. (\cite{disney08}) used principal component analysis (PCA) to statistically analyse a sample of 200 galaxies, showing that galaxies can be described in a much simpler way than suggested by the hierarchical structure formation theory and are actually controlled by a small number of dominant parameters. In later studies, Li \& Mao (\cite{li13}) reproduced the results of Disney et al. for a sample of 2000 SDSS galaxies and used PCA to construct parameters that differentiate galaxies better than the original observables, such as colour, stellar age, or stellar mass.
They also showed that the galaxy environment did not affect galaxy morphology to any great extent, while it significantly changed galactic colours. In this paper, we explore how the statistical relationships determined for low-mass objects apply to the general population of galaxies, probing relations of the magnetic field with a number of properties describing galaxies in a sample of 55 objects. Our sample includes faint dwarf galaxies, normal spirals, and several massive starbursts, in order to cover a wide range of star formation processes and to find out possible interrelations for all the objects. We use our radio observations of low-mass objects and acquire information on the other galaxies from the available publications. The sample's size allows us to inspect magnetic fields across the Hubble sequence. The radio-faintest dwarf galaxies, for which stacking experiments with their radio maps were performed, are also analysed. The investigation involves a statistical analysis of the galaxy sample based on two methods, PCA and regression modelling. \section{Galaxy sample} \label{s:sample} \subsection{Low-mass objects} \label{s:lowmasssample} In our low-mass sample, we included galaxies from our radio observations made with the 100-m Effelsberg telescope: three dwarf galaxies from Chy\.zy et al. (\cite{chyzy11}) observed at 2.64\,GHz (NGC\,6822, IC\,10, IC\,1613), five low-mass, Magellanic-type galaxies observed at 4.85\,GHz and/or 8.35\,GHz (NGC\,3239, NGC\,4027, NGC\,4618, NGC\,5204, UGC\,11861), peculiar `pure disk' objects (NGC\,2976 and NGC\,4605) (Jurusik et al. \cite{jurusik14}), as well as three galaxies (NGC\,4236, NGC\,4656, IC\,2574) from Chy\.zy et al. (\cite{chyzy07}). For all these galaxies we calculated the total magnetic field strength $B$ assuming energy equipartition between magnetic fields and cosmic rays (Beck \& Krause \cite{beck05}).
The separation of thermal emission from the total radio flux was achieved with the help of H$\alpha$ fluxes. In the case of the Magellanic-type and peculiar objects, we corrected the H$\alpha$ fluxes for dust attenuation using information on the infrared (dust) emission (see Jurusik et al. \cite{jurusik14}). The sizes and masses of these objects lie between those of dwarf and typical spiral galaxies. In order to have the best possible representation of radio-faint star-forming dwarf galaxies, we included UGC\,5456 in the sample and analysed the `common' sample of dwarfs from the stacking experiment of Roychowdhury \& Chengalur (\cite{roy12}), while performing a similar stacking experiment for the dwarf galaxies of the Local Group that went undetected in the work of Chy\.zy et al. (\cite{chyzy11}). Using NVSS (1.4\,GHz) maps for these nine dwarfs from the Local Group (Aquarius, GR 8, WLM, LGS 3, SagDIG, Sextans A, Sextans B, Leo A, and Pegasus), we were able to estimate only an upper limit of $B=5\pm 1\,\mu$G. Presumably, the number of our stacked objects was too small for the signal to be detected. Our Effelsberg observations (Chy\.zy et al. \cite{chyzy11}) at 2.64\,GHz provided a better estimate of this upper limit, $B < 3.8\pm0.6\,\mu$G. We also added five galaxies from the literature: LMC, SMC, NGC\,4449, NGC\,1569, NGC\,4214. The sources of the data for these objects are given in Table~\ref{t:basic}. \subsection{Massive galaxies} \label{s:massivesample} Our sample contained well-studied normal spiral galaxies for which we were able to find appropriate data in the literature. To work with the most uniform dataset possible, we used radio continuum data from the WSRT survey of SINGS galaxies (Braun et al. \cite{braun07}) to estimate the equipartition magnetic field strength for 14 objects from the nonthermal emission, taking the thermal fractions from Heesen et al. (\cite{heesen14}) and the galaxy inclination values from HyperLeda or NED.
For another 14 galaxies we used estimates of $B$ (for the entire galaxies) from the compilation of Van Eck (\cite{vaneck15}). We also added seven well-known spirals from other studies (Table~\ref{t:basic}). Our sample involved massive starbursts (NGC\,253, M\,82) as well as luminous infrared radio galaxies (LIRGs: NGC\,3256 and Arp 220). \subsection{Construction of extensive and intensive parameters} \label{s:construction} For each galaxy in the sample, we searched the literature for information on its global properties: the morphological (Hubble) type $T$, inclination $i$, distance $D$, the optical angular radius, which was transformed to the linear one $R$, rotational velocity $V$, global SFR, the total \ion{H}{i} mass $M_\mathrm{HI}$, the total mass of molecular gas $M_\mathrm{H2}$, and the near-infrared luminosity $LK$ in the $K_s$ band, which is related to the total galactic stellar mass. We also calculated `tentative' total masses of galaxies, estimating them from the formula $M\propto R\,V^2$. The parameters SFR, $LK$, $M_{\mathrm{HI}}$, $M_{\mathrm{H2}}$, $M$, and $R$ are all extensive properties of galaxies and depend on the object size: splitting a galaxy in half would reduce the values of these parameters to half of the original ones. The mean magnetic field strength, calculated as an average value over the galaxy, is directly related to the volume density of magnetic energy and is derived from the radio emission, taking into account the synchrotron pathlength. It is an intensive property, independent of galaxy size. Therefore, we constructed other parameters describing the intensive properties of galaxies, free from the influence of their sizes and masses (see e.g. Lara-L{\'o}pez et al. \cite{lara13}). Determining which global or intensive parameters are mainly related to the magnetic field, and which are less important, is one of the purposes of our analysis.
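For concreteness, a tentative mass of the form $M\propto R\,V^2$ is the standard dynamical-mass estimate $M\approx V^2R/G$. The sketch below is our illustration of that scaling (with $G$ expressed in units convenient for galaxies); the absolute normalisation is our choice, not necessarily the one adopted in the paper:

```python
# Rough dynamical-mass estimate M ~ V^2 R / G, illustrating the
# "tentative" total mass M ∝ R V^2 used in the text.
# The normalisation below (G in kpc (km/s)^2 / Msun) is our assumption.
G_GAL = 4.301e-6  # gravitational constant in kpc (km/s)^2 / Msun

def dynamical_mass(v_rot_kms, radius_kpc):
    """Total mass in solar masses from rotation speed and radius."""
    return v_rot_kms**2 * radius_kpc / G_GAL

# A Milky-Way-like disc (V ~ 220 km/s, R ~ 20 kpc) gives a few 10^11 Msun.
print(f"{dynamical_mass(220.0, 20.0):.2e} Msun")
```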
We constructed the following set of intensive parameters: the (mean) surface density of the star formation rate $\mathrm{SSFR=SFR}/A$, the surface density of neutral hydrogen gas $SM_\mathrm{HI}=M_\mathrm{HI}/A$, the surface density of H$_2$ gas $SM_\mathrm{H2}=M_\mathrm{H2}/A$, and the near-infrared surface brightness $SLK=LK/A$, where $A$ is the observed surface area of the galaxy. Moreover, we calculated the star formation efficiency with respect to the neutral gas, $\mathrm{SFE=SFR}/M_\mathrm{HI}$, and the analogous efficiency for the H$_2$ gas, $\mathrm{SFE_{H2}=SFR}/M_\mathrm{H2}$. The intensive parameters involving the magnetic field strength and the surface densities are derived for the entire galaxies using their optical or radio extents. We note that in some studies (e.g. Thompson et al. \cite{thompson06}) the magnetic field strength and gas densities are calculated for restricted regions of strong star formation, which obviously yields different estimates (e.g. in the extreme cases of M\,82 and Arp\,220, the values of $B$ obtained by us are an order of magnitude lower than those in Thompson et al. \cite{thompson06} calculated for compact starbursts). The main properties of all 55 galaxies are summarised in Table \ref{t:basic}. \section{Results} \label{s:results} \begin{figure} \centering \includegraphics[width=0.49\textwidth]{f1.pdf} \caption{Biplot obtained from PCA of all galaxy parameters, showing the positions of individual galaxies and the directions of the original variables (arrows) as projected into the plane of the first two PCs. The horizontal axis corresponds to the direction of greatest variance in the dataset. The positions of galaxies were \emph{scaled down} by the standard deviation of the corresponding PCs multiplied by the square root of the number of observations (bottom and left-hand axes), while the vectors were \emph{scaled up} by the same values (top and right-hand axes).
} \label{f:biplot} \end{figure} The investigation of our galaxy sample is performed by applying two statistical methods: PCA and two-dimensional regression. \subsection{Principal component analysis} \label{s:pca} \begin{table*} \caption{Eigenvalues, variances explained by the principal components, and eigenvectors from PCA of global parameters and $B$ (see Sect.~\ref{s:pca}). The principal components are denoted as PC1 to PC8. Eigenvector components with small ($<$0.1) values indicating little contributions to the principal components have been left blank in the table. } \begin{center} \begin{tabular}{lcccccccc} \hline \hline & PC1 & PC2 & PC3 & PC4 & PC5 & PC6 & PC7 & PC8\\ \hline Eigenvalues & 5.96 & 1.15 & 0.39 & 0.23 & 0.15 & 0.08 & 0.04 & 0.00\\ \hline Var. explained & 0.745 & 0.144 & 0.048 & 0.029 & 0.018 & 0.010 & 0.005 & 0.000\\ \hline $B$ & -0.215& 0.767& & 0.376& & -0.303& -0.356& \\ SFR & -0.370& 0.324& -0.263& -0.117& -0.104& & 0.815& \\ $M_{\mathrm{HI}}$& -0.342& -0.305& -0.537& 0.427& 0.480& 0.283& & \\ $M_{\mathrm{H2}}$& -0.368& 0.168& & -0.768& 0.354& 0.160& -0.309& \\ $LK$ & -0.388& & 0.187& 0.137& -0.580& 0.647& -0.196& \\ $R$ & -0.360& -0.336& -0.328& -0.129& -0.446& -0.534& -0.203& -0.329 \\ $V$ & -0.365& -0.134& 0.644& 0.183& 0.308& -0.110& 0.163& -0.517 \\ $M$ & -0.389& -0.228& 0.285& & & -0.294& & 0.790 \\ \hline \end{tabular} \end{center} \label{t:pca1} \end{table*} \begin{table*} \caption{Eigenvalues, variances explained by the principal components, and eigenvectors of PCA of intensive parameters and $B$ (see Sect.~\ref{s:pca}). The principal components are denoted as PC1 to PC7. Eigenvector components with small ($<$0.1) values indicating little contributions to the principal components have been left blank in the table. } \begin{center} \begin{tabular}{lccccccc} \hline \hline & PC1 & PC2 & PC3 & PC4 & PC5 & PC6 & PC7 \\ \hline Eigenvalues & 3.83 & 1.52 & 0.96 & 0.47 & 0.20 & 0.02 & 0.00\\ \hline Var. 
explained &0.547 & 0.217 & 0.136 & 0.068 & 0.029 & 0.004 & 0.000\\ \hline $B$ & -0.457 &-0.132 & 0.123 & 0.153 & 0.854 & & \\ SSFR & -0.480 &-0.192 & & 0.251 &-0.253 &-0.775 & \\ SFE & -0.471 & & 0.320 & 0.175 &-0.354 & 0.414 & 0.585 \\ SFE$_{\mathrm{H2}}$ & &-0.723 & 0.455 & &-0.201& 0.208& -0.432 \\ $SM_{\mathrm{HI}}$ & &-0.568 &-0.724 & 0.102 & & 0.226 & 0.301 \\ $SM_{\mathrm{H2}}$ & -0.428 & 0.302 &-0.369 & 0.206 &-0.198 & 0.357 &-0.617 \\ $SLK$ & -0.393 & & &-0.911 & & & \\ \hline \end{tabular} \end{center} \label{t:pca2} \end{table*} PCA is an exploratory technique useful for finding patterns or structure in a multivariate dataset. This method combines variables (parameters) that redundantly measure the same property, and reduces the importance of variables that contribute little information to the data. It is also useful as a more general statistical tool for describing and understanding the data structure. PCA models the covariance or correlation matrix of the data to find the relationships that best account for the data variance. As a result, it produces a number of new, statistically independent variables, called the principal components (PCs), which are linear combinations of the original variables. The problem of determining new variables that maximize information (data variance) is equivalent to finding the eigenvectors and eigenvalues of the data covariance (or correlation) matrix. The i-th PC is the line in the data parameter space that follows the eigenvector associated with the i-th largest eigenvalue, which measures the variance in the direction of that PC. Therefore the first PC is aligned with the direction of maximum variance in the entire dataset, the second one shows the highest variability among all directions orthogonal to the first PC, and so forth. The number of derived PCs equals the number of original parameters considered in the analysis, and the original observations can be expressed in the new coordinates (by projecting onto the PCs).
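The procedure just described can be sketched in a few lines. The example below uses mock correlated data (not the actual galaxy sample): the variables are standardised and the correlation matrix is diagonalised, so the eigenvalues give the variance along each PC and their sum equals the number of variables:

```python
import numpy as np

rng = np.random.default_rng(0)

# Mock data: five mutually correlated "log-parameters" for 55 objects,
# standing in for log SFR, log M_HI, etc.  Purely illustrative.
n_obj, n_var = 55, 5
driver = rng.normal(size=n_obj)                     # one common factor
X = driver[:, None] + 0.5 * rng.normal(size=(n_obj, n_var))

# Standardise each column, then diagonalise the correlation matrix:
# eigenvalues measure the variance along each PC, eigenvectors give
# the contributions of the original variables to each PC.
Z = (X - X.mean(axis=0)) / X.std(axis=0)
R = (Z.T @ Z) / n_obj                               # correlation matrix
eigval, eigvec = np.linalg.eigh(R)
order = np.argsort(eigval)[::-1]                    # largest first
eigval, eigvec = eigval[order], eigvec[:, order]

scores = Z @ eigvec                                 # data in PC coordinates
explained = eigval / eigval.sum()
print("fraction of variance explained:", np.round(explained, 3))
```

Because the mock variables share one common factor, the first PC dominates, which is qualitatively the behaviour found for the global galaxy parameters below.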
We performed such PCA based on the correlation matrix of logarithmised parameters describing our sample of galaxies. In our first PCA approach, we analysed only the global parameters of galaxies (SFR, $M_{\mathrm{HI}}$, $M_{\mathrm{H2}}$, $LK$, $R$, $V$, and $M$). It turned out that all the parameters are correlated, allowing the entire sample to be described by just one principal component (PC1), which accounts for 82\% of the variance in the galaxy parameters. All the global parameters contribute to PC1 to roughly the same extent and with the same sign. The second and subsequent PCs have eigenvalues smaller than 1 and are considered insignificant. \begin{table*} \caption{Parameters of statistical fits.} \begin{center} \begin{tabular}{lcccc} \hline \hline Relation & n[M(Y/X)] & n(Bisector) & $\rho$/P-value$^a$ & N \\ \hline $B\propto \mathrm{SFR}^n$ & $0.21\pm 0.02$ & $0.28\pm 0.02$ & 0.68/0.00 & 55\\ $B\propto (M_\mathrm{HI})^n$ & $0.08\pm 0.05$ & $0.63\pm 0.17$ & 0.18/0.18 & 55\\ $B\propto (M_\mathrm{H2})^n$ & $0.15\pm 0.03$ & $0.27\pm 0.03$ & 0.54/0.00 & 48\\ $B\propto (M_\mathrm{gas})^n$ & $0.16\pm 0.04$ & $0.43\pm 0.08$ & 0.38/0.01 & 48\\ $B\propto {LK}^n$ & $0.13\pm 0.03$ & $0.26\pm 0.04$ & 0.49/0.00 & 55\\ $B\propto R^n$ & $0.11\pm 0.08$ & $0.83\pm 0.11$ & 0.16/0.24 & 55\\ $B\propto V^n$ & $0.31\pm 0.09$ & $0.82\pm 0.07$ & 0.35/0.01 & 55\\ $B\propto M^n\propto (V^2R)^n$ & $0.09\pm 0.03$ & $0.37\pm 0.08$ & 0.30/0.02 & 55\\ \hline $B\propto \mathrm{SSFR}^n$ & $0.33\pm 0.03$ & $0.41\pm 0.03$ & 0.78/0.00 & 55\\ $B\propto (\mathrm{SSFR_{cor}})^n$ & $0.31\pm 0.03$ & $0.39\pm 0.03$ & 0.80/0.00 & 55 \\ $B\propto (SM_\mathrm{HI})^n$ & $-0.01 \pm 0.09 $ & $-0.96\pm 0.09$ & -0.03/0.85 & 55\\ $B\propto (SM_\mathrm{H2})^n$ & $0.23\pm 0.04$ & $0.37\pm 0.04$ & 0.65/0.00 & 48\\ $B\propto (SM_\mathrm{gas})^n$ & $0.41\pm 0.10$ & $0.82\pm 0.09$ & 0.52/0.00 & 48\\ $B\propto {SLK}^n$ & $0.21\pm 0.04$ & $0.39\pm 0.05$ & 0.60/0.00 & 55\\ $B\propto \mathrm{SFE}^n$ & $0.30\pm 0.03$ &
$0.37\pm 0.03$ & 0.75/0.00 & 55\\ $B\propto \left({\mathrm{SFE_{H2}}}\right)^n$ & $0.06\pm 0.07$ & $0.77\pm 0.12$ & 0.07/0.61 & 48\\ \hline $\mathrm{SFR}\propto (M_\mathrm{HI})^n$ & $0.93\pm 0.13$ & $1.37\pm 0.13$ & 0.65/0.00 & 55\\ $\mathrm{SFR}\propto (M_\mathrm{H2})^n$ & $0.77\pm 0.05$ & $0.86\pm 0.07$ & 0.88/0.00 & 48\\ $\mathrm{SFR}\propto M^n$ & $0.69\pm 0.07$ & $0.94\pm 0.09$ & 0.71/0.00 & 55\\ $\mathrm{SFR}\propto {LK}^n$ & $0.72\pm 0.05$ & $0.85\pm 0.07$ & 0.80/0.00 & 55\\ $\mathrm{SSFR}\propto (SM_\mathrm{HI})^n$ & $0.33\pm 0.23$ & $1.19\pm 0.20$ & 0.15/0.29 & 55\\ $\mathrm{SSFR}\propto (SM_\mathrm{HI})^n$ restr.$^b$ & $0.54\pm 0.21$ & $1.30\pm 0.15$ & 0.30/0.03 & 51\\ $\mathrm{SSFR}\propto (SM_\mathrm{H2})^n$ & $0.67\pm 0.08$ & $0.87\pm 0.10$ & 0.78/0.00 & 48\\ $\mathrm{SSFR}\propto (SM_\mathrm{H2})^n$ restr.$^c$ & $0.96\pm 0.20$ & $1.49\pm 0.18$ & 0.63/0.00 & 27\\ $\mathrm{SSFR}\propto (SM_\mathrm{gas})^n$ & $1.39\pm 0.20$ & $1.94\pm 0.20$ & 0.70/0.00 & 48\\ $\mathrm{SSFR}\propto {SLK}^n$ & $0.56\pm 0.08$ & $0.90\pm 0.10$ & 0.61/0.00 & 55\\ \hline \hline \end{tabular} \end{center} Notes. $^{(a)}$ -- a large P-value means that the hypothesis of no correlation cannot be rejected with high confidence; $^{(b)}$ -- restricted so as not to include massive starbursts/LIRGs; $^{(c)}$ -- restricted to $(3 < SM_\mathrm{H2} < 50)$\,$M_{\sun}\,\mathrm{pc}^{-2}$ \label{t:corel} \end{table*} Additionally, introducing $B$ among the global parameters in a subsequent PCA distributes the information on the galaxies essentially into two PC components. This is illustrated in Table~\ref{t:pca1}, where the first row gives the eigenvalues that measure the variance in the direction of the associated PCs. The sum of the eigenvalues gives the total variance in the data, which in our approach is just the number of PCs, as the original variables were standardised.
The second row gives the ratio of each eigenvalue to the total data variance, and thus the fraction of the total variance accounted for by each PC. The next part of the table shows, in the respective columns, the components of the eigenvectors associated with the individual PCs, which indicate to what extent each original variable contributes to a given PC. Examination of the values presented in Table~\ref{t:pca1} shows that PC1 contains mostly information from the global parameters, as in the previous analysis, but also involves a contribution from some (systematic) part of the magnetic field $B$, smaller than that of the global parameters. In contrast, most of the information about magnetism is independent of the other parameters and constitutes the next component, PC2. The two PCs account for 75\% and 14\% of the variability in the data, respectively, which suggests that in this description of galaxies the global parameters carry much more information than the magnetic field strength. In our third approach to PCA, we analysed the intensive parameters (SSFR, SFE, SFE$_{\mathrm{H2}}$, $SM_{\mathrm{HI}}$, $SM_{\mathrm{H2}}$, $SLK$). Here, only four (SSFR, SFE, $SM_{\mathrm{H2}}$, $SLK$) out of the six variables contribute significantly to PC1, which accounts for 51\% of the population variability. The other parameters, SFE$_{\mathrm{H2}}$ and $SM_{\mathrm{HI}}$, dominate the components PC2 and PC3, respectively. Subsequently, we added information about $B$, which passed almost completely into PC1, where it constitutes a factor comparable to the other intensive parameters (see Table~\ref{t:pca2}). The next two principal components are again dominated by SFE$_{\mathrm{H2}}$ and $SM_{\mathrm{HI}}$. The first three components combined describe 91\% of the data variance.
Contrary to the PCA performed on the global parameters, the magnetic field thus appears as important as SSFR, $SM_{\mathrm{H2}}$, and $SLK$ in accounting for the intensive properties of galaxies. In the final analysis, we took into account all the intensive parameters, including $B$, together with the global ones. From the comparison of eigenvector components, it is clear that the magnetic field strength is connected mainly to the intensive parameters, while the global parameters have only weak relationships with $B$. This is apparent in the correlation vector diagram (biplot in Fig.~\ref{f:biplot}), which shows the two-dimensional projection of each data point onto the first two PCs, together with the components of the eigenvectors (shown as arrows) representing the original variables as projected into the PC1-PC2 plane. The elements of the vectors correspond to the correlations of each variable with each PC. As the cosines of the angles between the different vectors are a measure of the correlation between the respective variables, vectors pointing in the same direction represent perfectly correlated variables, while perpendicular ones indicate a complete lack of correlation. In our plot the vector corresponding to $B$ is surrounded solely by the vectors of the intensive parameters, which suggests that they are closely related. The angles between the vectors representing the intensive parameters (including $B$) and the global ones are large, indicating only weak associations. Galaxies appear to be well grouped in the PC1-PC2 plane (Fig.~\ref{f:biplot}). In particular, the low-mass objects acquire the highest values of PC1 and are located to the right in the graph. More massive objects exhibiting the strongest star formation (LIRGs, M\,82) occupy the bottom-left part of the chart and have small values of PC1. The starbursting dwarfs NGC\,1569 and IC\,10 lie between them, while the normal spirals are on the other side of the plot.
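The geometric reading of the biplot arrows can be made precise: when the eigenvectors are scaled by the square roots of the eigenvalues, the resulting loadings reproduce the correlation matrix exactly if all PCs are kept, and the two-dimensional biplot is the rank-2 truncation of that identity. The sketch below (mock data, standard correlation-PCA conventions; not the actual galaxy sample) demonstrates this:

```python
import numpy as np

rng = np.random.default_rng(1)
# Mock data: four correlated variables for 55 "galaxies".
X = rng.normal(size=(55, 4)) @ rng.normal(size=(4, 4))
Z = (X - X.mean(axis=0)) / X.std(axis=0)
R = (Z.T @ Z) / len(Z)                       # correlation matrix

eigval, eigvec = np.linalg.eigh(R)
eigval, eigvec = eigval[::-1], eigvec[:, ::-1]   # largest first

# Correlation-scaled loadings: entry (i, j) of Lod is the correlation
# of variable i with PC j.  These are the biplot arrows.
Lod = eigvec * np.sqrt(np.maximum(eigval, 0.0))

# With all PCs kept, R = Lod Lod^T holds exactly; the 2-D biplot uses
# only the first two columns, so the cosines of the angles between
# arrows only approximate the correlations between variables.
R2 = Lod[:, :2] @ Lod[:, :2].T
print("max |R - rank-2 approximation| =", np.abs(R2 - R).max())
```

The printed residual quantifies how much information the two-dimensional biplot discards, which is small whenever the first two PCs dominate, as is the case for our sample.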
\subsection{Regressions} \label{s:regressions} \begin{figure*}[t] \centering \includegraphics[clip,width=0.33\textwidth]{SSFR-B.pdf} \includegraphics[clip,width=0.33\textwidth]{SFR-B.pdf} \includegraphics[clip,width=0.33\textwidth]{MHI-SFR.pdf} \includegraphics[clip,width=0.33\textwidth]{SMHI-B.pdf} \includegraphics[clip,width=0.33\textwidth]{SMH2-B.pdf} \includegraphics[clip,width=0.33\textwidth]{SMgas-B.pdf} \includegraphics[clip,width=0.33\textwidth]{SLKTOT-B.pdf} \includegraphics[clip,width=0.33\textwidth]{SMH2-SSFR.pdf} \includegraphics[clip,width=0.33\textwidth]{V-B.pdf} \caption{Relations between various galaxy parameters for sample galaxies of different categories: dwarfs -- rectangles, Magellanic and peculiar low-mass galaxies -- triangles, spiral galaxies -- circles, massive starbursts and LIRGs -- diamonds. The solid line represents the M-estimation of Y/X regression and the dashed line denotes the bisector fit. } \label{f:corel} \end{figure*} The influence of the extensive and intensive properties of galaxies on the magnetic field can be quantitatively assessed by regression methods and expressed in functional form. Following some earlier attempts (e.g. Chy\.zy et al.~\cite{chyzy11}, Heesen et al. \cite{heesen14}, Van Eck \cite{vaneck15}, Tabatabaei et al. \cite{taba16}, Tabatabaei et al. \cite{taba17}), we approximated the data using power-law functions, which correspond to linear fits after conversion to the logarithmic scale. To reduce the influence of possible data outliers, we used a robust M-estimation of two-dimensional (Y/X) regression by means of iteratively re-weighted least squares. This method was used instead of the ordinary (Y/X) least-squares regression, but in all cases the results obtained from the two methods were very similar. We also applied bisector regression, which treats the variables in a symmetrical way. To quantify the strength of the relationships between the parameters, Spearman's rank correlation coefficient was determined.
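A minimal rendition of this fitting machinery in log-log space is sketched below. It is our illustration, not the authors' exact pipeline: the M-estimation uses Huber weights inside an iteratively re-weighted least-squares loop, and the symmetric fit uses the standard OLS-bisector slope of Isobe et al. (1990); the mock data and the injected outliers are, of course, invented for the demonstration:

```python
import numpy as np

def spearman_rho(x, y):
    """Spearman rank correlation coefficient (assumes no ties)."""
    rx = np.argsort(np.argsort(x))
    ry = np.argsort(np.argsort(y))
    return np.corrcoef(rx, ry)[0, 1]

def huber_irls(x, y, k=1.345, iters=30):
    """Robust fit of y = a + n*x via iteratively re-weighted least
    squares with Huber weights (an M-estimation of Y/X regression)."""
    n, a = np.polyfit(x, y, 1)                  # OLS starting point
    for _ in range(iters):
        r = y - (a + n * x)
        s = 1.4826 * np.median(np.abs(r - np.median(r)))  # MAD scale
        s = s if s > 0 else 1.0
        w = np.minimum(1.0, k * s / np.maximum(np.abs(r), 1e-12))
        sw = np.sqrt(w)                         # weight the rows
        A = np.vstack([np.ones_like(x), x]).T * sw[:, None]
        a, n = np.linalg.lstsq(A, y * sw, rcond=None)[0]
    return a, n

def bisector_slope(x, y):
    """Symmetric OLS-bisector slope (Isobe et al. 1990)."""
    b1 = np.polyfit(x, y, 1)[0]        # slope of the Y-on-X fit
    b2 = 1.0 / np.polyfit(y, x, 1)[0]  # inverted slope of the X-on-Y fit
    return (b1 * b2 - 1 + np.sqrt((1 + b1**2) * (1 + b2**2))) / (b1 + b2)

# Mock log-log data: log B = 0.6 + 0.33 log SSFR + noise, 3 outliers.
rng = np.random.default_rng(3)
log_ssfr = rng.uniform(-4.0, 0.0, 55)
log_b = 0.6 + 0.33 * log_ssfr + rng.normal(0.0, 0.05, 55)
log_b_out = log_b.copy()
log_b_out[:3] += 1.5                   # contaminate with outliers
a, n = huber_irls(log_ssfr, log_b_out)
print(f"robust power-law index n = {n:.2f}")
print(f"bisector slope (clean)   = {bisector_slope(log_ssfr, log_b):.2f}")
print(f"Spearman rho (clean)     = {spearman_rho(log_ssfr, log_b):.2f}")
```

The robust fit recovers the injected index despite the contaminated points, which is the behaviour that motivates M-estimation over ordinary least squares here.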
The most significant correlation found between the magnetic field and the galaxy parameters is the $B$-SSFR relation ($\rho=0.78$, Table~\ref{t:corel} and Fig.~\ref{f:corel}a). The fitted power-law index $n=0.33\pm 0.03$ is almost identical to the one obtained for the dwarf irregular galaxies alone: $0.30\pm 0.04$ (Chy\.zy et al. \cite{chyzy11}). The magnetic field is also associated with the global SFR, but to a smaller extent ($\rho=0.68$, $n=0.21\pm 0.02$). We found that the total magnetic field strength $B$ is significantly correlated ($\rho=0.65$) with the surface density of molecular (H$_2$) gas but not correlated with that of neutral gas $SM_{\mathrm{HI}}$ ($\rho=-0.03$) (see Figs.~\ref{f:corel}d-e). As $B$ is closely associated with SSFR, the difference presumably arises from the different coupling of SSFR to the density of neutral gas ($\rho=0.15$) and of molecular gas ($\rho=0.78$) (see Fig.~\ref{f:corel}h). We checked that the $B$-$SM_{\mathrm{HI}}$ and $B$-$SM_{\mathrm{H2}}$ relationships for our sample are similar to those observed for the sample of Van Eck et al.~(\cite{vaneck15}). In our previous work, we found a distinct $B$-$SM_{\mathrm{HI}}$ relation for a group of low-mass (dwarf) galaxies (Chy\.zy et al. \cite{chyzy11}), which makes for a remarkable difference from our current study. We think that this can be related to galactic mass (or SFR), since the more massive galaxies we took into consideration, the weaker the $B$-$SM_{\mathrm{HI}}$ correlation became. When we restricted our sample so as not to include massive starbursts, just a weak correlation emerged (Table~\ref{t:corel}). The work of Bigiel et al.
(\cite{bigiel08}) further supports this view, as it shows that the $SSFR-SM_{\mathrm{H2}}$ relation for \ion{H}{I}-dominated dwarf irregular galaxies resembles the coupling found in the outer parts of spiral galaxies, while galaxies with a higher fraction of H$_2$ gas, or the inner parts of spiral galaxies, can show a slightly different relationship. Bigiel et al. obtained $n=1.0\pm 0.2$ for the $SSFR-SM_{\mathrm{H2}}$ relation for galaxies in the regime ${SM}_\mathrm{H2}=3-50 \,M_{\sun}\,\mathrm{pc}^{-2}$. When our sample was restricted to this range, we obtained a similar relation, with an index of $0.96\pm0.20$. The relation of $B$ with the total gas density ($SM_\mathrm{gas}=SM_{\mathrm{HI}}+SM_{\mathrm{H2}}$) is also statistically significant for our sample, with a power-law index $n=0.41\pm0.10$ and $\rho=0.52$. We note that within regions of M\,31, $B$ was even found to be best coupled to the volume density of the total gas rather than to any specific component (Berkhuijsen et al. \cite{berkhuijsen93}). For more H$_2$-dominated galaxies we expect the $B-SM_{\mathrm{H2}}$ relation to be the strongest one, owing to the clear, monotonic $SSFR-SM_{\mathrm{H2}}$ relationship shown by Bigiel et al. (\cite{bigiel08}). For our sample, the magnetic field does not show any significant relation with the star formation efficiency based on H$_2$ ($\rho=0.07$). We note a strong association of $B$ with the star formation efficiency based on neutral gas (SFE), with $\rho=0.75$, but this does not provide any new information: we explain it as a result of the strong $B$-SSFR correlation mentioned above and the lack of a significant relationship between SSFR and $SM_{\mathrm{HI}}$ (Table~\ref{t:corel}). The comparison of the correlation strengths of $B$ with the global $M_{\mathrm{HI}}$, $M_{\mathrm{H2}}$, and $M$ shows that $B$ is not closely connected with the total mass $M$, which is the largest source of gravitational force.
Instead, the strongest relation occurs with the molecular mass -- the part of the galactic mass most closely related to the production of stars. Therefore, we interpret the dependence of $B$ on $M$ (as well as on $V$) as an indirect one, resulting from the $B$-SFR coupling and the observed connection of the global SFR with the total molecular mass available in galaxies. The association we find between $B$ and $LK$ ($\rho=0.49$), a rough estimator of the stellar mass and stellar activity of galaxies, may support this line of reasoning. We checked whether the galaxy inclination is related to any other parameter and whether it could have affected our results. The calculated correlation coefficients between the inclination and the other parameters turned out to be statistically non-significant. We then applied a simple inclination correction in calculating the surface density of the SFR: instead of the observed surface area of the galaxy, we scaled the SFR by the area of a circle with radius equal to the galaxy major axis, obtaining a corrected surface density of the star formation rate, $\mathrm{SSFR_{cor}}$, and repeated the regression analysis for the magnetic field. The fitted power-law index $n=0.31\pm 0.03$ is very similar to the original one (Table~\ref{t:corel}), which confirms that inclination does not change the calculated relationships by more than the statistical uncertainties. \section{Discussion and conclusions} \label{s:discussion} The PCA allowed us to compare the significance of the relations of $B$ with various galaxy parameters, demonstrating that the global galaxy parameters are all mutually correlated and can be represented by a single principal component. Thus our sample reproduces the result of Disney et al. (\cite{disney08}), who used almost 200 galaxies (Sect.~\ref{s:intro}).
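The PCA behaviour just described -- mutually correlated global parameters collapsing onto a single principal component -- can be illustrated on synthetic data. The latent-variable model below is an assumption made only for illustration, not a model fitted in this work:

```python
import numpy as np

# toy model: one latent "scale" variable drives all global parameters
rng = np.random.default_rng(0)
n_gal = 200
scale = rng.normal(0.0, 1.0, n_gal)
params = np.column_stack([scale + rng.normal(0.0, 0.3, n_gal)
                          for _ in range(4)])   # e.g. log M_HI, M_H2, LK, SFR

# PCA via SVD of the standardized data matrix
Z = (params - params.mean(axis=0)) / params.std(axis=0)
_, s, _ = np.linalg.svd(Z, full_matrices=False)
explained = s**2 / np.sum(s**2)                 # variance fraction per component
# the first component dominates, as for the mutually correlated
# global parameters of real galaxy samples
```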
According to our analysis, the values of the magnetic field are not closely related to the global parameters, hence the latter cannot be major drivers of magnetic fields. Nevertheless, the PCA and regression analysis do reveal weak correlations of $B$ with the global parameters, for example the global SFR (Sect.~\ref{s:regressions}). To probe these connections further, and noticing in Fig.~\ref{f:corel}a that the locations of galaxies depend on their category, we constructed a graph of $B$ along the Hubble sequence (Fig.~\ref{f:hubble}). There is a large diversity of observed magnetic field strengths for almost every Hubble type. The maximum values of $B$ are not restricted by the morphological type, and even dwarf galaxies (those in a starburst phase) are able to produce strong total magnetic fields. However, the lower envelope of the field strength varies with the type in a systematic way. Weaker fields appear exclusively in later Hubble types ($T>8$), and mean strengths as low as about $5\,\mu$G are not observed in normal spiral galaxies. We suspect these differences are due to density waves, which in typical spiral galaxies always force some minimal level of star-forming activity and, in turn, subsequent production of magnetic fields by the small-scale dynamo. We also notice relatively weak fields for early types of galaxies (Fig.~\ref{f:hubble}), although this part of the diagram requires more data for verification. A systematic decrease of $B$ towards the early-type galaxies is expected: in Sa ($T=1$) galaxies, massive stars usually form in small clusters, while in Sc-d ($5\le T \le 7$) objects \ion{H}{II} associations containing hundreds or thousands of OB stars are found (Kennicutt \cite{kennicutt98b}). As the stellar activity modifies the structure and dynamics of the ISM, we can suppose that magnetic field topologies and strengths change accordingly, with weaker fields occurring in a quieter ISM.
We find that the closest relationship of $B$ is with SSFR ($\rho=0.78$), described by a power law with an index $n=0.33\pm 0.03$. As this relation corresponds very well to the one determined for low-mass galaxies alone ($0.30\pm 0.04$, Chy\.zy et al. \cite{chyzy11}), it shows that the processes generating magnetic fields in dwarf and Magellanic-type galaxies are similar to those in massive spirals. The statistical sample in the present analysis (55 objects) is several times larger than the previous one, and not only supports but even strengthens the results obtained from the Local Group dwarfs. The trend is observed over three orders of magnitude in SSFR, while the global SFR spreads over more than four orders of magnitude. The three starburst galaxies with the highest SSFR (Fig.~\ref{f:corel}a) also fit the trend. Hence, we can reasonably suspect that distant galaxies with extremely high SFR (like Ultra LIRGs) would also follow this relationship. Deep radio surveys, for example with LOFAR (Hardcastle et al. \cite{hardcastle16}), can potentially provide the appropriate observational evidence. \begin{figure} \centering \includegraphics[clip,width=0.50\textwidth]{f3.pdf} \caption{Magnetic field strength $B$ along the Hubble sequence. Symbolic markers are the same as in Fig.~\ref{f:corel}. } \label{f:hubble} \end{figure} Our sample is large enough to statistically compare, for the first time, the production levels of magnetic fields in spiral, dwarf, and irregular galaxies with similar SSFR. The relevant data can be seen in the categorical plot of $B$ against SSFR in Fig.~\ref{f:corel}a. It appears that the spiral galaxies have slightly stronger fields than dwarfs (in agreement with Fig.~\ref{f:hubble}). Different galaxy mixes can thus lead to different power-law indices in the $B$-SSFR relation, which may explain the slightly different results reported in previously published works (see e.g. Van Eck et al.
\cite{vaneck15} and Sect.~\ref{s:intro}). In our sample, the total magnetic field is correlated with the density of cold molecular (H$_2$) gas but not with the warm neutral (\ion{H}{I}) medium (Figs.~\ref{f:corel}d-e). This is supported by similar results obtained by Bigiel et al. (\cite{bigiel08}) and Van Eck et al.~(\cite{vaneck15}) for different galaxy samples. According to our work, the best-fit Schmidt law (SSFR-${SM}_\mathrm{gas}$) shows an exponent $n=1.39\pm 0.20$ (Table~\ref{t:corel}), whereas Kennicutt (\cite{kennicutt98a}) found $n=1.40\pm 0.15$ for actively star-forming galaxies. Considering the above, we propose two simple interpretations of the observed $B$-SSFR relation. According to the first one, this relation partly results from the tight correlation between the radio luminosity $LR$ and the infrared luminosity, which is closely connected to the global SFR. Modelling of this relation, which can be described by a power law with an index $\beta$ ($LR \propto \mathrm{SFR}^\beta$), usually assumes that the radio luminosity is proportional to the CR production rate, which is itself proportional to the supernova rate and hence to the SFR. Different galaxy properties and environments may involve further processes, such as the escape of CRs and dust-heating UV photons, or synchrotron emission from secondary CR electrons produced by interactions of CR protons with dense molecular clouds. Assuming, in addition, energy equipartition between magnetic fields and CRs yields a formula for the radio intensity, $I \propto B^{3+\alpha}$, where $\alpha$ is the radio spectral index, which allows us to rewrite the radio-infrared relation in the form $B \propto \mathrm{SSFR}^{\beta/(3+\alpha)}$. The observed relation $B\propto \mathrm{SSFR}^{0.33}$ and a typical value of $\alpha=0.9$ result in a radio-infrared relation with $\beta=1.29$. This value is in good agreement with observations (see e.g. Heesen et al. \cite{heesen14}, Beck \cite{beck16}).
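The arithmetic behind this consistency check can be written out explicitly, using the values quoted in the text:

```python
# exponent of the fitted B - SSFR relation and a typical synchrotron
# spectral index (both values taken from the text)
n_b_ssfr = 0.33
alpha = 0.9

# equipartition gives I ~ B^(3 + alpha), so B ~ SSFR^(beta / (3 + alpha));
# inverting yields the implied radio-infrared slope LR ~ SFR^beta
beta = n_b_ssfr * (3.0 + alpha)
print(round(beta, 2))  # 1.29
```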
We base the second interpretation of the $B$-SSFR relation on the SSFR-$SM_{\mathrm{gas}}$ coupling (the Schmidt law, with the observed exponent $n=1.39\pm0.20$), which leads to $B \propto SM_{\mathrm{gas}}^{0.46}$. We then assume turbulent amplification of the magnetic field, for example by a small-scale dynamo, so that the magnetic energy scales with the turbulent energy of the gas: $B^2 \propto SM_\mathrm{gas}\, v^2$, where $v\approx10$\,km\,s$^{-1}$ is the turbulent gas velocity. For a roughly constant $v$, this gives the scaling $B\propto SM_{\mathrm{gas}}^{0.5}$, which corresponds well with the derived exponent 0.46 and the observed exponent $0.41\pm 0.10$ (Table~\ref{t:corel}). A more detailed description of the physical processes involved in the amplification of magnetic fields by a small-scale dynamo (Schleicher \& Beck \cite{schleicher13}) leads to the relationship $B\propto \mathrm{SSFR}^{1/3}$, which is very similar to the observed one. The results of the stacking experiment involving the radio-faint dwarf galaxies (the `common' sample, Sect.~\ref{s:lowmasssample}) of Roychowdhury \& Chengalur (\cite{roy12}) can also be compared with our $B$-SSFR relation. The values $B=1.4\,\mu$G and $\mathrm{SSFR}=9.8\times 10^{-4}\,\mathrm{M_{\sun}\, yr^{-1}\, kpc^{-2}}$ locate these objects significantly ($\approx 1\,\mu$G) below the trend (Fig.~\ref{f:corel}a). This difference is not likely due to errors. The value of $B$ is lower than the equivalent magnetic field strength of the cosmic microwave background, so inverse Compton losses of the relativistic electrons reduce the synchrotron emission; hence, the field strength estimated from this presumably reduced emission can be undervalued. Additionally, at such low SSFR the turbulence injection timescale (or the timescale of massive star formation) can become longer than the dissipation timescale of CR electrons and break the equipartition between magnetic fields and CRs, resulting in a decrease of the synchrotron emission and of $B$ (see Schleicher \& Beck \cite{schleicher16}).
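The exponent chain of this second interpretation can be verified in the same way, using the fitted values quoted in the text:

```python
# fitted exponents from the text: the Schmidt law SSFR ~ SM_gas^1.39
# and the B - SSFR relation B ~ SSFR^0.33
schmidt_n = 1.39
n_b_ssfr = 0.33

# composing the two power laws gives B ~ SM_gas^(0.33 * 1.39)
n_b_gas = n_b_ssfr * schmidt_n
print(round(n_b_gas, 2))  # 0.46

# equipartition with the turbulent energy, B^2 ~ SM_gas * v^2 with v roughly
# constant, predicts B ~ SM_gas^0.5 -- close to the derived 0.46 and to the
# directly fitted exponent 0.41 +/- 0.10
```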
In the case of the faint, radio-undetected dwarf galaxies of the Local Group, instead of using results from the stacking experiments (Sect.~\ref{s:lowmasssample}), we take for the purpose of analysis the upper limit $B=4\,\mu$G from Chy\.zy et al. (\cite{chyzy11}) and determine $\mathrm{SSFR}=7.3\times 10^{-5}\,\mathrm{M_{\sun}\, yr^{-1}\, kpc^{-2}}$ from the data presented in that work. The obtained position of these dwarfs lies slightly above the global $B$-SSFR trend. Therefore, these objects and those from the `common' sample were not included in the other statistical analyses. Differential rotation and the large-scale dynamo are indispensable to account for the ordered part of the magnetic field in galaxies. In the work of Tabatabaei et al. (\cite{taba16}), only the ordered part of the magnetic field was investigated for a sample of 26 galaxies and found to be correlated with the dynamical mass and the rotational velocities of galaxies. In our sample, only the total field was analysed, but it also showed relationships with $V$ and $M$ of roughly similar strength ($\rho=0.30$--$0.35$). As the ordered field contributes only little to the total field, the argument of Tabatabaei et al. (\cite{taba16}) that the massive, faster-rotating galaxies compress and shear the turbulent magnetic field, leading to stronger ordered fields, is not valid for our $B-M$ and $B-V$ relations (see Fig.~\ref{f:corel}i). As shown in Sect.~\ref{s:regressions}, the total magnetic field $B$ in our objects is strongly associated with the star formation rate ($\rho=0.68$), and even more strongly with the SSFR ($\rho=0.78$). Such relationships can be explained by the turbulent energy injected into the ISM through supernova explosions and the amplification of magnetic fields by a small-scale dynamo (Schleicher \& Beck \cite{schleicher16}).
Hence, we suspect that $B$ is directly related to the SSFR or $SM_{\mathrm{H2}}$, while, since the amount of molecular gas available for star formation is related to the total mass of galaxies (Sect.~\ref{s:regressions}), the relation of $B$ with galactic mass or rotation is only an indirect one. In our sample, the $B$-SSFR relation is also fulfilled by dwarf galaxies and massive starbursts, which usually exhibit slow or disordered rotation. We have shown that even dwarf galaxies with slow rotation and low mass (e.g. IC\,10) can develop strong magnetic fields in the starburst phase. Therefore, for our sample of galaxies, it is the small-scale dynamo mechanism rather than the large-scale one that decisively determines the magnetic field strength. We note that some relations between $B$ and the intensive variables presented throughout this work could be stronger if they were determined only over regions of high star-forming activity. In our approach, we used average values based on the full extent of the galaxies. Further investigations of these different approaches, involving a larger sample of galaxies from the upcoming large-area radio continuum surveys with the LOFAR (Shimwell et al.~\cite{shimwell16}) and APERTIF (Verheijen et al.~\cite{verheijen09}) radio telescopes, are highly desirable. \begin{acknowledgements} This research was supported by the Polish National Science Centre through grant 2012/07/B/ST9/04404. We thank Dr.~Rainer Beck and the anonymous referee for the detailed and constructive comments. We acknowledge the use of the HyperLeda (http://leda.univ-lyon1.fr) and NED (http://nedwww.ipac.caltech.edu) databases. \end{acknowledgements} \begin{appendix} \section{Information on galaxies} \begin{table*}[t] \caption{Breakdown of basic properties of the galaxy sample by category.
} \centering \begin{tabular}{lccccccccc} \hline\hline Galaxy & Hubble & $B$ & SFR & $M_{\mathrm{HI}}$ & $M_{\mathrm{H2}}$ & $LK$ & $R$ & $V$ & References \\ & type $T$ & $\mu \mathrm{G}$ & $\mathrm{M_{\sun}}\,\mathrm{yr}^{-1}$ & $10^{8}\,\mathrm{M_{\sun}}$ & $10^{8}\,\mathrm{M_{\sun}}$ & $10^{8}\,\mathrm{erg\,s}^{-1}$ & kpc & $\mathrm{km\,s}^{-1}$ & for Columns\\ 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 3-7, 9 \\ \hline Dwarf and Magellanic-type & & & & & & & & & \\\hline NGC292(SMC) &8.9 & 3.2 & 0.05 & 4.2 & 0.3 & 6.8 & 2.9 & 43& 1, 1, 1, 40, 50, 16 \\ NGC1569 & 9.6 & 14.0 & 0.25 & 0.6 & 0.4 & 14 & 1.8 & 39& 1, 12, 23, 58, 24, 24\\ NGC2976 & 5.2 & 5.7& 0.09 & 2.0 & 0.6 & 26 & 3.1 & 58& 1, 13, 13, 13, 24 , 24 \\ NGC3239 & 9.8 & 6.9& 0.25 & 13 &N/A & 8.2 & 6.0 & 95& 1, 1, 1, - , 24, 24\\ NGC4027 & 7.8 & 9.0 & 1.82 & 40 &4.7& 439 & 10 &98& 1, 1, 1, 41, 24, 24 \\ NGC4214 & 9.8 & 13.0 & 0.11 & 5.0 & 0.1 & 10 & 3.6 & 42& 54, 13, 13, 13, 24, 13\\ NGC4236 & 8.0 & 4.4& 0.11 & 15 & 0.9& 9.4 & 14 &87& 1, 1, 1, 55, 24, 51 \\ NGC4449 & 9.8 & 12.0 & 0.37 & 16 &0.1& 46 & 3.8 & 59& 1, 13, 13, 13, 49, 24\\ NGC4605 & 5.0 & 6.4 & 0.17 & 2.0 & 0.4 & 48 & 4.6 & 61& 1, 1, 1, 56, 24, 24\\ NGC4618 & 8.6 & 6.0 & 0.18 & 11 &N/A & 37 & 4.8 & 66& 1, 1, 1, -, 24, 24\\ NGC4656 & 9.0 & 4.7 & 0.85 & 50 &N/A & 7.2 & 18 &60 & 1, 1, 1, -, 24, 24\\ NGC5204 & 8.9 & 6.3 & 0.05 & 6.3 & N/A & 5.8 & 3.4 & 55& 1, 1, 1, -, 24, 24\\ NGC6822 & 9.8 & 4.0 & 0.02 & 1.4 & 0.2 & 1.1 & 1.1 & 92 & 1, 1, 1, 42, 49, 26\\ UGC11861 & 7.6 & 5.4 & 0.48 & 87 &N/A & 183 & 10 &114& 1, 1, 1, -, 24, 24\\ UGC5456 & 9.3 & 3.4 & 0.02 & 1.9 & N/A & 5.7& 2.5 & 26& 10, 14, 24, -, 24, 24\\ HoII & 9.9 & 6.6 & 0.05 & 7.9 & 0.4 & 8.4& 3.9 & 29& 4, 13, 13, 13, 24, 13\\ IC10 & 9.9 & 13.5 & 0.06 & 0.9 & 0.6 & 3.8& 0.7 & 52& 5, 15, 1, 42 , 49 , 16\\ IC1613 & 9.9 & 2.8 & $<$0.01 &0.6 & 0.1 & 0.2 & 1.7 & 26& 1, 1, 1, 42, 50, 16\\ IC2574 & 8.9 & 4.0 & 0.07 & 19 &0.8 & 1.8 & 7.7 & 46& 1, 13, 13, 13, 50, 24\\ LMC & 9.1 & 4.3& 0.26 & 5.0 & 1.4 & 31 & 
4.7 & 46& 1, 1, 1, 43, 50, 16\\\hline Spiral & & & & & & & & & \\\hline NGC0224(M31) &3.0 & 7.0 & 0.60 & 39 &2.7 & 421 & 19 &256& 11, 16, 25, 57, 49, 16\\ NGC598(M33) &5.9 & 6.1 & 0.24 & 14 &3.3 & 35 & 8.7 & 100& 11, 15, 26, 26, 49, 16\\ NGC628 & 5.2 & 6.0 & 0.81 & 50 &10 & 213 & 11 &217& 4, 13, 13, 13, 49, 13\\ NGC891 & 3.1 & 13.0 & 3.48 & 80 &35 & 647 & 16 &212& 11, 17, 27, 23, 49, 16\\ NGC925 & 7.0 & 6.0 & 0.56 & 63 &2.5 & 110 & 14 &104& 4, 13, 13, 13, 24, 13\\ NGC1097 & 3.3 & 13.0 & 5.90 & 83 &94 & 1390 & 19 &219& 11, 12, 28, 44, 49, 16\\ NGC1365 & 3.2 & 9.0 & 7.00 & 130 &170 & 2229 & 31 &198& 11, 16, 30, 30, 49, 16\\ NGC1566 & 4.0 & 13.0 & 3.53 & 74 &13 & 140 & 7.4 & 123& 11, 12, 31, 52, 24, 31\\ NGC2403 & 6.0 & 5.7 & 0.38 & 32 &0.2 & 74 & 10 & 120& 4, 13, 13, 13, 49, 13\\ NGC2841 & 2.9 & 7.2 & 0.74 & 126 &3.2 & 1633 & 16 & 319& 4, 13, 13, 13, 49, 13\\ NGC2903 & 4.0 & 7.9 & 3.00 & 44 &22 & 662 & 16 &188& 4, 8, 8, 8, 49, 24\\ NGC3031(M81) &2.4 & 7.5 & 0.76 & 27 &2.2 & 844 & 14 &216& 11, 18, 32, 23, 49, 24\\ NGC3184 & 5.9 & 7.2 & 0.90 & 40 &16 & 261 & 11 &208& 4, 13, 13, 13, 24, 13\\ NGC3198 & 5.2 & 4.9 & 0.93 & 126 &6.3 & 303 & 17 &137& 4, 13, 13, 13, 24, 13\\ NGC3627 & 3.1 & 10.4 & 2.22 & 10 &13 & 838 & 12 &174& 4, 13, 13, 13, 49, 13\\ NGC3628 & 3.1 & 9.0 & 2.15 & 34 &37 & 365 & 14 &215& 9, 12, 36, 23, 49, 16\\ NGC3992 (M109)&4.0& 6.0 & 1.40 & 80 &N/A & 2319 & 27 &295& 11, 21, 24, -, 49, 24\\ NGC4254 (M99)&5.2& 16.0 & 5.34 & 17 &85 & 993 & 13 &299& 11, 12, 23, 23, 24, 24\\ NGC4414 & 5.2 & 15.0 & 4.20 & 41 &24 & 1297 & 10 &217& 11, 19, 24, -, 24, 19\\ NGC4594 (M104)&1.1& 6.0 & 0.19 & 13 & 0.1 & 1831 & 11 & 232 & 11, 12, 37, 55, 49, 29\\ NGC4736 (M94)&2.3& 11.7 &0.48 &5.0 & 3.9 & 428 & 3.3 & 181 & 4, 13, 13, 13, 49, 16\\ NGC4826(M64) &2.2 & 5.9 & 0.28 & 5.5& 18 & 494 & 8.1 & 152 & 8, 12, 8, 8, 49, 24\\ NGC5055 & 4.0 & 8.5 & 2.12 & 126 &50 & 1257 & 18 &218 & 4, 13, 13, 13, 49, 13\\ NGC5194(M51) &4.0 & 13.0 & 3.13 & 32 &25 & 881 & 13 &219 & 4, 13, 13, 13, 49, 13\\ 
NGC5236(M83) &5.0 & 12.0 & 2.34 & 90 &32 & 710 & 8.9& 170 & 7, 12, 13, 23, 49, 22\\ NGC5457(M101)&5.9& 6.4 & 0.57 & 142 &38 & 747 & 25 &274 & 3, 3, 38, 45, 49, 38 \\ NGC5775 & 5.1 & 11.0 & 3.60 & 16 &75 & 1224 & 16 &187 & 11, 20, 23, 23, 24, 24\\ NGC5907 & 5.2 & 5.0 & 2.17 & 69 &9.0 & 1160 & 30 &226 & 11, 17, 35, 35, 49, 16\\ NGC6946 & 5.9 & 12.7 & 3.24 & 63 &40 & 540 & 9.9& 314 & 4, 13, 13, 13, 49, 16\\ NGC7331 & 3.9 & 9.4& 2.99 & 126 &50 & 1825 & 22 &252 & 4, 13, 13, 13, 49, 13\\ IC342 & 6.0 & 9.0 & 1.89 & 16 &75 & 352 & 10 &230 & 11, 12, 39, 23, 49, 16 \\\hline Massive starburst/LIRG & & & & & & & & & \\\hline NGC253 & 5.1 & 15.0 & 4.94 & 19 &70 & 1051 & 15 & 189 & 11, 16, 28, 23, 49, 16\\ NGC3034 (M82)&7.2& 35.0 & 7.87 & 7.5 & 20 & 451 & 6.3 & 200 & 2, 12, 32, 47, 49, 53\\ NGC3256 & 4.0 & 25.0 & 80.7 &62 &710 & 3793 & 30 & 123 & 6, 6, 33, 48, 24, 33 \\ ARP220 & 9.3 & 27.0 & 150 &46 &275 & 1407 & 17& 175 & 6, 6, 34, 46, 24, 24\\\hline \hline \end{tabular} \label{t:basic} \begin{flushleft} Notes: Data for Col. 2, and 8 are from HyperLeda and NED. References: (1) Jurusik et al.~\cite{jurusik14}, (2) Adebahr et al.~\cite{adebahr13} (3) Berkhuijsen ~\cite{berkhuijsen16}, (4) Braun et al. ~\cite{braun07}, (5) Chy\.zy et al.~\cite{chyzy16}, (6) Drzazga et al. ~\cite{drzazga11}, (7) Neininger et al. ~\cite{neininger93}, (8) Heesen et al. \cite{heesen14}, (9) Nikiel - Wroczy\'nski et al. ~\cite{nikiel13}, (10) this paper, (11) Van Eck et al. ~\cite{vaneck15}, (12) Calzetti et al. ~\cite{calzetti10}, (13) Leroy et al. ~\cite{leroy08}, (14) Roychowdhury et al. ~\cite{roy12}, (15) Woo et al ~\cite{wo08}, (16) Tabatabaei et al. ~\cite{taba16}, (17) Misiriotis et al. ~\cite{misiriotis01}, (18) Karachentsev et al. ~\cite{kara07}, (19) de Blok et al. ~\cite{deblock14}, (20) Irvin ~\cite{irvin94}, (21) Martinet \& Friedli ~\cite{martinet97}, (22) Heald et al. ~\cite{heald16}, (23) Liu et al. ~\cite{liu15}, (24) LEDA, (25) Cram et al. ~\cite{cram80}, (26) Gratier et al. 
~\cite{gratier10}, (27) Sancisi \& Allen ~\cite{sancisi79}, (28) Koribalski et al. ~\cite{koribalski04}, (29) van der Marel et al. ~\cite{marel94}, (30) Lindblad ~\cite{lindblad99}, (31) Pence et al. ~\cite{pence90}, (32) Chynoweth et al. ~\cite{chynoweth08}, (33) English et al. ~\cite{english03}, (34) Baan et al. ~\cite{baan87}, (35) Dumke et al. ~\cite{dumke97}, (36) Huchtmeier et al. ~\cite{huchtmeier85}, (37) Bajaja et al. ~\cite{bajaja84}, (38) Walter et al. ~\cite{walter08}, (39) Rots ~\cite{rots79}, (40) Leroy et al. ~\cite{leroy07}, (41) Casasola et al. ~\cite{casasola04}, (42) Mateo ~\cite{mateo98}, (43) Cohen et al. ~\cite{cohen88}, (44) Crosthwaite ~\cite{crosthwaite01}, (45) Kenney et al. ~\cite{kenney91}, (46) Papadopoulos et al. ~\cite{papa12}, (47) Young \& Scoville ~\cite{young84}, (48) Sargent et al. ~\cite{sargent89}, (49) Jarrett et al. ~\cite{jarrett03}, (50) NED, (51) Chy\.zy et al. ~\cite{chyzy11}, (52) Combes et al. ~\cite{combes14}, (53) Sofue et al. ~\cite{sofue99}, (54) Kepley et al. ~\cite{kepley11}, (55) Wilson et al. ~\cite{wilson12}, (56) Throson \& Bally ~\cite{throson87}, (57) Dame et al. ~\cite{dame93}, (58) Israel ~\cite{israel88} \end{flushleft} \end{table*} \end{appendix}
Parents, Governmental Leaders Meet at National Parents Meet to Celebrate Progress in the Disability Movement in India By: 3BL Media: Corporate Social Responsibility, Energy and Health News SOURCE: Keystone Human Services The 24th National Parents Meet in Jalandhar, Punjab was a potent mix of passion, gritty determination, and celebration. From across India, several hundred family members of people with developmental disabilities came together, along with their sons and daughters, to meet, plan, share, learn, and celebrate the hard-won gains they have made. As has been the case in most places, it is the coming together of parent groups in a common voice that has formed the backbone of the advocacy movement for people with intellectual disabilities. The disability movement in India is a smaller community than you might think, so it was no surprise for me to reconnect with people I had met at other events all across India. In this way, I had a small taste of the kinds of strong connections that these families have with each other through their long and faithful work towards creating a world where their children have access to the good things of Indian life. Governmental leaders and professionals came to speak and listen, and were respectful to parent leaders, recognizing their natural authority and their experience. Generational leadership changes within the parents association, the self-advocacy movement, and the field itself speak to the change that is in the air for India, for the many people with disability present at the gathering, and for families. As an auspicious sign of "making it real," which was the conference theme, a young man with disability who was present for the conference was introduced to the Pro-Chancellor of the University where the event was taking place. In the course of that very day, he was interviewed for, offered, and accepted a position at the University, which includes housing and full benefits.
Now that speaks to the power of community networks, the importance of people with disability being part of regular life, and the potential that typical people can and will open doors to welcome people with disability when given the opportunity. Tweet me: Parents, Governmental Leaders Meet at PARIVAAR's National Parents Meet in #India http://bit.ly/2f83cZ9 via @KeystoneIndia #inclusion Betsy Neuville Keystone Institute India eneuville@keystonehumanservices.org KEYWORDS: Finance & Socially Responsible Investment, Diversity & Human Resources, India, Disabilities, inclusion, #InclusionWorks, Keystone Human Services, Keystone Institute India
\begin{abstract} The seminal work of \cite{jacot2018neural} demonstrated that training a neural network under a certain parameterization is equivalent, as width goes to infinity, to performing a certain kernel method. This equivalence opened a promising direction: applying results from the rich literature on kernel methods to neural nets, which are much harder to tackle directly. The present survey covers key results on kernel convergence as width goes to infinity, finite-width corrections, applications, and a discussion of the limitations of the corresponding method. \end{abstract} \section{Definition and the explicit solution for square loss} \label{sec:definition} Consider a generic parametric model $f(x; \theta): \, \mathcal{X} \times \mathbb{R}^N \to \mathbb{R}$, differentiable with respect to the weights $\theta$. We aim to minimize the square loss over a dataset $(\vec x, \vec y)$ of size $m$: $\frac{1}{2} \sum_{j=1}^m (y_j - f(x_j; \theta))^2 \to \min_\theta$. Continuous-time gradient descent dynamics (gradient flow) corresponds to the following ordinary differential equation (ODE): \begin{equation} \dot\theta_t = -\nabla_\theta\left(\frac{1}{2} \sum_{j=1}^m (y_j - f(x_j; \theta_t))^2\right) = \sum_{j=1}^m (y_j - f(x_j; \theta_t)) \nabla_\theta f(x_j; \theta_t). \end{equation} Let us abbreviate the prediction at a given data point $x$ at time $t$, $f(x; \theta_t)$, as $f_t(x)$. Under the dynamics above, this quantity evolves as \begin{equation} \dot f_t(x) = \dot\theta_t^T \nabla_\theta f_t(x) = \sum_{j=1}^m (y_j - f_t(x_j)) \nabla_\theta^T f_t(x_j) \nabla_\theta f_t(x). \label{eq:f_t_dynamics} \end{equation} If we view $\nabla_\theta f_t(x)$ as a feature map $\Phi_t: \, \mathcal{X} \to \mathbb{R}^N$, the scalar product above becomes a kernel evaluated at the pair $(x_j,x)$. This kernel is called the empirical neural tangent kernel (NTK) and is denoted by $\hat\Theta_t$: \begin{equation} \hat\Theta_t(x,x') = \nabla_\theta^T f_t(x) \nabla_\theta f_t(x').
\end{equation} This definition allows for a shorter representation of the prediction dynamics (\ref{eq:f_t_dynamics}): \begin{equation} \dot f_t(x) = \hat\Theta_t(x,\vec x) (\vec y - f_t(\vec x)), \label{eq:f_t_dynamics_emp_ntk} \end{equation} where, by convention, $\hat\Theta_t(x,\vec x) \in \mathbb{R}^{1 \times m}$. Assume that the empirical NTK does not evolve with time, i.e. $\hat\Theta_t(x,x') = \hat\Theta_0(x,x')$ $\forall x,x' \in \mathcal{X}$. This assumption is equivalent to assuming that the model $f(x;\theta)$ is linear as a function of its weights: \begin{equation} f(x; \theta) = f(x; \theta_0) + \nabla_\theta^T f(x; \theta_0) (\theta - \theta_0). \end{equation} When the kernel is constant, Eq.(\ref{eq:f_t_dynamics_emp_ntk}) is easily integrable. Indeed, on the train dataset, \begin{equation} \dot f_t(\vec x) = \hat\Theta_0(\vec x,\vec x) (\vec y - f_t(\vec x)), \label{eq:f_t_dynamics_train} \end{equation} which gives \begin{equation} f_t(\vec x) = f_0(\vec x) - \left(I - e^{-\hat\Theta_0(\vec x, \vec x) t}\right) (f_0(\vec x) - \vec y). \end{equation} Plugging this back into Eq.(\ref{eq:f_t_dynamics_emp_ntk}) gives \begin{equation} \dot f_t(x) = \hat\Theta_0(x,\vec x) e^{-\hat\Theta_0(\vec x, \vec x) t} (\vec y - f_0(\vec x)), \end{equation} and finally, \begin{equation} f_t(x) = f_0(x) - \hat\Theta_0(x, \vec x) \hat\Theta_0^{-1}(\vec x, \vec x) \left(I - e^{-\hat\Theta_0(\vec x, \vec x) t}\right) (f_0(\vec x) - \vec y). \label{eq:lin_solution_square_loss} \end{equation} While the exact solution above is based on the constant kernel assumption, one can prove that the kernel is indeed nearly constant in certain settings; see \cref{sec:convergence}. This allows one to transfer results that hold for linearized models to the original ones. For example, $f_t(\vec x)$ converges to $\vec y$ (i.e.
the model learns the dataset) as long as the Gram matrix is positive definite: $\hat\Theta_0(\vec x,\vec x) \geq \lambda_0$ for some $\lambda_0 > 0$, see Eq.(\ref{eq:f_t_dynamics_train}). The same result holds without the constant kernel assumption, as long as $\hat\Theta_t(\vec x,\vec x)$ stays sufficiently close to $\hat\Theta_0(\vec x,\vec x)$, and therefore, say, $\hat\Theta_t(\vec x,\vec x) \geq \lambda_0/2$. Indeed, \begin{equation} \frac{d}{dt}\left(\frac{1}{2} \| \vec y - f_t(\vec x) \|_2^2\right) = -(\vec y - f_t(\vec x))^T \hat\Theta_t(\vec x, \vec x) (\vec y - f_t(\vec x)) \leq -\frac{\lambda_0}{2} \| \vec y - f_t(\vec x) \|_2^2, \end{equation} which gives \begin{equation} \| \vec y - f_t(\vec x) \|_2^2 \leq e^{-\lambda_0 t} \| \vec y - f_0(\vec x) \|_2^2 \to 0 \quad \text{as $t \to \infty$}; \end{equation} see \cite{du2018gradient} for the formal result. This result is not trivial, since loss surfaces of generic neural nets are non-convex, and therefore any local optimization method (e.g. the gradient flow) may get stuck in a spurious local minimum. See \cite{arora2019fine} for other results of a similar kind. Also, if one assumes the kernel to be nearly constant, one can identify certain pathologies affecting the learning process by analyzing the initial kernel: see \cite{martens2021rapid} discussing trainability of very deep nets and \cite{dupuis2021dnn,tancik2020fourier} fixing blurry results of image regression. Finally, the exact solution (\ref{eq:lin_solution_square_loss}) can be used as a substitute for the usual gradient descent training routine. A naive approach for evaluating Eq.(\ref{eq:lin_solution_square_loss}) would be to compute the initial kernel $\hat\Theta_0(\vec x, \vec x)$ and then to invert it. Naively computing the kernel requires $O(N m^2)$ time and $O(m^2)$ memory, while inverting it takes $O(m^3)$ more time. Such an approach is infeasible for datasets of realistic sizes (i.e. 
$m \gtrsim 10^5$), asking for major optimizations; see \cite{novak2019neural,novakfast,meanti2020kernel}. Nevertheless, for $m \lesssim 10^4$, the direct approach is feasible and gives promising results; see \cite{arora2019harnessing}. Also, in certain scenarios, the kernel can be efficiently scaled from small $m$ to larger ones; see \cite{radhakrishnan2021simple}. \section{Kernel convergence} \label{sec:convergence} The goal of this section is to validate the constant kernel assumption: $\hat\Theta_t(x,x') = \hat\Theta_0(x,x')$ $\forall x,x' \in \mathcal{X}$. The main result is that, under a certain parameterization, the empirical NTK of a neural network becomes constant as width goes to infinity. Before stating this result formally, we provide an illustrative example. Consider a neural network with one hidden layer, scalar input, and Gaussian-initialized weights: \begin{equation} f(x; a_{1:n}, w_{1:n}) = \sum_{i=1}^n a_i \phi(w_i x), \quad a_{1:n} \sim \mathcal{N}(0, n^{-1} I), \quad w_{1:n} \sim \mathcal{N}(0, I). \label{eq:1_hid_net_standard} \end{equation} Here $n$ is the width of the hidden layer; following a standard initialization scheme \cite{he2015delving}, the initialization variance of each layer is inversely proportional to the number of input neurons. The above parameterization of the network is the one typically used in practice; we shall refer to it as the standard parameterization. However, the parameterization we need is a different one: \begin{equation} f(x; a_{1:n}, w_{1:n}) = \frac{1}{\sqrt{n}} \sum_{i=1}^n a_i \phi(w_i x), \quad a_{1:n} \sim \mathcal{N}(0, I), \quad w_{1:n} \sim \mathcal{N}(0, I). \label{eq:1_hid_net_ntk} \end{equation} We shall refer to it as the NTK parameterization. Note that it does not alter the distribution of neurons, both hidden and output, at initialization, but it does alter the gradient flow: \begin{equation} \dot a_k = \frac{1}{\sqrt{n}} \sum_{j=1}^m (y_j - f_t(x_j)) \phi(w_k x_j), \quad \dot w_k = \frac{1}{\sqrt{n}} \sum_{j=1}^m (y_j - f_t(x_j)) a_k \phi'(w_k x_j) x_j.
\end{equation} Here the input and output weights receive $O(n^{-1/2})$ increments, while both of them are $O(1)$ at initialization. Hence $a_k(t) \to a_k(0)$ and $w_k(t) \to w_k(0)$ as $n \to \infty$ for any fixed $k \in \mathbb{N}$ and $t \in \mathbb{R}_+$. Compare with the gradient flow under standard parameterization: \begin{equation} \dot a_k = \sum_{j=1}^m (y_j - f(x_j)) \phi(w_k x_j), \quad \dot w_k = \sum_{j=1}^m (y_j - f(x_j)) a_k \phi'(w_k x_j) x_j. \end{equation} Here the output weights are $O(n^{-1/2})$ at initialization but receive $O(1)$ increments at $t=0$, while the input weights are $O(1)$ at initialization but receive $O(n^{-1/2})$ increments at $t=0$. Let us write the NTK under NTK parameterization: \begin{multline} \hat\Theta_t(x,x') = \sum_{i=1}^n \left(\partial_{a_i} f(x) \partial_{a_i} f(x') + \partial_{w_i} f(x) \partial_{w_i} f(x')\right) =\\= \frac{1}{n} \sum_{i=1}^n \left(\phi(w_i(t) x) \phi(w_i(t) x') + a_i^2(t) \phi'(w_i(t) x) \phi'(w_i(t) x') x x'\right). \end{multline} Since $a_k(t) \to a_k(0)$ and $w_k(t) \to w_k(0)$ as $n \to \infty$ for any fixed $k \in \mathbb{N}$ and $t \in \mathbb{R}_+$, the above expression is asymptotically equivalent to \begin{equation} \hat\Theta_0(x,x') = \frac{1}{n} \sum_{i=1}^n \left(\phi(w_i(0) x) \phi(w_i(0) x') + a_i^2(0) \phi'(w_i(0) x) \phi'(w_i(0) x') x x'\right), \end{equation} which converges (almost surely) to \begin{equation} \Theta(x,x') = \mathbb{E}\,_{a,w \sim \mathcal{N}(0,1)} \left(\phi(w x) \phi(w x') + a^2 \phi'(w x) \phi'(w x') x x'\right) \end{equation} as $n \to \infty$ due to the (strong) Law of Large Numbers. The limit kernel $\Theta(x,x')$ depends neither on the timestep $t$ nor on the initialization. This kernel is typically referred to as the NTK, in contrast to the empirical NTK $\hat\Theta_t$. Since under standard parameterization the weights receive increments asymptotically at least comparable to initialization, one cannot expect the empirical NTK to stop evolving as $n \to \infty$ in this setting.
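The Law-of-Large-Numbers argument above is easy to check numerically. Below is a minimal numpy sketch (our own illustration, with $\phi = \tanh$ and scalar inputs, neither of which is fixed by the text): sampling the empirical NTK at initialization for several widths shows its fluctuations around the limit kernel shrinking roughly like $n^{-1/2}$.

```python
import numpy as np

def empirical_ntk(x, xp, n, rng):
    """Empirical NTK at initialization of f(x) = n^{-1/2} sum_i a_i tanh(w_i x)."""
    a = rng.standard_normal(n)
    w = rng.standard_normal(n)
    dphi = lambda z: 1.0 / np.cosh(z) ** 2  # tanh'(z)
    # (1/n) sum_i [phi(w_i x) phi(w_i x') + a_i^2 phi'(w_i x) phi'(w_i x') x x']
    return np.mean(np.tanh(w * x) * np.tanh(w * xp)
                   + a ** 2 * dphi(w * x) * dphi(w * xp) * x * xp)

rng = np.random.default_rng(0)
x, xp = 0.7, -1.3
# spread of the kernel over re-initializations, at two widths
std_narrow = np.std([empirical_ntk(x, xp, 100, rng) for _ in range(100)])
std_wide = np.std([empirical_ntk(x, xp, 10_000, rng) for _ in range(100)])
print(std_narrow, std_wide)  # fluctuations shrink roughly like n^{-1/2}
```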
Moreover, the initial empirical NTK diverges with width: \begin{multline} \hat\Theta_0(x,x') = \sum_{i=1}^n \left(\phi(w_i(0) x) \phi(w_i(0) x') + a_i^2(0) \phi'(w_i(0) x) \phi'(w_i(0) x') x x'\right) \sim\\\sim n \times \mathbb{E}\,_{w \sim \mathcal{N}(0,1)} \phi(w x) \phi(w x'). \end{multline} The above kernel convergence result holds in more general settings. Consider a fully-connected network with $L$ layers under NTK parameterization: \begin{equation} f(x) = h_L(x), \quad h_l(x) = \frac{1}{\sqrt{n_{l-1}}} W_l x_{l-1}(x), \quad x_{l-1}(x) = \phi(h_{l-1}(x)), \quad x_0(x) = x, \end{equation} where $W_1 \in \mathbb{R}^{n_1 \times n_0}$, $W_L \in \mathbb{R}^{1 \times n_{L-1}}$, and $W_l \in \mathbb{R}^{n_l \times n_{l-1}}$ for all other $l$. Here all weights are initialized with independent standard Gaussians. Suppose we aim to optimize a generic differentiable loss $\ell$ instead of the quadratic one: \begin{equation} \dot\theta_t = -\nabla_\theta\left(\sum_{j=1}^m \ell(y_j, f(x_j; \theta_t))\right) = -\sum_{j=1}^m \left.\frac{\partial \ell(y_j, z)}{\partial z}\right|_{z=f(x_j; \theta_t)} \nabla_\theta f(x_j; \theta_t), \end{equation} where $\theta$ is now the concatenation of all weights $W_{1:L}$. The seminal work of \cite{jacot2018neural} proves the following: \begin{theorem}[\cite{jacot2018neural}] Under the conditions above, for $\phi$ being $C^2$ and Lipschitz and $\ell$ being $C^1$ and Lipschitz, $\hat\Theta_t(x,x') \to \Theta(x,x')$ in probability as $n_{1:L-1} \to \infty$ sequentially $\forall x,x' \in \mathcal{X}$ $\forall t \geq 0$. \end{theorem} In fact, the theorem above can be generalized far beyond fully-connected nets with smooth activation functions. Define a tensor program as a set of initial variables of certain types and a sequence of operations. Each of the operations generates a new variable by acting on previously generated ones.
The variable types are \begin{enumerate} \item $\mathsf{A}$: $n \times n$ matrices with iid $\mathcal{N}(0,1)$ entries; \item $\mathsf{G}$: vectors of size $n$ with asymptotically iid Gaussian entries; \item $\mathsf{H}$: images of $\mathsf{G}$-vars by coordinatewise nonlinearities. \end{enumerate} The operations are \begin{enumerate} \item $\mathrm{Trsp}$: $W: \mathsf{A} \to W^\top: \mathsf{A}$; \item $\mathrm{MatMul}$: $(W: \mathsf{A}, \; x: \mathsf{H}) \to \frac{1}{\sqrt{n}} W x: \mathsf{G}$; \item {$\mathrm{LinComb}$: $(\{x_i: \mathsf{G}, \; a_i \in \mathbb{R}\}_{i=1}^k) \to \sum_{i=1}^k a_i x_i: \mathsf{G}$;} \item $\mathrm{Nonlin}$: $(\{x_i: \mathsf{G}\}_{i=1}^k, \; \phi: \mathbb{R}^k \to \mathbb{R}) \to \phi(x_{1:k}): \mathsf{H}$. \end{enumerate} The set of initial variables consists of variables of $\mathsf{A}$-type and $\mathsf{G}$-type. As for input $\mathsf{G}$-vars, we sample $\{x_\alpha: \text{$x$ is an input G-var}\} \sim \mathcal{N}(\mu^{in}, \Sigma^{in})$ $\forall \alpha \in [n]$. The above formalism allows one to express forward and backward passes of a very wide class of neural nets (including RNNs, ResNets, and Transformers). Even though none of the operations above generates new $\mathsf{A}$-vars (new weights), the whole gradient descent training process can be expressed as a single tensor program by backtracking the gradient steps. The real power of tensor programs comes from the following theorem: \begin{theorem}[``Master theorem'', \cite{yang2020tensor_iii}] \label{thm:master_theorem} Consider a tensor program with $M$ $\mathsf{G}$-vars, under the above assumptions. Suppose all the nonlinearities $\phi$ and a function $\psi: \, \mathbb{R}^M \to \mathbb{R}$ are polynomially bounded. Then the following holds: \begin{equation} \frac{1}{n} \sum_{\alpha=1}^n \psi(g^1_\alpha,\ldots,g^M_\alpha) \to \mathbb{E}\,_{Z \sim \mathcal{N}(\mu,\Sigma)} \psi(Z) \end{equation} a.s. as $n \to \infty$, where $\mu$ and $\Sigma$ can be computed using certain recurrent rules.
\end{theorem} It is possible to define the empirical NTK of a tensor program and express it in the form $\frac{1}{n} \sum_{\alpha=1}^n \psi(g^1_\alpha,\ldots,g^M_\alpha)$ for a certain function $\psi$. Then the kernel converges by virtue of the above theorem. See \cite{yang2020tensor_ii} for the proof of initial kernel convergence and \cite{yang2021tensor_iib} for the proof of kernel convergence for any timestep. As an illustration, recall the two-layered net considered at the beginning of the present section. Its empirical NTK is given by \begin{equation} \hat\Theta_0(x,x') = \frac{1}{n} \sum_{i=1}^n \left(\phi(w_i(0) x) \phi(w_i(0) x') + a_i^2(0) \phi'(w_i(0) x) \phi'(w_i(0) x') x x'\right). \end{equation} Here $\mathsf{G}$-vars are $g^1 = w(0) x$, $g^2 = w(0) x'$, $g^3 = a(0) x$, $g^4 = a(0) x'$. Taking $\psi(g^1_\alpha,\ldots,g^4_\alpha) = \phi(g^1_\alpha) \phi(g^2_\alpha) + \phi'(g^1_\alpha) \phi'(g^2_\alpha) g^3_\alpha g^4_\alpha$ allows for explicit application of Theorem \ref{thm:master_theorem}. \section{Finite-width corrections} \label{sec:finite_width} While the results discussed in \cref{sec:convergence} hold in the limit of infinite width, they are not directly applicable to real-life finite-width nets for obvious reasons. This motivates one to introduce finite-width corrections for the limit NTK. First, define a higher-order kernel: \begin{equation} O_{s,t}(x_{1:s}) = \nabla^T_\theta O_{s-1,t}(x_{1:s-1}) \nabla_\theta f_t(x_s). \end{equation} Put $O_{1,t}(x_1) = f_t(x_1)$; this gives $O_{2,t}(x_1,x_2) = \hat\Theta_t(x_1,x_2)$. Consider a gradient flow optimization process under square loss: \begin{equation} \dot\theta_t = \sum_{j=1}^m (y_j - f_t(x_j)) \nabla_\theta f_t(x_j). \end{equation} Under this process, the $s$-order kernel evolves as \begin{equation} \dot O_{s,t}(x_{1:s}) = \nabla^T_\theta O_{s,t}(x_{1:s}) \dot\theta = O_{s+1,t}(x_{1:s},\vec x) (\vec y - f_t(\vec x)). 
\end{equation} This gives an infinite system of ODEs governing the evolution of the kernels. If our goal is to obtain a solution only up to order $n^{-1}$, can we truncate this initially infinite system? How many equations should we keep? In order to answer these questions, let us estimate the order of growth for $O_{s,t}$. Following \cite{Dyer2020Asymptotics}, we start with the definition of a correlation function. Let us fix $t=0$ and omit the corresponding subscript for now. Define a rank-$k$ derivative tensor $T_{\mu_1 \ldots \mu_k}$ as follows: \begin{equation} T_{\mu_1 \ldots \mu_k}(x; f) = \frac{\partial^k f(x)}{\partial \theta^{\mu_1} \ldots \partial \theta^{\mu_k}}. \end{equation} For $k=0$ we define $T(x; f) = f(x)$. We are now ready to define a correlation function $C$: \begin{equation} C(x_1,\ldots,x_m) = \sum_{\mu_1,\ldots,\mu_{k_m}} \Delta_{\mu_1 \ldots \mu_{k_m}}^{(\pi)} \mathbb{E}\,_\theta \left( T_{\mu_1 \ldots \mu_{k_1}}(x_1) T_{\mu_{k_1+1} \ldots \mu_{k_2}}(x_2) \ldots T_{\mu_{k_{m-1}+1} \ldots \mu_{k_m}}(x_m) \right). \end{equation} Here $0 \leq k_1 \leq \ldots \leq k_m$, $k_m$ and $m$ are even, $\pi \in S_{k_m}$ is a permutation, and $\Delta_{\mu_1 \ldots \mu_{k_m}}^{(\pi)} = \delta_{\mu_{\pi(1)} \mu_{\pi(2)}} \ldots \delta_{\mu_{\pi(k_m-1)} \mu_{\pi(k_m)}}$. For example, \begin{multline} \mathbb{E}\,_\theta (f(x) \nabla^T_\theta f(x) \nabla_\theta \nabla^T_\theta f(x_1) \nabla_\theta f(x_2)) = \sum_{\mu,\nu} \mathbb{E}\,_\theta (f(x) \partial_\mu f(x) \partial^2_{\mu,\nu} f(x_1) \partial_\nu f(x_2)) =\\= \sum_{\mu_1,\mu_2,\mu_3,\mu_4} \delta_{\mu_1 \mu_2} \delta_{\mu_3 \mu_4} \mathbb{E}\,_\theta (f(x) \partial_{\mu_1} f(x) \partial^2_{\mu_2,\mu_3} f(x_1) \partial_{\mu_4} f(x_2)) = C(x,x,x_1,x_2) \label{eq:dtheta_dt_as_corr_f} \end{multline} is a correlation function with $m=4$, $k_1=0$, $k_2=1$, $k_3=3$, $k_4=4$, and $\pi(j) = j$. If two derivative tensors have two indices that are summed over, we say that they are contracted.
Formally, we say that $T_{\mu_{k_{i-1}+1} \ldots \mu_{k_i}}(x_i)$ is contracted with $T_{\mu_{k_{j-1}+1} \ldots \mu_{k_j}}(x_j)$ for $1 \leq i,j \leq m$ if there exists an even $s \leq k_m$ such that $k_{i-1} < \pi(s-1) \leq k_i$, while $k_{j-1} < \pi(s) \leq k_j$, or vice versa. Define the cluster graph $G_C(V,E)$ as a non-oriented non-weighted graph with vertices $V = \{v_1, \ldots, v_m\}$ and edges $E = \{(v_i,v_j) \, | \, \text{$T(x_i)$ and $T(x_j)$ are contracted in $C$}\}$. Let $n_e$ be the number of even-sized connected components of $G_C(V,E)$ and $n_o$ be the number of odd-sized components. We are going to use the following conjecture, which is proven in certain scenarios: \begin{conjecture}[\cite{Dyer2020Asymptotics}] \label{conj:C_asymptotics} If $m$ is even, $C(x_1,\ldots,x_m) = O_{n\to\infty}(n^{s_C})$, where $s_C = n_e + n_o / 2 - m / 2$. If $m$ is odd, $C(x_1,\ldots,x_m) = 0$. \end{conjecture} We are also going to use the following lemma: \begin{lemma}[\cite{Dyer2020Asymptotics}] \label{lemma:derivative_asymptotics} Suppose \cref{conj:C_asymptotics} holds. Let $C(\vec x) = \mathbb{E}\,_\theta F(\vec x; \theta)$ be a correlation function and suppose $C(\vec x) = O(n^{s_C})$ for $s_C$ defined in \cref{conj:C_asymptotics}. Then $\mathbb{E}\,_\theta d^k F(\vec x; \theta) / dt^k = O(n^{s_C})$ $\forall k \geq 1$. \end{lemma} \begin{proof} Consider the first derivative: \begin{multline} \mathbb{E}\,_\theta \frac{dF(\vec x)}{dt} = \mathbb{E}\,_\theta (\dot\theta^T \nabla_\theta F(\vec x)) = \mathbb{E}\,_{x,y} \mathbb{E}\,_\theta (\eta (y - f(x)) \nabla^T_\theta f(x) \nabla_\theta F(\vec x)) =\\= \eta \mathbb{E}\,_{x,y} \mathbb{E}\,_\theta (y \nabla^T_\theta f(x) \nabla_\theta F(\vec x)) - \eta \mathbb{E}\,_{x,y} \mathbb{E}\,_\theta (f(x) \nabla^T_\theta f(x) \nabla_\theta F(\vec x)). \end{multline} Both terms are linear combinations of correlation functions.
By \cref{conj:C_asymptotics}, the first term vanishes, since it contains an odd number of derivative tensors, while the second one has $m' = m+2$, $n_e'$ even clusters, and $n_o'$ odd clusters. If $\nabla_\theta f(x)$ is contracted with an even cluster of $C$, we have $n_e' = n_e - 1$, $n_o' = n_o + 2$. In contrast, if $\nabla_\theta f(x)$ is contracted with an odd cluster of $C$, we have $n_e' = n_e + 1$, $n_o' = n_o$. In the first case, we have $s_C' = n_e' + n_o'/2 - m'/2 = s_C - 1$, while for the second $s_C' = s_C$. In any case, the result is a linear combination of correlation functions with $s_C' \leq s_C$ for each. \end{proof} Let us now restore the $t$-subscript. Since $O_s$ has $s$ derivative tensors and a single cluster, by virtue of \cref{conj:C_asymptotics}, $\mathbb{E}\,_\theta O_{s,0} = O(n^{1 - s/2})$ for even $s$ and $\mathbb{E}\,_\theta O_{s,0} = 0$ for odd $s$. At the same time, $\mathbb{E}\,_\theta \dot O_{s,0} = O(n^{1 - (s+2)/2}) = O(n^{-s/2})$ for even $s$ and $\mathbb{E}\,_\theta \dot O_{s,0} = O(n^{1 - (s+1)/2}) = O(n^{1/2 - s/2})$ for odd $s$. As for the second moments, we have $\mathbb{E}\,_\theta (O_{s,0})^2 = O(n^{2 - s})$ for even $s$ and $\mathbb{E}\,_\theta (O_{s,0})^2 = O(n^{1 - s})$ for odd $s$. Similarly, we have $\mathbb{E}\,_\theta (\dot O_{s,0})^2 = O(n^{2/2 - (2s+2)/2}) = O(n^{-s})$ for even $s$ and $\mathbb{E}\,_\theta (\dot O_{s,0})^2 = O(n^{2 - (2s+2)/2}) = O(n^{1 - s})$ for odd $s$.
The asymptotics for the first two moments imply the asymptotics for the random variables themselves: \begin{equation} O_{s,0}(x_{1:s}) = \begin{cases} O(n^{1 - s/2}) &\text{for even $s$;} \\ O(n^{1/2 - s/2}) &\text{for odd $s$;} \end{cases} \qquad \dot O_{s,0}(x_{1:s}) = \begin{cases} O(n^{-s/2}) &\text{for even $s$;} \\ O(n^{1/2 - s/2}) &\text{for odd $s$.} \end{cases} \end{equation} \cref{lemma:derivative_asymptotics} gives $\forall k \geq 1$: \begin{equation} \left.\frac{d^k O_{s,t}}{dt^k}(x_{1:s})\right|_{t=0} = \begin{cases} O(n^{-s/2}) &\text{for even $s$;} \\ O(n^{1/2 - s/2}) &\text{for odd $s$.} \end{cases} \end{equation} Then given an analytic activation function, we have $\forall t \geq 0$: \begin{equation} \dot O_{s,t}(x_{1:s}) = \sum_{k=1}^\infty \left.\frac{d^k O_{s,t}}{dt^k}(x_{1:s})\right|_{t=0} \frac{t^k}{k!} = \begin{cases} O(n^{-s/2}) &\text{for even $s$;} \\ O(n^{1/2 - s/2}) &\text{for odd $s$.} \end{cases} \end{equation} This allows us to write a finite system of ODEs for the model evolution up to $O(n^{-1})$ terms: \begin{equation} \dot f_{t}(x_1) = O_{2,t}(x_1, \vec x) (\vec y - f_t(\vec x)), \qquad f_0(x_1) = f(x_1; \theta), \quad \theta \sim \mathcal{N}(0, I), \end{equation} \begin{equation} \dot O_{2,t}(x_1, x_2) = O_{3,t}(x_1, x_2, \vec x) (\vec y - f_t(\vec x)), \qquad O_{2,0}(x_1, x_2) = \nabla_\theta^T f_0(x_1) \nabla_\theta f_0(x_2), \end{equation} \begin{equation} \dot O_{3,t}(x_1, x_2, x_3) = O_{4,t}(x_1, x_2, x_3, \vec x) (\vec y - f_t(\vec x)), \qquad O_{3,0}(x_1, x_2, x_3) = \nabla_\theta^T O_{2,0}(x_1, x_2) \nabla_\theta f_0(x_3), \end{equation} \begin{equation} \dot O_{4,t}(x_1, x_2, x_3, x_4) = O(n^{-2}), \qquad O_{4,0}(x_1, x_2, x_3, x_4) = \nabla_\theta^T O_{3,0}(x_1, x_2, x_3) \nabla_\theta f_0(x_4).
\end{equation} Let us expand all the quantities with respect to $n^{-1}$: \begin{equation} O_{s,t}(x_{1:s}) = O_{s,t}^{(0)}(x_{1:s}) + n^{-1} O_{s,t}^{(1)}(x_{1:s}) + O(n^{-2}), \end{equation} where $O_{s,t}^{(k)}(x_{1:s}) = \Theta_{n\to\infty}(1)$. Then the system above transforms into the following: \begin{equation} \dot f_{t}^{(0)}(x_1) = O_{2,t}^{(0)}(x_1, \vec x) (\vec y - f_t^{(0)}(\vec x)), \end{equation} \begin{equation} \dot f_{t}^{(1)}(x_1) = O_{2,t}^{(1)}(x_1, \vec x) (\vec y - f_t^{(0)}(\vec x)) - O_{2,t}^{(0)}(x_1, \vec x) f_t^{(1)}(\vec x), \end{equation} \begin{equation} O_{2,t}^{(0)}(x_1, x_2) = \nabla_\theta^T f_0^{(0)}(x_1) \nabla_\theta f_0^{(0)}(x_2), \end{equation} \begin{equation} \dot O_{2,t}^{(1)}(x_1, x_2) = O_{3,t}^{(1)}(x_1, x_2, \vec x) (\vec y - f_t^{(0)}(\vec x)), \end{equation} \begin{equation} \dot O_{3,t}^{(1)}(x_1, x_2, x_3) = O_{4,t}^{(1)}(x_1, x_2, x_3, \vec x) (\vec y - f_t^{(0)}(\vec x)), \end{equation} \begin{equation} O_{4,t}^{(1)}(x_1, x_2, x_3, x_4) = \nabla_\theta^T O_{3,0}^{(0)}(x_1, x_2, x_3) \nabla_\theta f_0^{(0)}(x_4), \end{equation} where we have ignored the initial conditions for the time being. Integrating this system is straightforward: \begin{equation} f_{t}^{(0)}(\vec x) = \vec y + e^{-O_{2,0}^{(0)}(\vec x, \vec x) t} (f_0^{(0)}(\vec x) - \vec y). \end{equation} For brevity, let us introduce the following definition: \begin{equation} \Delta f_t^{(0)}(\vec x) = e^{-O_{2,0}^{(0)}(\vec x, \vec x) t} (f_0^{(0)}(\vec x) - \vec y). \end{equation} This gives: \begin{equation} O_{3,t}^{(1)}(x_1, x_2, x_3) = O_{3,0}^{(1)}(x_1, x_2, x_3) - \int_{0}^t O_{4,0}^{(1)}(x_1, x_2, x_3, \vec x) \Delta f_{t'}^{(0)}(\vec x) \, dt'. \end{equation} \begin{multline} O_{2,t}^{(1)}(x_1, x_2) = O_{2,0}^{(1)}(x_1, x_2) - \int_{0}^t O_{3,0}^{(1)}(x_1, x_2, \vec x) \Delta f_{t'}^{(0)}(\vec x) \, dt' +\\+ \int_{0}^{t} \int_{0}^{t''} \Delta f_{t''}^{(0),T}(\vec x) O_{4,0}^{(1)}(x_1, x_2, \vec x, \vec x') \Delta f_{t'}^{(0)}(\vec x') \, dt' \, dt''.
\end{multline} Let us evaluate these terms: \begin{equation} \int_{0}^t O_{3,0}^{(1)}(x_1, x_2, \vec x) \Delta f_{t'}^{(0)}(\vec x) \, dt' = O_{3,0}^{(1)}(x_1, x_2, \vec x) \left(O_{2,0}^{(0)}(\vec x,\vec x)\right)^{-1} \left(I - e^{-O_{2,0}^{(0)}(\vec x, \vec x) t}\right) (f_0^{(0)}(\vec x) - \vec y). \end{equation} \begin{multline} \int_{0}^{t} \int_{0}^{t''} \Delta f_{t''}^{(0),T}(\vec x) O_{4,0}^{(1)}(x_1, x_2, \vec x, \vec x') \Delta f_{t'}^{(0)}(\vec x') \, dt' \, dt'' =\\= \int_{0}^{t} (f_0^{(0)}(\vec x) - \vec y)^T \left(I - e^{-O_{2,0}^{(0)}(\vec x, \vec x) t'}\right) \left(O_{2,0}^{(0)}(\vec x,\vec x)\right)^{-1} O_{4,0}^{(1)}(x_1, x_2, \vec x, \vec x') e^{-O_{2,0}^{(0)}(\vec x, \vec x) t'} (f_0^{(0)}(\vec x) - \vec y) \, dt' =\\= (f_0^{(0)}(\vec x) - \vec y)^T \left(O_{2,0}^{(0)}(\vec x,\vec x)\right)^{-1} O_{4,0}^{(1)}(x_1, x_2, \vec x, \vec x') \left(O_{2,0}^{(0)}(\vec x,\vec x)\right)^{-1} \left(I - e^{-O_{2,0}^{(0)}(\vec x, \vec x) t}\right) (f_0^{(0)}(\vec x) - \vec y) -\\- \int_{0}^{t} (f_0^{(0)}(\vec x) - \vec y)^T e^{-O_{2,0}^{(0)}(\vec x, \vec x) t'} \left(O_{2,0}^{(0)}(\vec x,\vec x)\right)^{-1} O_{4,0}^{(1)}(x_1, x_2, \vec x, \vec x') e^{-O_{2,0}^{(0)}(\vec x, \vec x) t'} (f_0^{(0)}(\vec x) - \vec y) \, dt'. \end{multline} Consider the eigenvalue-eigenvector decomposition of $O_{2,0}^{(0)}(\vec x, \vec x)$: $O_{2,0}^{(0)}(\vec x, \vec x) = \sum_{k=1}^m \lambda_k v_k v_k^T$.
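Numerically, this eigendecomposition is exactly what one uses to evaluate the matrix exponentials appearing above, and hence $\Delta f_t^{(0)}$. A small numpy sketch with a synthetic positive-definite matrix standing in for $O_{2,0}^{(0)}(\vec x, \vec x)$ (all concrete numbers here are our own, illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(0)
m = 5
A = rng.standard_normal((m, m))
K = A @ A.T + np.eye(m)              # stand-in for O_{2,0}^{(0)}(x, x); positive definite
y, f0 = rng.standard_normal(m), rng.standard_normal(m)

lam, V = np.linalg.eigh(K)           # K = sum_k lam_k v_k v_k^T

def delta_f(t):
    """Delta f_t^{(0)} = exp(-K t) (f0 - y), evaluated via the eigendecomposition."""
    return V @ (np.exp(-lam * t) * (V.T @ (f0 - y)))

print(np.linalg.norm(delta_f(0.0) - (f0 - y)))   # ~0 at t = 0
print(np.linalg.norm(delta_f(50.0)))             # -> 0: the residual decays exponentially
```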
This helps us integrate the last term: \begin{multline} \int_{0}^{t} (f_0^{(0)}(\vec x) - \vec y)^T e^{-O_{2,0}^{(0)}(\vec x, \vec x) t'} \left(O_{2,0}^{(0)}(\vec x,\vec x)\right)^{-1} O_{4,0}^{(1)}(x_1, x_2, \vec x, \vec x') e^{-O_{2,0}^{(0)}(\vec x, \vec x) t'} (f_0^{(0)}(\vec x) - \vec y) \, dt' =\\= \sum_{k,l=1}^m \int_{0}^{t} e^{-(\lambda_k+\lambda_l) t'} (f_0^{(0)}(\vec x) - \vec y)^T v_k v_k^T \left(O_{2,0}^{(0)}(\vec x,\vec x)\right)^{-1} O_{4,0}^{(1)}(x_1, x_2, \vec x, \vec x') v_l v_l^T (f_0^{(0)}(\vec x) - \vec y) \, dt' =\\= \sum_{k,l=1}^m \frac{1}{\lambda_k+\lambda_l} \left(1 - e^{-(\lambda_k+\lambda_l) t}\right) (f_0^{(0)}(\vec x) - \vec y)^T v_k v_k^T \left(O_{2,0}^{(0)}(\vec x,\vec x)\right)^{-1} O_{4,0}^{(1)}(x_1, x_2, \vec x, \vec x') v_l v_l^T (f_0^{(0)}(\vec x) - \vec y). \end{multline} Recall $\hat\Theta_t(x_1,x_2) = O_{2,t}(x_1,x_2) = O_{2,t}^{(0)}(x_1,x_2) + n^{-1} O_{2,t}^{(1)}(x_1,x_2) + O(n^{-2})$. The first term (the limit NTK) does not depend on $t$, $O_{2,t}^{(0)}(x_1,x_2) = O_{2,0}^{(0)}(x_1,x_2) = \Theta(x_1,x_2)$, while the second one (the correction) does. Note that computing the second term invokes $O_{4,0}^{(1)}$, the fourth-order tensor; therefore, approaching it directly requires $O(m^4)$ memory. Integrating the above system further gives the first-order correction for the limit model $f_t^{(1)}$. As we shall see in \cref{sec:beyond}, the kernel $\Theta^{NTH}(x_1,x_2) = O_{2,0}^{(0)}(x_1,x_2) + n^{-1} \mathbb{E}\, O_{2,\infty}^{(1)}(x_1,x_2)$ can be considered a label-aware alternative to the usual NTK $\Theta(x_1,x_2) = O_{2,0}^{(0)}(x_1,x_2)$.
Let us write out its explicit definition, to be referred to later in \cref{sec:beyond}: \begin{multline} \Theta^{NTH}(x_1,x_2) = O_{2,0}^{(0)}(x_1,x_2) + n^{-1} \mathbb{E}\, O_{2,\infty}^{(1)}(x_1,x_2) =\\= \Theta(x_1,x_2) + n^{-1} \mathbb{E}\,\left[O_{2,0}^{(1)}(x_1,x_2)\right] - n^{-1} \mathbb{E}\,\left[O_{3,0}^{(1)}(x_1, x_2, \vec x) \Theta^{-1}(\vec x,\vec x) f_0^{(0)}(\vec x)\right] +\\+ n^{-1} \vec y^T \Theta^{-1}(\vec x,\vec x) \mathbb{E}\,\left[O_{4,0}^{(1)}(x_1, x_2, \vec x, \vec x)\right] \Theta^{-1}(\vec x,\vec x) \vec y +\\+ n^{-1} \mathbb{E}\,\left[f_0^{(0),T}(\vec x) \Theta^{-1}(\vec x,\vec x) O_{4,0}^{(1)}(x_1, x_2, \vec x, \vec x) \Theta^{-1}(\vec x,\vec x) f_0^{(0)}(\vec x)\right] -\\- n^{-1} \sum_{k,l=1}^m \frac{1}{\lambda_k (\lambda_k+\lambda_l)} \vec y^T \vec v_k \vec v_k^T \mathbb{E}\,\left[O_{4,0}^{(1)}(x_1, x_2, \vec x, \vec x)\right] \vec v_l \vec v_l^T \vec y -\\- n^{-1} \sum_{k,l=1}^m \frac{1}{\lambda_k (\lambda_k+\lambda_l)} \mathbb{E}\,\left[f_0^{(0),T}(\vec x) \vec v_k \vec v_k^T O_{4,0}^{(1)}(x_1, x_2, \vec x, \vec x) \vec v_l \vec v_l^T f_0^{(0)}(\vec x)\right]. \label{eq:lantk_nth} \end{multline} While the above result relies on a conjecture, \cref{conj:C_asymptotics}, it can be proven rigorously; see \cite{huang2019dynamics}. \section{Computing the limit kernel} \label{sec:limit} It is not obvious how to compute the limit kernel $\Theta$ predicted by the theorems discussed in \cref{sec:convergence}. Fortunately, one can compute the limit kernel exactly for certain classes of models. \subsection{Fully-connected nets} \label{sec:limit_fc_nets} Consider an $L$-layer fully-connected network under NTK parameterization: \begin{equation} f(x) = h_L(x), \quad h_l(x) = \frac{1}{\sqrt{n_{l-1}}} W_l x_{l-1}(x), \quad x_{l-1}(x) = \phi(h_{l-1}(x)), \quad x_0(x) = x, \end{equation} where $W_l \in \mathbb{R}^{n_l \times n_{l-1}}$ $\forall l \in [L]$. For simplicity, we assume $n_L=1$, i.e. the output is scalar.
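In code, the NTK parameterization of this network amounts to a $1/\sqrt{n_{l-1}}$ rescaling of each layer while keeping all weights standard Gaussian. A minimal numpy sketch of the forward pass (a toy illustration of the definition above, not a reference implementation; the widths and $\phi = \tanh$ are our own choices):

```python
import numpy as np

def init(widths, rng):
    """widths = [n_0, ..., n_L]; every entry of every W_l is iid N(0, 1)."""
    return [rng.standard_normal((widths[l + 1], widths[l])) for l in range(len(widths) - 1)]

def forward(x, Ws, phi=np.tanh):
    """f(x) = h_L, where h_1 = W_1 x / sqrt(n_0) and h_l = W_l phi(h_{l-1}) / sqrt(n_{l-1})."""
    h = Ws[0] @ x / np.sqrt(x.size)
    for W in Ws[1:]:
        h = W @ phi(h) / np.sqrt(W.shape[1])
    return h

rng = np.random.default_rng(0)
Ws = init([10, 512, 512, 1], rng)   # L = 3, scalar output (n_L = 1)
x = rng.standard_normal(10)
print(forward(x, Ws))               # O(1) at initialization, independently of the widths
```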
Since we already know (see \cref{sec:convergence}) that the kernel does not depend on $t$ under NTK parameterization, we consider the case $t=0$ only and omit the $t$-subscript. The empirical NTK is given by \begin{equation} \hat\Theta(x,x') = \nabla^T_\theta f(x;\theta) \nabla_\theta f(x';\theta) = \sum_{l=1}^L \tr\left(\nabla^T_{W_l} f(x;W_{1:L}) \nabla_{W_l} f(x';W_{1:L})\right). \end{equation} By chain rule, \begin{equation} \nabla_{W_l} f(x) = \sum_{i=1}^{n_l} \partial_{h_l^i} f(x) \nabla_{W_l} h_l^i(x) = \frac{1}{\sqrt{n_{l-1}}} \sum_{i=1}^{n_l} \sum_{j=1}^{n_{l-1}} \partial_{h_l^i} f(x) E_{ij} x_{l-1}^j(x) = \frac{1}{\sqrt{n_{l-1}}} \nabla_{h_l} f(x) x_{l-1}^T(x). \end{equation} Therefore, \begin{equation} \hat\Theta(x,x') = \sum_{l=1}^L \tr\left(\nabla^T_{W_l} f(x) \nabla_{W_l} f(x')\right) = \sum_{l=1}^L \frac{1}{n_{l-1}} \left(\nabla^T_{h_l} f(x') \nabla_{h_l} f(x)\right) \times \left(x_{l-1}^T(x) x_{l-1}(x')\right). \end{equation} If $x_{l-1}$ had iid components with zero mean, $\frac{1}{n_{l-1}} x_{l-1}^T(x) x_{l-1}(x')$ would be an empirical covariance estimated with $n_{l-1}$ samples. In fact, when all weights are iid standard Gaussians, components of $h_{l-1}$ become iid Gaussian with zero mean as $n_{1:l-2} \to \infty$ sequentially. Hence their images under elementwise maps $\phi$ are also iid. We proceed by induction. $h_1(x) = \frac{1}{\sqrt{n_0}} W_1 x$ has iid Gaussian components with zero mean and variance $q_1(x) = \frac{1}{n_0} x^T x$. Suppose components of $h_{l-1}(x)$ become iid Gaussian with zero mean and $q_{l-1}(x)$ variance as $n_{1:l-2} \to \infty$ sequentially. Then $h_l(x) = \frac{1}{\sqrt{n_{l-1}}} W_l \phi(h_{l-1}(x))$ converges (in distribution) to a vector of iid Gaussians with zero mean and variance $q_l(x) = \mathbb{E}\,_{z \sim \mathcal{N}(0,q_{l-1}(x))} \phi^2(z)$ as $n_{1:l-1} \to \infty$ sequentially by the Central Limit Theorem (CLT). One can easily generalize the above proof to any finite set of inputs.
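The induction step is easy to verify numerically: the empirical average of $\phi^2$ over a wide first layer matches the Gaussian expectation defining $q_2$. A numpy sketch of our own (with $\phi = \tanh$; note that $h_1 = W_1 x / \sqrt{n_0}$ gives $q_1 = x^T x / n_0$):

```python
import numpy as np

rng = np.random.default_rng(0)
n0, n1 = 20, 200_000
x = rng.standard_normal(n0)

h1 = rng.standard_normal((n1, n0)) @ x / np.sqrt(n0)   # components iid N(0, q_1)
q2_net = np.mean(np.tanh(h1) ** 2)                     # empirical (1/n_1) x_1(x)^T x_1(x)

q1 = x @ x / n0
z = rng.standard_normal(1_000_000) * np.sqrt(q1)
q2_mc = np.mean(np.tanh(z) ** 2)                       # q_2 = E_{z ~ N(0, q_1)} phi^2(z)

print(q2_net, q2_mc)   # the two estimates agree up to Monte-Carlo noise
```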
In particular, $[h_l^i(x),h_l^i(x')]^T$ converges to a Gaussian with zero mean and covariance $\Sigma_l(x,x') = \begin{pmatrix} q_l(x) & q_l(x,x')\\q_l(x,x') & q_l(x') \end{pmatrix}$, where $q_l(x,x') = \mathbb{E}\,_{[z,z']^T \sim \mathcal{N}(0,\Sigma_{l-1}(x,x'))} \phi(z) \phi(z')$. Hence as $n_{1:l-2} \to \infty$ sequentially, $\frac{1}{n_{l-1}} x_{l-1}^T(x) x_{l-1}(x')$ converges to $q_l(x,x')$. Let $g_l(x) = \sqrt{n_l} \nabla_{h_l} f(x)$. Since \begin{equation} \nabla_{h_l^j} f(x) = \sum_{i=1}^{n_{l+1}} \nabla_{h_{l+1}^i} f(x) \nabla_{h_l^j} h_{l+1}^i(x) = \frac{1}{\sqrt{n_l}} \sum_{i=1}^{n_{l+1}} \nabla_{h_{l+1}^i} f(x) W_{l+1}^{ij} \phi'(h_l^j(x)), \end{equation} we have $g_l(x) = \frac{1}{\sqrt{n_{l+1}}} D_l(x) W_{l+1}^T g_{l+1}(x)$, where $D_l(x) = \diag(\phi'(h_l(x)))$. There are two obstacles that prevent us from following the same lines for $g_l$ as for $h_l$. First, $g_{l+1}$ depends on $D_{l+1}$ that depends on $h_{l+1}$ that depends on $W_{l+1}$. Since $W_{l+1}$ and $g_{l+1}$ are dependent, we cannot guarantee that components of $g_l$ become iid. Second, we know the distribution of $h_l$ as all the layers from the input side become infinitely wide sequentially, while induction for $g_l$ should be performed starting from the head. Nevertheless, it can be proven rigorously that ignoring these two obstacles still leads to a correct result \cite{yang2020tensor_ii}: $g_l(x)$ converges to a vector of iid Gaussians with zero mean and variance $\dot q_l(x) = \dot q_{l+1}(x) \mathbb{E}\,_{z \sim \mathcal{N}(0,q_l(x))} (\phi')^2(z)$ as $n_{1:L-1} \to \infty$. A similar result holds for a pair of inputs: $[g_l^i(x),g_l^i(x')]^T$ converges to a Gaussian with zero mean and covariance $\dot\Sigma_l(x,x') = \begin{pmatrix} \dot q_l(x) & \dot q_l(x,x')\\\dot q_l(x,x') & \dot q_l(x') \end{pmatrix}$, where $\dot q_l(x,x') = \dot q_{l+1}(x,x') \mathbb{E}\,_{[z,z']^T \sim \mathcal{N}(0,\Sigma_l(x,x'))} \phi'(z) \phi'(z')$.
Hence $\nabla^T_{h_l} f(x') \nabla_{h_l} f(x) = \frac{1}{n_l} g_l^T(x') g_l(x)$ converges to $\dot q_l(x,x')$. Putting it all together, $\hat\Theta(x,x')$ converges to $\Theta(x,x') = \sum_{l=1}^L \dot q_l(x,x') q_l(x,x')$, where \begin{equation} q_1(x,x') = \frac{1}{n_0} x^T x', \quad q_l(x,x') = \mathbb{E}\,_{[z,z']^T \sim \mathcal{N}(0,\Sigma_{l-1}(x,x'))} \phi(z) \phi(z'), \end{equation} \begin{equation} \dot q_L(x,x') = 1, \quad \dot q_l(x,x') = \dot q_{l+1}(x,x') \mathbb{E}\,_{[z,z']^T \sim \mathcal{N}(0,\Sigma_l(x,x'))} \phi'(z) \phi'(z'), \end{equation} and $\Sigma_l(x,x') = \begin{pmatrix} q_l(x,x) & q_l(x,x')\\q_l(x,x') & q_l(x',x') \end{pmatrix}$. Note that the Master theorem of \cite{yang2020tensor_ii} gives similar recurrent formulas for the NTK of any architecture expressible by a tensor program and makes them mathematically rigorous. In fact, computing the NTK can be performed in a convenient sequential layer-wise manner, as implemented in Neural Tangents\footnote{\url{https://github.com/google/neural-tangents}} \cite{novak2019neural}. Define the NTK for the first $l$ layers as $\Theta_{:l}(x,x') = \sum_{l'=1}^l \tr(\nabla_{W_{l'}}^T h_l^i(x) \nabla_{W_{l'}} h_l^i(x'))$; in this case $\Theta_{:L}(x,x') = \Theta(x,x')$. Suppose $\Theta_{:l-1}(x,x')$ and $q_{l-1}(x,x')$ are already computed. Adding a nonlinearity and a linear layer with weights $W_l$ gives $q_l$ as listed above: \begin{equation} q_l(x,x') = \mathbb{E}\,_{[z,z']^T \sim \mathcal{N}(0,\Sigma_{l-1}(x,x'))} \phi(z) \phi(z'), \quad \text{where $\Sigma_{l-1}(x,x') = \begin{pmatrix} q_{l-1}(x,x) & q_{l-1}(x,x')\\q_{l-1}(x,x') & q_{l-1}(x',x') \end{pmatrix}$.} \label{eq:q_iteration} \end{equation} However, according to the formula above, $\dot q_l$ is computed using $\dot q_{l+1}$, which requires a sequential layer-wise ``forward pass'' to compute all $q_l$ and a ``backward pass'' to compute $\dot q_l$.
In fact, one forward pass is enough: \begin{multline} \Theta_{:l}(x,x') = \sum_{l'=1}^l \tr(\nabla_{W_{l'}}^T h_l^i(x) \nabla_{W_{l'}} h_l^i(x')) = q_l(x,x') + \sum_{l'=1}^{l-1} \tr(\nabla_{W_{l'}}^T h_l^i(x) \nabla_{W_{l'}} h_l^i(x')) =\\= q_l(x,x') + \Theta_{:l-1}(x,x') \mathbb{E}\,_{[z,z']^T \sim \mathcal{N}(0,\Sigma_{l-1}(x,x'))} \phi'(z) \phi'(z'). \label{eq:Theta_iteration} \end{multline} In Neural Tangents, each operation in a neural network is mapped to a corresponding kernel transform. \subsection{Convolutional nets} The same idea can be applied to convolutional nets as well. Consider 1d-convolutions for simplicity. In this case, we are dealing with 1d ``images'' with $d$ pixels: $x \in \mathbb{R}^{n_0 \times d}$. Consider a network with $L$ convolutions under NTK parameterization and an average pooling at the end: \begin{equation} f^i = \frac{1}{d} \sum_{s=1}^d x_L^{i,s}, \quad h_l^{i,s} = \frac{1}{\sqrt{n_{l-1}}} \sum_{j=1}^{n_{l-1}} \sum_{r \in \ker} W_l^{ijr} x_{l-1}^{j,s+r}, \quad x_{l-1}^{i,s} = \phi(h_{l-1}^{i,s}), \quad x_0^{i,s} = x^{i,s}, \end{equation} where we omitted the argument $x$ for brevity, $W_l \in \mathbb{R}^{n_l \times n_{l-1} \times |\ker|}$ with $W_l^{ijr} \sim \mathcal{N}(0,1)$ iid $\forall l \in [L]$, and $\ker$ denotes the set of filter offsets; e.g. $\ker = \{-1,0,1\}$ for a convolution of size $3$. For simplicity, we assume $n_L=1$, i.e. the output is scalar. As before, the empirical NTK is given by \begin{equation} \hat\Theta(x,x') = \nabla^T_\theta f(x;\theta) \nabla_\theta f(x';\theta) = \sum_{l=1}^L \sum_{i=1}^{n_l} \sum_{j=1}^{n_{l-1}} \sum_{r \in \ker} \partial_{W_l^{ijr}} f(x) \partial_{W_l^{ijr}} f(x'). \end{equation} By chain rule, \begin{equation} \partial_{W_l^{ijr}} f = \sum_{s=1}^d \partial_{h_l^{i,s}} f \partial_{W_l^{ijr}} h_l^{i,s} = \frac{1}{\sqrt{n_{l-1}}} \sum_{s=1}^{d} \partial_{h_l^{i,s}} f x_{l-1}^{j,s+r}.
\end{equation} Therefore, \begin{equation} \hat\Theta(x,x') = \sum_{l=1}^L \frac{1}{n_{l-1}} \sum_{i=1}^{n_l} \sum_{j=1}^{n_{l-1}} \sum_{r \in \ker} \sum_{s,s'=1}^{d} \partial_{h_l^{i,s}} f(x) \partial_{h_l^{i,s'}} f(x') x_{l-1}^{j,s+r}(x) x_{l-1}^{j,s'+r}(x'). \end{equation} As for the fully-connected case, we are going to prove that $h_l^{i,s}$ becomes Gaussian with zero mean and covariance given by a certain recurrent formula as $n_{1:l-1} \to \infty$ sequentially. However, for the convolutional case, not all $h^{i,s}$ become independent: they become independent for different $i$'s but not for different $s$. Let us induct on $l$. $h_1^{i,s} = \frac{1}{\sqrt{n_0}} \sum_{j=1}^{n_0} \sum_{r \in \ker} W_1^{ijr} x^{j,s+r}$ are independent for any two different $i$'s. For a fixed $i$, $h_1^{i,\cdot}$ is a Gaussian vector with zero mean and covariance $q_1^{s,s'} = \frac{1}{n_0} \sum_{j=1}^{n_0} \sum_{r \in \ker} x^{j,s+r} x^{j,s'+r}$. Suppose $h_{l-1}^{i,s}$ becomes Gaussian with zero mean, independent for any two different $i$'s, and $q_{l-1}^{s,s'}$ is its covariance as $n_{1:l-2} \to \infty$ sequentially. Then $h_l^{i,s} = \frac{1}{\sqrt{n_{l-1}}} \sum_{j=1}^{n_{l-1}} \sum_{r \in \ker} W_l^{ijr} x_{l-1}^{j,s+r}$ converges (in distribution) to a random variable with similar properties but with covariance $q_l^{s,s'} = \mathbb{E}\,_{z \sim \mathcal{N}(0,q_{l-1})} \sum_{r \in \ker} \phi(z^{s+r}) \phi(z^{s'+r})$ as $n_{1:l-1} \to \infty$ sequentially by the CLT. One can easily generalize the above proof to any finite set of inputs. In particular, $[h_l^{i,\cdot}(x),h_l^{i,\cdot}(x')]^T \in \mathbb{R}^{2d}$ converges to a Gaussian with zero mean and covariance $\Sigma_l(x,x') = \begin{pmatrix} q_l(x) & q_l(x,x')\\q_l(x,x') & q_l(x') \end{pmatrix} \in \mathbb{R}^{2d \times 2d}$, where $q_l^{s,s'}(x,x') = \mathbb{E}\,_{[z,z']^T \sim \mathcal{N}(0,\Sigma_{l-1}(x,x'))} \sum_{r \in \ker} \phi(z^{s+r}) \phi(z^{\prime,s'+r})$.
Hence as $n_{1:l-2} \to \infty$ sequentially, $\frac{1}{n_{l-1}} \sum_{j=1}^{n_{l-1}} \sum_{r \in \ker} x_{l-1}^{j,s+r}(x) x_{l-1}^{j,s'+r}(x')$ converges to $q_l^{s,s'}(x,x')$. Let $g_l^{j,p} = \sqrt{n_l} \nabla_{h_l^{j,p}} f$. Since \begin{multline} \partial_{h_l^{j,p}} f = \sum_{i=1}^{n_{l+1}} \sum_{s=1}^d \partial_{h_{l+1}^{i,s}} f \partial_{h_l^{j,p}} h_{l+1}^{i,s} =\\= \frac{1}{\sqrt{n_l}} \sum_{i=1}^{n_{l+1}} \sum_{s=1}^d \partial_{h_{l+1}^{i,s}} f \sum_{r \in \ker} W_{l+1}^{ijr} 1_{s+r=p} \phi'(h_l^{j,p}) = \frac{1}{\sqrt{n_l}} \sum_{i=1}^{n_{l+1}} \sum_{r \in \ker} \partial_{h_{l+1}^{i,p-r}} f W_{l+1}^{ijr} \phi'(h_l^{j,p}), \end{multline} $\partial_{h_L^{j,p}} f = \frac{1}{d} \phi'(h_L^{j,p})$, and $n_L=1$, we have \begin{equation} g_L^{j,p} = \frac{1}{d} \phi'(h_L^{j,p}), \quad g_l^{j,p} = \frac{1}{\sqrt{n_{l+1}}} \sum_{i=1}^{n_{l+1}} \sum_{r \in \ker} g_{l+1}^{i,p-r} W_{l+1}^{ijr} \phi'(h_l^{j,p}). \end{equation} With the same correctness remark as for fully-connected nets, it is possible to show that $g_l^{j,p}$ become independent for different $j$'s and $g_l^{j,\cdot}$ become Gaussian with covariance $\dot q_l^{p,p'}$ as $n_{1:L-1} \to \infty$. The covariance is given by the following recurrence: $\dot q_L^{p,p'} = \frac{1}{d^2} \mathbb{E}\,_{z \sim \mathcal{N}(0,q_L)} \phi'(z^p) \phi'(z^{p'})$, $\dot q_l^{p,p'} = \mathbb{E}\,_{z \sim \mathcal{N}(0,q_l)} \phi'(z^{p}) \phi'(z^{p'}) \sum_{r \in \ker} \dot q_{l+1}^{p-r,p'-r}$. A similar result holds for a pair of inputs: $[g_l^{i,\cdot}(x),g_l^{i,\cdot}(x')]^T \in \mathbb{R}^{2d}$ converges to a Gaussian with zero mean and covariance $\dot\Sigma_l(x,x') = \begin{pmatrix} \dot q_l(x) & \dot q_l(x,x')\\\dot q_l(x,x') & \dot q_l(x') \end{pmatrix} \in \mathbb{R}^{2d \times 2d}$, where $\dot q_l^{s,s'}(x,x') = \mathbb{E}\,_{[z,z']^T \sim \mathcal{N}(0,\Sigma_l(x,x'))} \phi'(z^{s}) \phi'(z^{\prime,s'}) \sum_{r \in \ker} \dot q_{l+1}^{s-r,s'-r}(x,x')$.
Hence \begin{equation} \sum_{i=1}^{n_l} \partial_{h_l^{i,s}} f(x) \partial_{h_l^{i,s'}} f(x') = \frac{1}{n_l} \sum_{i=1}^{n_l} g_l^{i,s}(x) g_l^{i,s'}(x') \to \dot q_l^{s,s'}(x,x'). \end{equation} Putting it all together, $\hat\Theta(x,x')$ converges to $\Theta(x,x') = \sum_{l=1}^L \sum_{s,s'=1}^d \dot q_l^{s,s'}(x,x') q_l^{s,s'}(x,x')$, where \begin{equation} q_1^{s,s'}(x,x') = \frac{1}{n_0} \sum_{j=1}^{n_0} \sum_{r \in \ker} x^{j,s+r} x^{\prime,j,s'+r}, \end{equation} \begin{equation} q_l^{s,s'}(x,x') = \mathbb{E}\,_{[z,z']^T \sim \mathcal{N}(0,\Sigma_{l-1}(x,x'))} \sum_{r \in \ker} \phi(z^{s+r}) \phi(z^{\prime,s'+r}), \end{equation} \begin{equation} \dot q_L^{s,s'}(x,x') = \frac{1}{d^2} \mathbb{E}\,_{[z,z']^T \sim \mathcal{N}(0,\Sigma_L(x,x'))} \phi'(z^s) \phi'(z^{\prime,s'}), \end{equation} \begin{equation} \dot q_l^{s,s'}(x,x') = \mathbb{E}\,_{[z,z']^T \sim \mathcal{N}(0,\Sigma_l(x,x'))} \phi'(z^{s}) \phi'(z^{\prime,s'}) \sum_{r \in \ker} \dot q_{l+1}^{s-r,s'-r}(x,x'), \end{equation} and $\Sigma_l(x,x') = \begin{pmatrix} q_l(x,x) & q_l(x,x')\\q_l(x,x') & q_l(x',x') \end{pmatrix}$. As for fully-connected nets, computing the NTK can be performed in a convenient sequential layer-wise manner.
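As an illustration of the layer-wise procedure, the following sketch computes $\Theta_{:L}$ (the NTK of the last-layer preactivation) for the fully-connected ReLU case, i.e. $\ker = [0]$ and $d = 1$. It relies on the closed-form Gaussian expectations for ReLU computed later in this section; the function names and the assumption of nonzero inputs are ours.

```python
import math

def relu_ee(qa, qb, qab):
    """E[phi(u) phi(v)], (u, v) ~ N(0, [[qa, qab], [qab, qb]]), phi = ReLU."""
    lam = max(-1.0, min(1.0, qab / math.sqrt(qa * qb)))
    return math.sqrt(qa * qb) * (lam * (math.pi - math.acos(lam))
                                 + math.sqrt(1.0 - lam * lam)) / (2.0 * math.pi)

def relu_dd(qa, qb, qab):
    """E[phi'(u) phi'(v)] for the same Gaussian."""
    lam = max(-1.0, min(1.0, qab / math.sqrt(qa * qb)))
    return (math.pi - math.acos(lam)) / (2.0 * math.pi)

def ntk_fc_relu(x, xp, L):
    """Theta_{:L}(x, x') for an L-layer fully-connected ReLU net, layer by layer."""
    n0 = len(x)
    qa = sum(a * a for a in x) / n0
    qb = sum(a * a for a in xp) / n0
    qab = sum(a * b for a, b in zip(x, xp)) / n0
    theta = qab                          # Theta_{:1} = q_1
    for _ in range(2, L + 1):
        dot = relu_dd(qa, qb, qab)       # E[phi'(z) phi'(z')] at the previous layer
        qa, qb, qab = relu_ee(qa, qa, qa), relu_ee(qb, qb, qb), relu_ee(qa, qb, qab)
        theta = qab + theta * dot        # Theta_{:l} = q_l + Theta_{:l-1} E[phi' phi']
    return theta
```

Each layer costs $O(1)$ beyond the input scalar products, so a pair of inputs is processed in a single forward sweep.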
Define the empirical NTK for the first $l$ layers as \begin{equation} \hat\Theta_{:l}^{s,s'}(x,x') = \sum_{l'=1}^l \sum_{i=1}^{n_{l'}} \sum_{j=1}^{n_{l'-1}} \sum_{r \in \ker} \partial_{W_{l'}^{ijr}} h_l^{1,s}(x) \partial_{W_{l'}^{ijr}} h_l^{1,s'}(x'); \end{equation} in this case, by the chain rule, \begin{multline} \hat\Theta(x,x') = \sum_{l=1}^L \sum_{i=1}^{n_l} \sum_{j=1}^{n_{l-1}} \sum_{r \in \ker} \partial_{W_l^{ijr}} f(x) \partial_{W_l^{ijr}} f(x') =\\= \sum_{l=1}^L \sum_{i=1}^{n_l} \sum_{j=1}^{n_{l-1}} \sum_{s,s'=1}^d \sum_{r \in \ker} \partial_{W_l^{ijr}} h_L^{1,s}(x) \partial_{W_l^{ijr}} h_L^{1,s'}(x') \partial_{h_L^{1,s}} f(x) \partial_{h_L^{1,s'}} f(x') =\\= \frac{1}{d^2} \sum_{s,s'=1}^d \phi'(h_L^{1,s}(x)) \phi'(h_L^{1,s'}(x')) \hat\Theta_{:L}^{s,s'}(x,x'), \end{multline} and therefore, since $\dot q_L^{s,s'}$ already carries the $\frac{1}{d^2}$ factor, \begin{equation} \Theta(x,x') = \sum_{s,s'=1}^d \dot q_L^{s,s'}(x,x') \Theta_{:L}^{s,s'}(x,x'). \end{equation} Suppose $\hat\Theta_{:l-1}(x,x')$ and $q_{l-1}(x,x')$ are already computed. Adding a nonlinearity and a convolutional layer with weights $W_l$ gives $q_l$ as listed above: \begin{equation} q_l^{s,s'}(x,x') = \mathbb{E}\,_{[z,z'] \sim \mathcal{N}(0,\Sigma_{l-1}(x,x'))} \sum_{r \in \ker} \phi(z^{s+r}) \phi(z^{\prime,s'+r}), \label{eq:q_iteration_conv} \end{equation} where $\Sigma_{l-1}(x,x') = \begin{pmatrix} q_{l-1}(x,x) & q_{l-1}(x,x')\\q_{l-1}(x,x') & q_{l-1}(x',x') \end{pmatrix}$.
We can compute $\hat\Theta_{:L}$ in a single forward pass using the following recurrence: \begin{multline} \hat\Theta_{:l}^{s,s'}(x,x') = \sum_{l'=1}^l \sum_{i=1}^{n_{l'}} \sum_{j=1}^{n_{l'-1}} \sum_{\tilde r \in \ker} \partial_{W_{l'}^{ij\tilde r}} h_l^{1,s}(x) \partial_{W_{l'}^{ij\tilde r}} h_l^{1,s'}(x') =\\= \frac{1}{n_{l-1}} \sum_{j=1}^{n_{l-1}} \sum_{\tilde r \in \ker} x_{l-1}^{j,s+\tilde r}(x) x_{l-1}^{j,s'+\tilde r}(x') +\\+ \sum_{l'=1}^{l-1} \sum_{i=1}^{n_{l'}} \sum_{j=1}^{n_{l'-1}} \sum_{\tilde r \in \ker} \sum_{k,k'=1}^{n_{l-1}} \sum_{p,p'=1}^d \partial_{W_{l'}^{ij\tilde r}} h_{l-1}^{k,p}(x) \partial_{W_{l'}^{ij\tilde r}} h_{l-1}^{k',p'}(x') \partial_{h_{l-1}^{k,p}} h_l^{1,s}(x) \partial_{h_{l-1}^{k',p'}} h_l^{1,s'}(x') =\\= \frac{1}{n_{l-1}} \sum_{j=1}^{n_{l-1}} \sum_{\tilde r \in \ker} x_{l-1}^{j,s+\tilde r}(x) x_{l-1}^{j,s'+\tilde r}(x') +\\+ \frac{1}{n_{l-1}} \sum_{l'=1}^{l-1} \sum_{i=1}^{n_{l'}} \sum_{j=1}^{n_{l'-1}} \sum_{\tilde r,r,r' \in \ker} \sum_{k,k'=1}^{n_{l-1}} \partial_{W_{l'}^{ij\tilde r}} h_{l-1}^{k,s+r}(x) \partial_{W_{l'}^{ij\tilde r}} h_{l-1}^{k',s'+r'}(x') \times\\\times W_l^{1kr} \phi'(h_{l-1}^{k,s+r}(x)) W_l^{1k'r'} \phi'(h_{l-1}^{k',s'+r'}(x')). \label{eq:Theta_iteration_conv} \end{multline} Taking the limit (the cross terms with $k \neq k'$ or $r \neq r'$ vanish since $\mathbb{E}\, W_l^{1kr} W_l^{1k'r'} = 1_{k=k'} 1_{r=r'}$) then gives \begin{equation} \Theta_{:l}^{s,s'}(x,x') = q_l^{s,s'}(x,x') + \sum_{r \in \ker} \Theta_{:l-1}^{s+r,s'+r}(x,x') \mathbb{E}\,_{[z,z']^T \sim \mathcal{N}(0,\Sigma_{l-1}(x,x'))} \phi'(z^{s+r}) \phi'(z^{\prime,s'+r}), \end{equation} which resembles the corresponding result for fully-connected nets when $\ker = [0]$. \subsection{Computing the expectations} The only obstacle that prevents explicit computation here is the expectations over $[z,z']^T \sim \mathcal{N}(0,\Sigma_l(x,x'))$. Fortunately, these expectations can be computed analytically for certain $\phi$: in particular, for ReLU and the error function. We cover only the case of ReLU here as it is more widely used in practice.
Let us omit the $l$-subscript and the arguments $(x,x')$ for brevity: $\Sigma = \begin{pmatrix} q_{11} & q_{12}\\q_{12} & q_{22} \end{pmatrix}$, and we are interested in $\mathbb{E}\,_{[u,v]^T \sim \mathcal{N}(0,\Sigma)} [u]_+ [v]_+$ and $\mathbb{E}\,_{[u,v]^T \sim \mathcal{N}(0,\Sigma)} 1_{u>0} 1_{v>0}$. Following \cite{arora2019exact}, we start with assuming $q_{11} = q_{22} = 1$ and $q_{12} = \lambda$; $\Sigma \geq 0$ implies $|\lambda| \leq 1$. Then \begin{multline} \mathbb{E}\,_{[u,v]^T \sim \mathcal{N}(0,\Sigma)} [u]_+ [v]_+ = \mathbb{E}\,_{[u,\tilde v]^T \sim \mathcal{N}(0,I)} [u]_+ \left[\lambda u + \sqrt{1-\lambda^2} \tilde v\right]_+ =\\= \mathbb{E}\,_{u \sim \mathcal{N}(0,1)} \left([u]_+ \int_{-\frac{\lambda}{\sqrt{1-\lambda^2}} u}^\infty \left(\lambda u + \sqrt{1-\lambda^2} \tilde v\right) \frac{1}{\sqrt{2\pi}} e^{-\tilde v^2/2} \, d\tilde v \right) =\\= \mathbb{E}\,_{u \sim \mathcal{N}(0,1)} \left( [u]_+ \left( \lambda u \frac{1}{2} \left(1 - \erf\left(-\frac{\lambda}{\sqrt{2-2\lambda^2}} u\right)\right) + \sqrt{\frac{1-\lambda^2}{2\pi}} e^{-\frac{\lambda^2}{2-2\lambda^2} u^2} \right) \right) =\\= \int_0^\infty u \left(\lambda u \frac{1}{2} \left(1 - \erf\left(-\frac{\lambda}{\sqrt{2-2\lambda^2}} u\right)\right) + \sqrt{\frac{1-\lambda^2}{2\pi}} e^{-\frac{\lambda^2}{2-2\lambda^2} u^2}\right) \frac{1}{\sqrt{2\pi}} e^{-u^2/2} \, du =\\= \frac{\lambda}{4} + \int_0^\infty u \left(\lambda u \frac{1}{2} \erf\left(\frac{\lambda}{\sqrt{2-2\lambda^2}} u\right) + \sqrt{\frac{1-\lambda^2}{2\pi}} e^{-\frac{\lambda^2}{2-2\lambda^2} u^2}\right) \frac{1}{\sqrt{2\pi}} e^{-u^2/2} \, du =\\= \frac{\lambda}{4} + \frac{\lambda}{2} A + \sqrt{\frac{1-\lambda^2}{2\pi}} B. 
\end{multline} \begin{multline} A = \int_0^\infty u^2 \erf\left(\frac{\lambda}{\sqrt{2-2\lambda^2}} u\right) \frac{1}{\sqrt{2\pi}} e^{-u^2/2} \, du = -\int_0^\infty u \erf\left(\frac{\lambda}{\sqrt{2-2\lambda^2}} u\right) \frac{1}{\sqrt{2\pi}} \, d\left(e^{-u^2/2}\right) =\\= \int_0^\infty \left(\erf\left(\frac{\lambda}{\sqrt{2-2\lambda^2}} u\right) + u \frac{\lambda}{\sqrt{2-2\lambda^2}} \frac{2}{\sqrt{\pi}} e^{-\frac{\lambda^2}{2-2\lambda^2} u^2} \right) \frac{1}{\sqrt{2\pi}} e^{-u^2/2} \, du = C + \frac{\lambda}{\sqrt{2-2\lambda^2}} \frac{2}{\sqrt{\pi}} B. \end{multline} \begin{equation} C = \int_0^\infty \erf\left(\frac{\lambda}{\sqrt{2-2\lambda^2}} u\right) \frac{1}{\sqrt{2\pi}} e^{-u^2/2} \, du = \frac{1}{\pi} \arctan\left(\frac{\lambda}{\sqrt{1-\lambda^2}}\right) = \frac{1}{\pi} \arcsin\lambda. \end{equation} \begin{equation} B = \int_0^\infty u e^{-\frac{\lambda^2}{2-2\lambda^2} u^2} \frac{1}{\sqrt{2\pi}} e^{-u^2/2} \, du = \frac{1}{\sqrt{2\pi}}\int_0^\infty u e^{-\frac{1}{2-2\lambda^2} u^2} \, du = \frac{1-\lambda^2}{\sqrt{2\pi}}. \end{equation} Putting all together, \begin{multline} \mathbb{E}\,_{[u,v]^T \sim \mathcal{N}(0,\Sigma)} [u]_+ [v]_+ = \frac{\lambda}{4} + \frac{\lambda}{2} A + \sqrt{\frac{1-\lambda^2}{2\pi}} B = \frac{\lambda}{4} + \frac{\lambda}{2} C + \frac{\lambda^2}{\sqrt{1-\lambda^2}} \frac{1}{\sqrt{2\pi}} B + \sqrt{\frac{1-\lambda^2}{2\pi}} B =\\= \frac{\lambda}{4} + \frac{\lambda}{2} C + \frac{1}{\sqrt{1-\lambda^2}} \frac{1}{\sqrt{2\pi}} B = \frac{\lambda}{4} + \frac{\lambda}{2\pi} \arcsin\lambda + \frac{\sqrt{1-\lambda^2}}{2\pi} =\\= \frac{\lambda\left(\frac{\pi}{2} + \arcsin\lambda\right) + \sqrt{1-\lambda^2}}{2\pi} = \frac{\lambda\left(\pi - \arccos\lambda\right) + \sqrt{1-\lambda^2}}{2\pi}. 
\end{multline} And for the second quantity, \begin{multline} \mathbb{E}\,_{[u,v]^T \sim \mathcal{N}(0,\Sigma)} 1_{u>0} 1_{v>0} = \mathbb{E}\,_{[u,\tilde v]^T \sim \mathcal{N}(0,I)} 1_{u>0} 1_{\lambda u + \sqrt{1-\lambda^2} \tilde v > 0} =\\= \mathbb{E}\,_{u \sim \mathcal{N}(0,1)} \left(1_{u>0} \int_{-\frac{\lambda}{\sqrt{1-\lambda^2}} u}^\infty \frac{1}{\sqrt{2\pi}} e^{-\tilde v^2/2} \, d\tilde v \right) =\\= \mathbb{E}\,_{u \sim \mathcal{N}(0,1)} \left( 1_{u>0} \frac{1}{2} \left(1 - \erf\left(-\frac{\lambda}{\sqrt{2-2\lambda^2}} u\right)\right) \right) =\\= \int_0^\infty \frac{1}{2} \left(1 - \erf\left(-\frac{\lambda}{\sqrt{2-2\lambda^2}} u\right)\right) \frac{1}{\sqrt{2\pi}} e^{-u^2/2} \, du =\\= \frac{1}{4} + \int_0^\infty \frac{1}{2} \erf\left(\frac{\lambda}{\sqrt{2-2\lambda^2}} u\right) \frac{1}{\sqrt{2\pi}} e^{-u^2/2} \, du =\\= \frac{1}{4} + \frac{1}{2} C = \frac{\frac{\pi}{2} + \arcsin\lambda}{2\pi} = \frac{\pi - \arccos\lambda}{2\pi}. \end{multline} A general positive semi-definite matrix $\Sigma$ can be expressed as $\Sigma = D \Lambda D$, where $\Lambda = \begin{pmatrix} 1 & \lambda\\\lambda & 1 \end{pmatrix}$, $D = \begin{pmatrix} \sqrt{q_{11}} & 0\\0 & \sqrt{q_{22}} \end{pmatrix}$, and $\lambda = \frac{q_{12}}{\sqrt{q_{11} q_{22}}}$. Then, using homogeneity of ReLU, \begin{multline} \mathbb{E}\,_{[u,v]^T \sim \mathcal{N}(0,\Sigma)} [u]_+ [v]_+ = \mathbb{E}\,_{[u,v]^T \sim \mathcal{N}(0,D \Lambda D)} [u]_+ [v]_+ = \mathbb{E}\,_{[u,v]^T \sim \mathcal{N}(0,\Lambda)} [\sqrt{q_{11}} u]_+ [\sqrt{q_{22}} v]_+ =\\= \sqrt{q_{11} q_{22}} \mathbb{E}\,_{[u,v]^T \sim \mathcal{N}(0,\Lambda)} [u]_+ [v]_+ = \sqrt{q_{11} q_{22}} \frac{\lambda\left(\pi - \arccos\left(\frac{q_{12}}{\sqrt{q_{11} q_{22}}}\right)\right) + \sqrt{1-\frac{q_{12}^2}{q_{11} q_{22}}}}{2\pi} =\\= \frac{\lambda \sqrt{q_{11} q_{22}} \left(\pi - \arccos\left(\frac{q_{12}}{\sqrt{q_{11} q_{22}}}\right)\right) + \sqrt{q_{11} q_{22} - q_{12}^2}}{2\pi}. 
\end{multline} \begin{multline} \mathbb{E}\,_{[u,v]^T \sim \mathcal{N}(0,\Sigma)} 1_{u>0} 1_{v>0} = \mathbb{E}\,_{[u,v]^T \sim \mathcal{N}(0,D \Lambda D)} 1_{u>0} 1_{v>0} =\\= \mathbb{E}\,_{[u,v]^T \sim \mathcal{N}(0,\Lambda)} 1_{u>0} 1_{v>0} = \frac{\pi - \arccos\left(\frac{q_{12}}{\sqrt{q_{11} q_{22}}}\right)}{2\pi}. \end{multline} Similar explicit computations are available for convolutional networks \cite{arora2019exact}, as well as for generic tensor programs, as long as the nonlinearities used belong to a certain list (which includes e.g. ReLU and the error function, see \cite{novak2019neural} for a concrete implementation and \cite{yang2020tensor_ii} for generic recurrent formulas in terms of expectations). However, a typical convolutional network also uses max poolings and other nonlinear maps for which explicit formulas for expectations are not available at the moment. In this case, one can rely on a finite-width Monte-Carlo estimate for $\Theta(x,x')$, i.e. $\hat\Theta^{(M)}(x,x') = \frac{1}{M} \sum_{k=1}^M \hat\Theta_k(x,x')$, where $M$ is the number of independent initializations and $\hat\Theta_k(x,x')$ is the empirical kernel of the $k$-th initialization for width $n$. According to convergence results, $\hat\Theta^{(M)}(x,x') \to \Theta(x,x')$ as $n \to \infty$ for any fixed $M \geq 1$. Also, $\hat\Theta^{(M)}(x,x') \to \mathbb{E}\, \hat\Theta(x,x')$ as $M \to \infty$ for any fixed $n$. Unfortunately, one cannot guarantee that $\mathbb{E}\, \hat\Theta(x,x') = \Theta(x,x')$; therefore, $\hat\Theta^{(M)}(x,x')$ can be a biased estimate. However, according to the experiments of \cite{novak2019neural}, the discrepancy between $\hat\Theta^{(M)}$ and $\Theta$ decreases as $M$ grows for any finite $n$. This means that the main component of this discrepancy is not bias but variance, which is reduced by adding more Monte-Carlo samples.
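As a sanity check of both the closed forms and the Monte-Carlo estimate $\hat\Theta^{(M)}$, the sketch below compares the averaged empirical NTK with the limit NTK for a one-hidden-layer ReLU network $f(x) = n^{-1/2} \sum_i v_i \phi(n_0^{-1/2} W_i \cdot x)$, whose limit NTK equals $\mathbb{E}\,\phi(z)\phi(z') + q_1 \mathbb{E}\,\phi'(z)\phi'(z')$. The network, widths, and seed are ours.

```python
import math, random

def emp_ntk(x, xp, n, rng):
    """Empirical NTK of f(x) = n^{-1/2} sum_i v_i relu(n0^{-1/2} W_i . x)
    for one random initialization, via explicitly written-out gradients."""
    n0 = len(x)
    dot = sum(a * b for a, b in zip(x, xp))
    total = 0.0
    for _ in range(n):
        w = [rng.gauss(0.0, 1.0) for _ in range(n0)]
        v = rng.gauss(0.0, 1.0)
        h = sum(wj * xj for wj, xj in zip(w, x)) / math.sqrt(n0)
        hp = sum(wj * xj for wj, xj in zip(w, xp)) / math.sqrt(n0)
        total += max(h, 0.0) * max(hp, 0.0) / n          # df/dv_i terms
        if h > 0 and hp > 0:                             # df/dW_ij terms
            total += v * v * dot / (n * n0)
    return total

def limit_ntk(x, xp):
    """Closed-form limit NTK using the ReLU expectations derived above."""
    n0 = len(x)
    qa = sum(a * a for a in x) / n0
    qb = sum(a * a for a in xp) / n0
    qab = sum(a * b for a, b in zip(x, xp)) / n0
    lam = max(-1.0, min(1.0, qab / math.sqrt(qa * qb)))
    ee = math.sqrt(qa * qb) * (lam * (math.pi - math.acos(lam))
                               + math.sqrt(1.0 - lam * lam)) / (2.0 * math.pi)
    dd = (math.pi - math.acos(lam)) / (2.0 * math.pi)
    return ee + qab * dd

rng = random.Random(0)
x, xp = [1.0, 0.0], [0.6, 0.8]
est = sum(emp_ntk(x, xp, 500, rng) for _ in range(20)) / 20   # M = 20, n = 500
```

For this toy model the estimator is unbiased, so the gap between `est` and the limit shrinks as $nM$ grows.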
We also have to note that \cite{arora2019exact} reports significant accuracy drops on a CNN of width $n=512$ when using a single-sample Monte-Carlo estimate for the NTK instead of the exact limit NTK. However, they did not provide any results for $M > 1$; therefore, this accuracy drop could be caused by the large variance of $\hat\Theta$. \subsection{NTK for attention layers} A neural tangent kernel is typically considered for architectures for which analytical computation is available, i.e. for fully-connected and convolutional ReLU nets, see \cref{sec:limit}. One of the necessary conditions for exact computation is that the output of each individual pre-activation neuron becomes a Gaussian process in the limit of large width. This allows one to apply the Master theorem (\cref{thm:master_theorem}), and express the NTK as an expectation over certain Gaussian variables. However, there exist layers which do not enjoy Gaussian behavior even in the limit of large width. The attention layer is one example: \begin{equation} f(x) = \mathrm{Softmax}\left(G(x)\right) V(x), \qquad G(x) = \frac{1}{\sqrt{n}} Q(x) K^T(x), \end{equation} where we define queries $Q(x) = x W_Q$, keys $K(x) = x W_K$, and values $V(x) = x W_V$. Dimensions of the corresponding matrices are: $W_Q \in \mathbb{R}^{n_0 \times n}$, $W_K \in \mathbb{R}^{n_0 \times n}$, and $W_V \in \mathbb{R}^{n_0 \times n_H}$, and $x \in \mathbb{R}^{d \times n_0}$. If $W_Q$ and $W_K$ are independent with iid zero mean unit variance entries, then $G_{\alpha\beta}(x) = n^{-1/2} \sum_{i=1}^n \sum_{j,k=1}^{n_0} x_{\alpha,j} x_{\beta,k} W_Q^{ji} W_K^{ki}$ converges by the CLT to a Gaussian variable. The resulting limit matrix is therefore a $d \times d$ matrix with (non-degenerate) Gaussian entries. Since $d$ stays fixed as $n \to \infty$, we cannot apply any limit theorem to reason about the distribution of $f_i(x)$ for some $i \in [n_H]$.
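The non-degenerate Gaussian limit of a single logit $G_{\alpha\beta}(x)$ can be checked numerically: from the entry formula above, a direct computation gives $\mathbb{E}\, G_{\alpha\beta}^2 = \|x_\alpha\|^2 \|x_\beta\|^2$, and a seeded Monte-Carlo sketch (dimensions and seed are ours) reproduces this variance.

```python
import random

def attn_logit(xa, xb, n, rng):
    """One draw of G_ab = n^{-1/2} sum_i (xa . WQ_{.,i}) (xb . WK_{.,i})."""
    s = 0.0
    for _ in range(n):
        a = sum(xj * rng.gauss(0.0, 1.0) for xj in xa)   # xa . (i-th column of WQ)
        b = sum(xj * rng.gauss(0.0, 1.0) for xj in xb)   # xb . (i-th column of WK)
        s += a * b
    return s / n ** 0.5

rng = random.Random(1)
xa, xb = [1.0, 0.0], [0.0, 1.0]
samples = [attn_logit(xa, xb, 20, rng) for _ in range(4000)]
mean = sum(samples) / len(samples)
var = sum((g - mean) ** 2 for g in samples) / len(samples)
# mean should be near 0 and var near |xa|^2 |xb|^2 = 1
```

The variance stays $O(1)$ no matter how large $n$ is, which is exactly why the logits do not concentrate.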
\cite{hron2020infinite} consider a multi-head attention layer and show that it does enjoy Gaussian process behavior as the width and the number of heads go to infinity simultaneously: \begin{equation} f(x) = [f^1(x), \ldots, f^n(x)] W_O, \qquad f_i(x) = \mathrm{Softmax}\left(G_i(x)\right) V_i(x), \qquad G_i(x) = \frac{1}{\sqrt{n}} Q_i(x) K_i^T(x), \end{equation} where $W_O \in \mathbb{R}^{n_H n \times n_H}$ and all $Q_i$, $K_i$, and $V_i$ are iid for different $i \in [n]$. To gain some intuition about the result of \cite{hron2020infinite}, consider $n_H=1$, i.e. outputs of all individual heads are scalars and the final output is also a scalar. In this case, $f(x)$ is a product of a vector with $n$ iid entries and a matrix with iid $\mathcal{N}(0,n^{-1})$ entries. This product tends to a Gaussian as $n \to \infty$ by the CLT. Considering a set of inputs gives a random Gaussian vector similar to the fully-connected case, see \cref{sec:limit_fc_nets}. \cite{hron2020infinite} give exact formulas for the covariances $q(x,x')$ and the kernel $\Theta(x,x')$; they are implemented as layers in Neural Tangents \cite{novak2019neural}. \section{Computational aspects} \label{sec:computations} \subsection{Inference optimizations} Suppose one is able to compute (or approximate) the limit kernel, $\Theta(x,x')$, on any pair of points $(x,x')$. The result of kernel regression at convergence ($t \to \infty$) in the limit of infinite width is then given by (see Eq.~(\ref{eq:lin_solution_square_loss})): \begin{equation} f_\infty(x) = f_0(x) - \Theta(x, \vec x) \Theta^{-1}(\vec x, \vec x) (f_0(\vec x) - \vec y), \label{eq:inf_wide_solution_square_loss} \end{equation} where $\Theta(\vec x, \vec x) \in \mathbb{R}^{m \times m}$ and $\Theta(x, \vec x) \in \mathbb{R}^{1 \times m}$.
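A minimal sketch of evaluating Eq.~(\ref{eq:inf_wide_solution_square_loss}) for small $m$, assuming $f_0 \equiv 0$ unless a function is supplied. The kernel below is a simple stand-in (a real use case would plug in the limit NTK), and all names are ours.

```python
def solve(A, b):
    """Solve A z = b by Gauss-Jordan elimination with partial pivoting (small m)."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(n):
            if r != c and M[r][c] != 0.0:
                f = M[r][c] / M[c][c]
                M[r] = [a - f * bc for a, bc in zip(M[r], M[c])]
    return [M[i][n] / M[i][i] for i in range(n)]

def predict(kernel, X, y, x, f0=None):
    """f_inf(x) = f0(x) - Theta(x, X) Theta(X, X)^{-1} (f0(X) - y); f0 = 0 by default."""
    K = [[kernel(a, b) for b in X] for a in X]
    resid = [(f0(a) if f0 else 0.0) - yi for a, yi in zip(X, y)]
    alpha = solve(K, resid)
    kx = [kernel(x, b) for b in X]
    return (f0(x) if f0 else 0.0) - sum(ki * ai for ki, ai in zip(kx, alpha))

# Stand-in kernel; a real use case would plug in the limit NTK here.
dot = lambda u, v: sum(a * b for a, b in zip(u, v)) + 1.0
X, y = [[0.0], [1.0]], [1.0, 3.0]   # toy training set
```

With an invertible Gram matrix the predictor interpolates the training targets exactly, as expected of Eq.~(\ref{eq:inf_wide_solution_square_loss}).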
For multi-class problems, $f(x) \in \mathbb{R}^k$, where $k$ is the number of classes, and the kernel evaluated at two points becomes a $k \times k$ matrix: \begin{equation} \hat\Theta_{jj'}(x,x') = \nabla_\theta^T f^j(x) \nabla_\theta f^{j'}(x'). \end{equation} Define a Gram matrix as $\hat\Theta_{ik+j,i'k+j'}(\vec x, \vec x) = \hat\Theta_{jj'}(x_i,x_{i'})$ and its limit counterpart $\Theta(\vec x, \vec x) \in \mathbb{R}^{mk \times mk}$ accordingly; similarly for $\Theta(x, \vec x) \in \mathbb{R}^{k \times mk}$. If one defines $f_0^{ik+j}(\vec x) = f_0^j(x_i)$, the corresponding solution takes the same form as Eq.~(\ref{eq:inf_wide_solution_square_loss}). Evaluating this quantity naively requires storing and inverting the kernel Gram matrix $\Theta(\vec x, \vec x) \in \mathbb{R}^{mk \times mk}$. Storing it requires $O(m^2 k^2)$ memory, while inverting it takes $O(m^3 k^3)$ time, making such a naive approach computationally infeasible for datasets with $m k \gtrsim 10^4$ (nevertheless, for small datasets, the naive approach for computing the NTK estimator (\ref{eq:inf_wide_solution_square_loss}) is feasible and may provide an advantage over traditional SGD training, see \cite{arora2019harnessing}). Let us start by discussing two important optimizations implemented in Neural Tangents \cite{novak2019neural}. Note that as discussed in \cref{sec:limit}, for a fully-connected net (and, in fact, for any tensor program, see \cite{yang2019tensor_i}) preactivations of different neurons on a given layer become iid as the width goes to infinity. This implies $\Theta_{jj'}(x,x') = \Theta_{11}(x,x') 1_{j=j'}$. Therefore the kernel Gram matrix has a block structure: $\Theta(\vec x, \vec x) = \Theta|_{k=1}(\vec x, \vec x) \otimes I_{k \times k}$. This reduces the memory footprint to $O(m^2)$ and the time requirement to $O(m^3)$. The second optimization deals with convolutional networks. Note that computing $\Theta(x,x')$ requires computing all intermediate covariances $q_l(x,x')$.
These covariances were scalars for fully-connected nets since different neurons of a given layer became iid as width went to infinity. However, for an image with $d$ pixels, different pixels of a given layer are dependent since their preactivations are computed using the same weight matrices. That is why, for convolutional nets, one has to construct intermediate covariance matrices of size $d \times d$; storing and computing them for each pair of points requires $O(m^2 d^2)$ memory and time, even surpassing the time required for Gram matrix inversion when $d^2 > m$ (this happens e.g. for CIFAR10, for which $d = 32 \times 32 = 1024$, $m = 50{,}000$, $k = 10$). However, as was noted e.g. in \cite{xiao2018dynamical}, if no pooling is used in the network, it suffices to compute and store $d$ independent $m \times m$ blocks of this covariance matrix, bringing the time requirement down to $O(m^2 d)$, which is usually not greater than the $O(m^3)$ time required for inversion. So far, the main computational bottleneck was the time required for inverting the kernel Gram matrix. This problem is not specific to the NTK; it appears for any regularized kernel regression problem: \begin{equation} \hat f_\lambda = \argmin_{f \in \mathcal{H}} \sum_{j=1}^m \ell(y_j, f(x_j)) + \lambda \| f \|_\mathcal{H}^2. \label{eq:kernel_regression} \end{equation} Here $\mathcal{H}$ is a Hilbert space of functions of the form $f(x) = \Phi^T(x) \theta$; the corresponding scalar product is $\langle \Phi^T(x) \theta, \Phi^T(x) \theta' \rangle = \theta^T \theta'$. Hence $\| f \|_\mathcal{H}^2 = \langle f, f \rangle = \|\theta\|_2^2$ for $f(x) = \Phi^T(x) \theta$. Problem~(\ref{eq:kernel_regression}) has an associated kernel, which we denote with the same letter as the NTK: $\Theta(x,x') = \Phi^T(x) \Phi(x')$. Due to the representer theorem \cite{kimeldorf1970correspondence}, any solution of Problem~(\ref{eq:kernel_regression}) has the form $f(x) = \sum_{j=1}^m \alpha_j \Theta(x,x_j)$.
For now, consider quadratic loss: $\ell(y,z) = \frac{1}{2} \| y - z \|_2^2$. The problem above becomes: \begin{equation} \vec\alpha = \argmin_{\vec\alpha \in \mathbb{R}^m} \frac{1}{2} \sum_{j=1}^m \left( \sum_{j'=1}^m \alpha_{j'} \Theta(x_j,x_{j'}) - y_j \right)^2 + \lambda \left\| \sum_{j=1}^m \alpha_j \Phi(x_j) \right\|_2^2. \end{equation} This problem is convex; therefore, any critical point of the corresponding functional is a solution: \begin{equation} (\Theta(\vec x, \vec x) + \lambda I) \vec\alpha = \vec y. \end{equation} As long as $\Theta(\vec x, \vec x) + \lambda I$ is invertible, the solution is $\vec\alpha = (\Theta(\vec x, \vec x) + \lambda I)^{-1} \vec y$. Setting $\lambda = 0$, we recover the expected Eq.~(\ref{eq:inf_wide_solution_square_loss}) (since $\mathbb{E}\, f_0(x) = 0$). While the representer theorem guarantees that it suffices to look for solutions only of the form $f(x) = \sum_{j=1}^m \alpha_j \Theta(x,x_j)$ instead of inspecting the whole $\mathcal{H}$, we, following \cite{meanti2020kernel}, consider further contracting the search space by sampling $m'$ points $(\tilde x_1, \ldots, \tilde x_{m'})$ uniformly out of $m$ and looking for solutions of the form $f(x) = \sum_{j=1}^{m'} \tilde\alpha_j \Theta(x,\tilde x_j)$. This is known as the Nystr\"om approximation. The minimization problem then becomes: \begin{equation} \vec{\tilde\alpha} = \argmin_{\vec{\tilde\alpha} \in \mathbb{R}^{m'}} \frac{1}{2} \sum_{j=1}^m \left( \sum_{j'=1}^{m'} \tilde\alpha_{j'} \Theta(x_j,\tilde x_{j'}) - y_j \right)^2 + \lambda \left\| \sum_{j=1}^{m'} \tilde\alpha_j \Phi(\tilde x_j) \right\|_2^2. \end{equation} This problem is again convex and its critical points satisfy the following: \begin{equation} \left(\Theta\left(\vec{\tilde x}, \vec x\right) \Theta\left(\vec x, \vec{\tilde x}\right) + \lambda \Theta\left(\vec{\tilde x}, \vec{\tilde x}\right)\right) \vec{\tilde \alpha} = \Theta\left(\vec{\tilde x}, \vec x\right) \vec y.
\label{eq:critical_points_nystrom} \end{equation} Computing the kernel-kernel product takes $O(m {m'}^2)$ time and solving the above system directly takes $O({m'}^3)$ time. The space requirement can be reduced to $O({m'}^2)$ as the ``rectangular Gram matrix'' can be computed in $m' \times m'$ blocks. Conjugate gradient methods are iterative methods designed for approximately solving linear systems of the form $A \vec z = \vec b$ without explicitly inverting the matrix $A$. The main operation used by these methods on each iteration is a matrix-vector product. In our case, the matrix-vector product requires $O(mm' + {m'}^2)$ time; note that it allows one to avoid computing the kernel-kernel product explicitly, by computing two matrix-vector products instead, costing $O(mm')$ time each. Putting it all together, solving system~(\ref{eq:critical_points_nystrom}) with $s$ iterations of a conjugate gradient method requires $O(s(mm' + {m'}^2))$ time and $O({m'}^2)$ space. Based on certain theoretical results, \cite{meanti2020kernel} suggest taking $m' = O(\sqrt{m})$ and $s = O(\log m)$. The resulting $O(m \sqrt{m} \log m)$ time and $O(m)$ space allows for applying their method to datasets of size up to $m \sim 10^6$ (the size of ImageNet). \cite{meanti2020kernel} also discuss several optimizations aimed at improving the GPU efficiency of the method. While their method is publicly available as an open-source library\footnote{\url{https://github.com/FalkonML/falkon}}, we are not aware of any of its applications to the NTK. \subsection{Computing the empirical kernel} All the previous discussion of the current section assumed that the kernel, $\Theta$, can be efficiently computed. This is the case for certain models for which analytic computations are available. Indeed, for $L$-layer fully-connected nets, the limit Gram matrix $\Theta(\vec x, \vec x)$ can be computed in $O(m^2 L)$ time while storing it requires $O(m^2)$ space, see Eqs. (\ref{eq:q_iteration}) and (\ref{eq:Theta_iteration}).
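Before moving on, the Nystr\"om-plus-conjugate-gradients pipeline described above is simple enough to sketch in full. The solver below is a textbook CG using only matrix-vector products; the kernel in the test is a stand-in and all names are ours (this is a sketch, not the Falkon implementation).

```python
def cg(matvec, b, iters=100, tol=1e-16):
    """Conjugate gradients for A z = b, using only matrix-vector products with A."""
    z = [0.0] * len(b)
    r = list(b)            # residual b - A z (z = 0 initially)
    p = list(r)
    rs = sum(x * x for x in r)
    for _ in range(iters):
        if rs < tol:
            break
        Ap = matvec(p)
        denom = sum(x * y for x, y in zip(p, Ap))
        if denom == 0.0:
            break
        a = rs / denom
        z = [zi + a * pi for zi, pi in zip(z, p)]
        r = [ri - a * api for ri, api in zip(r, Ap)]
        rs_new = sum(x * x for x in r)
        p = [ri + (rs_new / rs) * pi for ri, pi in zip(r, p)]
        rs = rs_new
    return z

def nystrom_fit(kernel, X, y, Xc, lam):
    """Fit alpha for f(x) = sum_j alpha_j kernel(x, Xc[j]) over Nystrom centers Xc,
    solving (K_cx K_xc + lam K_cc) alpha = K_cx y with CG (matvecs only)."""
    K_xc = [[kernel(x, c) for c in Xc] for x in X]     # m x m'
    K_cc = [[kernel(a, c) for c in Xc] for a in Xc]    # m' x m'
    mp = len(Xc)
    def matvec(v):
        t = [sum(row[j] * v[j] for j in range(mp)) for row in K_xc]            # K_xc v
        u = [sum(K_xc[i][j] * t[i] for i in range(len(t))) for j in range(mp)] # K_cx t
        return [uj + lam * sum(K_cc[j][k] * v[k] for k in range(mp))
                for j, uj in enumerate(u)]
    b = [sum(K_xc[i][j] * y[i] for i in range(len(y))) for j in range(mp)]
    return cg(matvec, b)
```

Each CG iteration costs two "rectangular" matvecs plus one $m' \times m'$ matvec, matching the $O(mm' + {m'}^2)$ per-iteration count above.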
For more complex models, e.g. for those including max-poolings, closed-form analytic expressions for the limit kernel are not currently available. However, the empirical kernel, $\hat\Theta$, can always be computed explicitly and is close to $\Theta$ for sufficiently large width (see convergence theorems in \cref{sec:convergence}). For this reason, we are looking for ways to compute $\hat\Theta$ efficiently. In order to simplify the illustration, we will discuss only time requirements in the sequel. Recall that the empirical kernel is a product of two jacobians: $\hat\Theta_{jj'}(x,x') = \nabla^T_\theta f^j(x) \nabla_\theta f^{j'}(x')$. Therefore the time cost for computing the kernel consists of the time required to compute the jacobian and the time required for jacobian contraction. Denote by $[FP]$ the cost of a single forward pass for our network; a single backward pass has approximately the same cost. Then computing a jacobian for a given point $x$ takes $O(k [FP])$ time. Contracting two jacobians for fixed $j$ and $j'$ takes $O(N)$ time, where $N$ is the total number of parameters: $\theta \in \mathbb{R}^N$. Putting it all together, computing the full $mk \times mk$ Gram matrix takes $O(m k [FP] + m^2 k^2 N)$ time. \cite{novakfast} propose a method for computing the NTK-vector product. It can be directly embedded into the method of \cite{meanti2020kernel} using conjugate gradients, or used for computing the kernel explicitly by applying it to columns of the $k \times k$ identity matrix. Their method boils down to casting a matrix-vector product, where the matrix is the empirical NTK, into a vector-jacobian product followed by a jacobian-vector product: $\sum_{j'=1}^k \hat\Theta_{jj'}(x,x') v_{j'} = \nabla^T_\theta f^j(x) \sum_{j'=1}^k \nabla_\theta f^{j'}(x') v_{j'}$. Both of these products can be computed in $O([FP])$ time.
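The vjp-then-jvp identity can be verified on a one-hidden-layer ReLU network with $k$ outputs and hand-written derivatives. The architecture and names are ours; the point is only that the NTK-vector product agrees with explicit jacobian contraction without materializing per-output jacobians.

```python
import math, random

relu = lambda z: max(z, 0.0)
step = lambda z: 1.0 if z > 0 else 0.0   # relu'

def hidden(W, x):
    n0 = len(x)
    return [sum(wi[t] * x[t] for t in range(n0)) / math.sqrt(n0) for wi in W]

def ntk_vec_product(V, W, x, xp, v):
    """sum_{j'} NTK_{j j'}(x, x') v_{j'} as a vjp at x' followed by a jvp at x,
    for f^j(x) = n^{-1/2} sum_i V[j][i] relu(h_i(x))."""
    n, n0, k = len(W), len(W[0]), len(V)
    h, hp = hidden(W, x), hidden(W, xp)
    # vjp at x': cotangent v -> parameter-space vector (dV, dW)
    dV = [[v[l] * relu(hp[i]) / math.sqrt(n) for i in range(n)] for l in range(k)]
    dW = [[sum(v[l] * V[l][i] for l in range(k)) * step(hp[i]) * xp[t]
           / math.sqrt(n * n0) for t in range(n0)] for i in range(n)]
    # jvp at x: parameter tangent (dV, dW) -> output tangent
    out = []
    for j in range(k):
        s = 0.0
        for i in range(n):
            s += dV[j][i] * relu(h[i])
            s += V[j][i] * step(h[i]) \
                 * sum(dW[i][t] * x[t] for t in range(n0)) / math.sqrt(n0)
        out.append(s / math.sqrt(n))
    return out

def ntk_explicit(V, W, x, xp):
    """Full k x k empirical NTK via explicit jacobian contraction."""
    n, n0, k = len(W), len(W[0]), len(V)
    h, hp = hidden(W, x), hidden(W, xp)
    dot = sum(a * b for a, b in zip(x, xp))
    return [[(sum(relu(h[i]) * relu(hp[i]) for i in range(n)) / n if j == jp else 0.0)
             + sum(V[j][i] * V[jp][i] * step(h[i]) * step(hp[i])
                   for i in range(n)) * dot / (n * n0)
             for jp in range(k)] for j in range(k)]

rng = random.Random(2)
n, n0, k = 5, 3, 2
V = [[rng.gauss(0.0, 1.0) for _ in range(n)] for _ in range(k)]
W = [[rng.gauss(0.0, 1.0) for _ in range(n0)] for _ in range(n)]
x, xp, v = [0.5, -1.0, 2.0], [1.0, 0.3, -0.7], [1.0, -2.0]
K = ntk_explicit(V, W, x, xp)
direct = [sum(K[j][jp] * v[jp] for jp in range(k)) for j in range(k)]
fast = ntk_vec_product(V, W, x, xp, v)
```

In an autodiff framework the two passes would be a `vjp` followed by a `jvp`, each at the cost of roughly one forward pass.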
Therefore this method allows one to compute the full $mk \times mk$ Gram matrix in $O(m^2 k [FP])$ time, which improves over the jacobian contraction method as long as $[FP] < C k N$ for a certain constant $C$. Memory requirements that we do not show here are, in fact, the same for both methods; see \cite{novakfast}. \cite{novakfast} also propose another optimization exploiting certain structure of the function $f$: e.g. weights of a fully-connected net are aligned sequentially, while weights of a convolutional layer are arranged in blocks. We do not discuss it in the present survey. Both optimizations are publicly available as JAX \cite{jax2018github} function transformations.\footnote{\url{https://github.com/iclr2022anon/fast_finite_width_ntk}} \section{Applications} \subsection{A kernel method} \subsubsection{Supervised learning on small datasets} The NTK is a kernel; therefore, it can be used in any kernel method, e.g. kernel ridge regression or kernel SVM. However, computing the kernel Gram matrix on a dataset of size $m$ requires $O(m^2)$ time, which is infeasible for large datasets. One can either rely on certain approximations, e.g. the Nystr\"om approximation (see \cref{sec:computations}), or restrict oneself to small datasets. One possible advantage of kernel methods over neural nets is lower variance. Indeed, the only variance of a kernel method is induced by sampling the dataset, while a neural network has several more sources of variance, e.g. initialization randomness and batch sampling. It is likely that this difference in variances is especially important when the dataset is small. The other advantage of kernel methods is having a smaller number of hyperparameters compared to neural nets. This makes kernel methods useful as robust baseline methods that may outperform large neural nets in situations where there is no budget for careful hyperparameter tuning.
As an illustration, \cite{arora2019harnessing} demonstrated that kernel regression with a 14-layer CNTK consistently outperforms ResNet-34 trained with standard hyperparameters on a random subset of CIFAR-10 with $\leq 640$ samples. \subsubsection{Neural architecture search using the NTK condition number} There are other setups where computing the Gram matrix on a small dataset is sufficient. For example, \cite{chen2021neural} propose the condition number of the NTK Gram matrix as a proxy-measure of a given architecture's performance; this proxy-measure is then used to guide neural architecture search (NAS). In this case, we do not need the Gram matrix itself but only the condition number, which motivates computing the matrix on a small subset of examples. While the condition number on a random subset Gram matrix provides only a random estimate, possibly noisy and biased, of the true condition number, the way we use it does not require exact estimates. Indeed, a performance measure in NAS algorithms is mainly used to cut off pathological, low-performing models from a population, rather than finding the best one. Therefore any measure that correlates positively with performance suffices. The use of the condition number as a proxy-measure of performance relies on two hypotheses: (1) performance correlates with trainability, and (2) trainability correlates with the NTK condition number. The first hypothesis is mainly motivated by a natural implication: ``bad trainability implies low performance''. To motivate the second hypothesis, let us consider kernel regression trained with the usual discrete-time gradient descent: \begin{equation} f_{t+1}(\vec x) = f_t(\vec x) + \eta \Theta(\vec x, \vec x) (\vec y - f_t(\vec x)), \end{equation} where now $t$ is a discrete time-step and $\eta$ is a learning rate.
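These discrete-time dynamics are easy to simulate directly; in the sketch below (the kernel matrix, targets, and learning rates are ours), the iteration converges for $\eta$ below $2/\lambda_{\max}$ and diverges above it.

```python
def kernel_gd(theta, y, eta, steps):
    """Discrete-time kernel gradient descent u_{t+1} = u_t + eta * Theta (y - u_t),
    started from u_0 = 0."""
    m = len(y)
    u = [0.0] * m
    for _ in range(steps):
        r = [yi - ui for yi, ui in zip(y, u)]
        u = [u[i] + eta * sum(theta[i][j] * r[j] for j in range(m))
             for i in range(m)]
    return u
```

For instance, $\Theta = \begin{pmatrix} 2 & 1 \\ 1 & 2 \end{pmatrix}$ has eigenvalues $3$ and $1$, so the iteration is stable for $\eta < 2/3$ and unstable for larger $\eta$.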
Consider the eigenvalue decomposition of the kernel: $\Theta(\vec x, \vec x) = \sum_{k=1}^m \lambda_k \vec v_k \vec v_k^T$, where $\lambda_1 \geq \ldots \geq \lambda_m \geq 0$, and $(\vec v_k)_{k=1}^m$ forms an orthonormal basis. Let us decompose our model's predictions as $f_t(\vec x) = \sum_{k=1}^m u_{t,k} \vec v_k$. Then the dynamics above decomposes as \begin{equation} u_{t+1,k} = u_{t,k} + \eta \lambda_k (\vec y^T \vec v_k - u_{t,k}). \end{equation} This gives \begin{equation} u_{t+1,k} - \vec y^T \vec v_k = (1 - \eta \lambda_k) (u_{t,k} - \vec y^T \vec v_k), \end{equation} and the solution is therefore \begin{equation} u_{t,k} = \vec y^T \vec v_k + (1 - \eta \lambda_k)^t (u_{0,k} - \vec y^T \vec v_k). \end{equation} The dynamics above converges as $t \to \infty$ for any $u_{0,k}$ if and only if $\eta < 2 / \lambda_k$. Since this should hold for all $k \in [m]$ and the maximal $\lambda$ is $\lambda_1$, we need to have $\eta < 2 / \lambda_1$. Therefore the $m$-th principal component converges at rate $\eta \lambda_m < 2 \lambda_m / \lambda_1$. $\kappa = \lambda_m / \lambda_1$ is our condition number (the reciprocal of the conventional definition). We see that a small condition number implies low trainability and thus, by the first hypothesis, low performance. Using a combination of two proxy-measures, the condition number and the number of linear regions (we do not discuss the latter here), \cite{chen2021neural} constructed a NAS method that provided state-of-the-art performance on NAS-Bench-201 \cite{dong2020bench}, while requiring much less time than most of the other methods. \cite{chen2021neural} tested their method on CIFAR10 and ImageNet as well. In both cases, their method demonstrated competitive performance while using orders of magnitude less time. \subsubsection{Matrix completion and image inpainting} In some cases, posing the problem as kernel regression allows for certain optimizations.
In particular, \cite{radhakrishnan2021simple} proposed approaching the problem of matrix completion by minimizing the following loss: \begin{equation} \mathcal{L}(\theta) = \sum_{(i,j) \in S} (Y_{ij} - \tr(f(Z;\theta) M^{(ij)T}))^2, \end{equation} where $S \subset [k] \times [d]$ is a set of coordinates of known entries of the target matrix $Y \in \mathbb{R}^{k \times d}$, $M^{(ij)} \in \mathbb{R}^{k \times d}$ has $1$ at position $(i,j)$ and $0$ elsewhere, $f(\cdot;\theta)$ is a neural network with parameters $\theta$, $n_0$ inputs and $k$ outputs, and $Z \in \mathbb{R}^{n_0 \times d}$ is an a priori given matrix. The model $f$ is applied to each column of $Z$ separately; therefore, $f(Z;\theta)$ is a $k \times d$ matrix. The above setup can be treated as a usual $l_2$ regression problem on a dataset $(Y_{ij}, M^{(ij)})_{(i,j) \in S}$. The corresponding empirical NTK is defined as $\hat K(M^{(ij)}, M^{(i'j')}) = \nabla^T_\theta \tr(f(Z;\theta) M^{(ij)T}) \nabla_\theta \tr(f(Z;\theta) M^{(i'j')T})$. Naturally, it does not depend on the target matrix entries $Y$, and since there is only a finite set of possible inputs $M^{(ij)}$ (namely, $k d$), the resulting $kd \times kd$ Gram matrix will be the same for all possible matrix completion problems with given target matrix dimensions. In other words, one can precompute the Gram matrix once and use it for all possible matrix completion problems of given dimensions. In contrast, the original neural network formulation would require training a new network for each dataset $(Y_{ij}, M^{(ij)})_{(i,j) \in S}$. When $f(\cdot;\theta)$ is given by a fully-connected network with $L$ layers, \cite{radhakrishnan2021simple} provide a closed-form formula for its limit NTK: $K(M^{(ij)}, M^{(i'j')}) = \kappa_L\left(z_{\cdot,j}^T z_{\cdot,j'}\right) 1_{i=i'}$, where $\kappa_L$ is given by a certain recurrence relation.
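A sketch of the resulting kernel regression, with a hypothetical stand-in $c(\cdot)$ in place of $\kappa_L$ (we do not reproduce the actual recurrence here): because of the $1_{i=i'}$ factor, the Gram matrix is block-diagonal over rows of $Y$, so each row can be fitted independently. All names below are ours.

```python
def gauss_solve(A, b):
    """Solve A z = b by Gauss-Jordan elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(n):
            if r != c and M[r][c] != 0.0:
                f = M[r][c] / M[c][c]
                M[r] = [a - f * bc for a, bc in zip(M[r], M[c])]
    return [M[i][n] / M[i][i] for i in range(n)]

def complete_matrix(kappa, Z, observed, k, d):
    """Kernel regression over mask inputs M^{(ij)} with the block kernel
    K(M^{(ij)}, M^{(i'j')}) = kappa(z_j . z_{j'}) 1_{i = i'}.
    Z: list of d feature-prior columns; observed: dict (i, j) -> Y_ij."""
    dot = lambda u, v: sum(a * b for a, b in zip(u, v))
    Y_hat = [[0.0] * d for _ in range(k)]
    for i in range(k):   # rows are orthogonal under this kernel: solve per row
        cols = [j for (r, j) in observed if r == i]
        y = [observed[(i, j)] for j in cols]
        K = [[kappa(dot(Z[a], Z[b])) for b in cols] for a in cols]
        alpha = gauss_solve(K, y)
        for j in range(d):
            Y_hat[i][j] = sum(alpha[a] * kappa(dot(Z[j], Z[cols[a]]))
                              for a in range(len(cols)))
    return Y_hat

kappa = lambda t: t + 1.0   # hypothetical stand-in for kappa_L
Z = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
observed = {(0, 0): 1.0, (0, 1): 2.0, (1, 2): 5.0}
Y_hat = complete_matrix(kappa, Z, observed, k=2, d=3)
```

When the per-row Gram matrix is invertible, observed entries are reproduced exactly and missing entries are filled in according to the column similarities encoded in $Z$.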
As we see, according to this kernel, elements of different rows of $Y$ are orthogonal (they do not affect each other), while the similarity of elements of the same row is given by a scalar product of the corresponding columns of $Z$. Therefore the columns of $Z$ encode a priori similarities between the columns of $Y$. The matrix $Z$ is called a feature-prior matrix. The ideal feature-prior matrix would be the target matrix $Y$ itself. Since one does not have access to it, \cite{radhakrishnan2021simple} suggest using the output $\hat Y$ of a separate matrix completion method instead. The resulting joint method performs better than the backbone one on popular collaborative filtering and virtual drug screening datasets. Image inpainting can be viewed as a special case of matrix completion. Apart from using the same Gram matrix for all problems of a given size, image inpainting with convolutional networks allows for one more optimization. When $f$ is a convolutional network, we pose the problem a bit differently than above. Suppose $f$ has $n_0$ input channels, $1$ output channel, and it maps an image to an image of the same size. Suppose $Z \in \mathbb{R}^{n_0 \times 2^p \times 2^q}$ is treated as a $2^p \times 2^q$ image with $n_0$ channels. This is in contrast to the previous considerations, where $Z$ was a matrix with columns treated as different inputs to a vector-valued model. Similar to the above, $Y \in \mathbb{R}^{2^p \times 2^q}$ is a target image, and $M^{(ij)}$ of the same size has $1$ at $(i,j)$ and zero elsewhere. Note that $f$ applied to the "image" $Z$ has a $2^p \times 2^q$ output and therefore its NTK $\Theta$ is a $2^p \times 2^q \times 2^p \times 2^q$ tensor. Suppose $f$ has no downsampling or upsampling layers. \cite{radhakrishnan2021simple} provide an exact formula for the corresponding limit NTK in terms of the limit NTK of the model $f$ in this case: $K(M^{(ij)}, M^{(i'j')}) = \Theta(Z,Z)_{i,j,i',j'}$.
Now suppose $f$ has $s$ downsampling and $s$ upsampling layers. Computing the Gram matrix for its NTK requires $O(2^{2p+2q})$ memory and $O(L 2^{2p+2q})$ time, where $L$ is the number of convolutions in $f$. This is already prohibitive for moderate-size images, i.e. when $p, q \approx 10$. \cite{radhakrishnan2021simple} propose a way to reconstruct the $2^p \times 2^q \times 2^p \times 2^q$ Gram matrix from a smaller Gram matrix of size $2^{2s+p+q}$. Moreover, this smaller Gram matrix requires computing the "usual" Gram matrices only for images of size $2^{s+1} \times 2^{s+1}$, which takes only $O(L 2^{4s})$ time. \subsubsection{Approximate integration with application to federated learning} Even in the case when the NTK Gram matrix can be computed and stored, the exact solution (\ref{eq:inf_wide_solution_square_loss}) requires inverting the kernel Gram matrix, which costs $O(m^3)$ when performed naively. Fortunately, mixing continuous-time and discrete-time formulations allows one to avoid computing the inverse explicitly. Denote $H_{t,ij} = \hat\Theta_t(x_i,x_j)$, $Z_{t,ik} = \partial_{\theta_i} f(x_k;\theta_t)$, and $u_{t,k} = f_t(x_k)$. Note that $H_t = Z_t^T Z_t$. Discrete-time weight evolution with learning rate $\eta$ is given by \begin{equation} \theta_{t+1} = \theta_t + \eta Z_t (\vec y - \vec u_t). \end{equation} Recall that assuming a stationary kernel $H_t = H_0$ is equivalent to assuming a stationary Jacobian $Z_t = Z_0$. With this assumption, the dynamics above is solved by \begin{equation} \theta_t = \theta_0 + \eta Z_0 \sum_{s=0}^{t-1} (\vec y - \vec u_s). \end{equation} Recall that integrating the continuous-time gradient descent dynamics under the assumption $H_t = H_0$ gives \begin{equation} \vec u_s = \vec y + e^{-\eta s H_0} (\vec u_0 - \vec y). \end{equation} Combining the two latter equations, we get the weights at any time-step $t$: \begin{equation} \theta_t = \theta_0 + \eta Z_0 \sum_{s=0}^{t-1} e^{-\eta s H_0} (\vec y - \vec u_0).
\end{equation} The continuous analogue of the above evolution is obtained by replacing the sum with an integral: \begin{equation} \theta_t = \theta_0 + \eta Z_0 \int_0^t e^{-\eta s H_0} (\vec y - \vec u_0) \, ds = \theta_0 + Z_0 H_0^{-1} \left(I - e^{-\eta t H_0}\right) (\vec y - \vec u_0). \end{equation} Here we get the inverse, as expected. Note that in this approach we do not assume the network to be infinitely wide; we only assume it to be linear in its weights. This allows us to reason in terms of the network weight vector $\theta_t$ instead of reasoning in terms of some abstract feature space associated with the kernel. This aspect gives us one additional advantage: we can integrate the dynamics up to some time $t_1$ and, since we know the weights $\theta_{t_1}$, compute $Z_{t_1}$ and $H_{t_1}$. We can then continue the integration with these updated matrices. This method lies in between the usual gradient descent training and kernel gradient descent with a constant kernel. The latter never updates the kernel, while the former updates the kernel at each timestep. In contrast, the method we discuss updates the kernel only at given timesteps. The approach under discussion requires computing and storing $Z$ of size $N \times m$, which is an obvious disadvantage. As a remedy, \cite{yue2021neural} propose splitting the job of computing $Z$ between several workers. A server joins the parts together, integrates the dynamics up to some timestep $t$, and sends $\theta_t$ to all of the workers, starting a new iteration. Tuning the timesteps of kernel updates may help balance the load between the server and the workers. The data used to compute $Z$ is never stored on the server, making this approach promising for federated learning. However, since the server may attempt reconstructing the data from $Z$, one has to ensure each worker's privacy cannot be compromised; see \cite{yue2021neural} for further details.
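A minimal numpy sketch of this piecewise-linearized integration (our own single-machine illustration, not the distributed implementation of \cite{yue2021neural}): on each segment we recompute the Jacobian $Z$ and the Gram matrix $H$, then jump to the end of the segment in closed form via the eigendecomposition of $H$, which gives $H^{-1}(I - e^{-\eta \Delta H})$ without an explicit inverse. A tiny two-layer tanh network stands in for a generic model.

```python
import numpy as np

def model_and_jac(theta, X, n):
    """f(x) = a^T tanh(W x) / sqrt(n); returns predictions u (m,)
    and the parameter Jacobian Z (N, m), N = n*n0 + n."""
    m, n0 = X.shape
    W = theta[: n * n0].reshape(n, n0)
    a = theta[n * n0:]
    Phi = np.tanh(X @ W.T)                               # (m, n) activations
    u = Phi @ a / np.sqrt(n)
    dW = ((1 - Phi ** 2) * a)[:, :, None] * X[:, None, :] / np.sqrt(n)
    return u, np.concatenate([dW.reshape(m, -1), Phi / np.sqrt(n)], axis=1).T

def train_piecewise(theta, X, y, n, eta=1.0, seg=2.0, n_segments=30):
    """Closed-form integration of the linearized dynamics on each segment;
    the Jacobian/kernel is refreshed only at segment boundaries."""
    for _ in range(n_segments):
        u, Z = model_and_jac(theta, X, n)
        lam, V = np.linalg.eigh(Z.T @ Z)                 # empirical NTK Gram
        c = eta * seg
        # H^{-1}(I - e^{-c H}) acts as (1 - e^{-c lam}) / lam on eigenvectors;
        # the limit of that ratio as lam -> 0 is c, so tiny modes are safe.
        g = np.where(lam > 1e-12, -np.expm1(-c * lam) / np.maximum(lam, 1e-12), c)
        theta = theta + Z @ (V @ (g * (V.T @ (y - u))))  # jump to t + seg
    return theta

rng = np.random.default_rng(0)
n, m = 50, 8
X = rng.normal(size=(m, 2))
y = np.sin(X[:, 0])
theta0 = rng.normal(size=n * 2 + n)
theta = train_piecewise(theta0, X, y, n)
```

After training, the residual $\|\vec y - f_t(\vec x)\|_2$ shrinks substantially relative to initialization; in the federated variant the per-worker columns of $Z$ would be computed remotely and only joined on the server.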
\subsection{Pathology analysis} \begin{figure}[t] \centering \subfigure[Ground truth]{\includegraphics[width=0.18\textwidth]{images/div2k_8_gt.jpeg}} \subfigure[No mapping]{\includegraphics[width=0.18\textwidth]{images/div2k_8_no_enc.jpeg}} \subfigure[Basic]{\includegraphics[width=0.18\textwidth]{images/div2k_8_basic.jpeg}} \subfigure[Positional enc.]{\includegraphics[width=0.18\textwidth]{images/div2k_8_posenc.jpeg}} \subfigure[Gaussian]{\includegraphics[width=0.18\textwidth]{images/div2k_8_rff.jpeg}} \caption{Images are borrowed from \cite{tancik2020fourier}.} \label{fig:image_regression} \end{figure} \begin{figure} \centering \subfigure{\includegraphics[width=0.2\textwidth]{images/dragon_crop_0.png}} \subfigure{\includegraphics[width=0.2\textwidth]{images/3D_MRI_no_encoding.png}} \subfigure{\includegraphics[width=0.2\textwidth]{images/nerf_no_encoding.png}} \\ \subfigure[3D shape regression]{\includegraphics[width=0.2\textwidth]{images/dragon_crop_1.png}} \subfigure[MRI reconstruction]{\includegraphics[width=0.2\textwidth]{images/3D_MRI_gaussian_encoding.png}} \subfigure[Inverse rendering]{\includegraphics[width=0.2\textwidth]{images/nerf_gaussian_18.png}} \caption{Images are borrowed from \cite{tancik2020fourier}.} \label{fig:low_dim_regression} \end{figure} While the empirical NTK of a neural network is not the same as its limit NTK, they may have certain properties in common. In particular, certain issues of a finite-width network may be reflected in issues of its limit NTK, and fixing these issues in the limit NTK may result in fixing them in the finite-width net. As an example where this approach has proven to work, consider image regression. In this task, input samples are image coordinates, $x \in [0,1]^d$ for $d=2$, and targets are pixel colors; we assume grey-scale images with $y \in [0,1]$. The task is therefore to regress the full image given a set of pixels. Let us consider applying a fully-connected network to this task.
As we have already observed in \cref{sec:limit_fc_nets}, the limit NTK $\Theta(x,x')$ of a fully-connected network depends only on $x^T x$, $x^{\prime,T} x'$, and $x^T x'$. All of these terms are rotation-invariant, hence the kernel itself is rotation-invariant. However, none of these terms is translation-invariant, hence the kernel cannot be translation-invariant (otherwise, it would have to be constant). Therefore it is quite unlikely that the empirical kernel will be invariant to translations. On the other hand, both translation and rotation invariance are desirable for a kernel used for image regression. Indeed, this means that applying these transformations to the train set of pixels results in the same image as without them, up to translation and rotation. In order to achieve this property, one may start working on translationally invariant embeddings of image coordinates. The simplest non-trivial embedding of this kind is $z(x) = [\cos(2\pi x), \sin(2\pi x)]^T$, where $\cos$ and $\sin$ are applied elementwise. Following \cite{tancik2020fourier}, we shall refer to it as "basic". Comparing (b) and (c) of Figure~\ref{fig:image_regression}, this indeed results in better perceived quality. However, the regressed image is still blurry: see Figure~\ref{fig:image_regression} (c). As we shall see shortly, NTK kernel regression learns low-frequency components of the image before its high-frequency ones. If we assume that the same property holds for the corresponding finite-width net, then achieving sharp images may be impossible for a given number of gradient steps. Recall the training dynamics of a kernel regression with kernel $\Theta$ trained to minimize square loss on a training dataset $(\vec x, \vec y)$: \begin{equation} \dot f_t(\vec x) = \Theta(\vec x, \vec x) (\vec y - f_t(\vec x)). \end{equation} $\Theta$ is a kernel, therefore its Gram matrix is positive-semidefinite.
Consider its eigenvalue decomposition: $\Theta(\vec x, \vec x) = \sum_{k=1}^m \lambda_k \vec v_k \vec v_k^T$, where $\lambda_1 \geq \ldots \geq \lambda_m \geq 0$, and $(\vec v_k)_{k=1}^m$ forms an orthonormal basis. Let us decompose our model's predictions as $f_t(\vec x) = \sum_{k=1}^m u_{t,k} \vec v_k$. Then the dynamics above decomposes as \begin{equation} \dot u_{t,k} = \lambda_k (\vec v_k^T \vec y - u_{t,k}), \end{equation} which is solved by \begin{equation} u_{t,k} = \vec v_k^T \vec y - e^{-\lambda_k t} (\vec v_k^T \vec y - u_{0,k}). \end{equation} As one clearly sees, the time required to learn the $k$-th principal component of the target is inversely proportional to its strength $\lambda_k$. In other words, strong components are learned before weak ones. The question is: what are the eigenvectors of the NTK Gram matrix? It is hard to answer this question in general since a Gram matrix depends on the dataset. However, for a kernel, there is an analogue of the eigenvalue decomposition called Mercer's representation. Let $X$ be a compact metric space and let $\mu$ be a sigma-additive measure on $X$ with $\supp \mu = X$. Suppose $K: \; X \times X \to \mathbb{R}$ is continuous, symmetric, and satisfies $\int_X \int_X K(x,x') f(x) f(x') \, d\mu(x) \, d\mu(x') < \infty$ $\forall f \in L^2_\mu(X)$. Define the integral operator $T_K: L^2_\mu(X) \to L^2_\mu(X)$ as $T_K[f](x) = \int_X K(x,x') f(x') \, d\mu(x')$. Then this operator admits an eigenvalue decomposition with eigenfunctions $(\psi_k)_{k=1}^\infty$ and corresponding eigenvalues $(\lambda_k)_{k=1}^\infty$, and the set of eigenfunctions forms an orthonormal basis in $L^2_\mu(X)$. Mercer's representation is the corresponding decomposition of the kernel: \begin{equation} K(x,x') = \sum_{k=1}^\infty \lambda_k \psi_k(x) \psi_k(x'). \end{equation} The series converges uniformly in $X \times X$. From the above, we have $\int_X \int_X K(x,x') \psi_k(x) \psi_k(x') \, d\mu(x) \, d\mu(x') = \lambda_k$ $\forall k \geq 1$.
Hence if $\vec x = (x_k)_{k=1}^m$ and $\vec x' = (x'_k)_{k=1}^m$ are sampled iid from $\mu$ then \begin{multline} \frac{1}{m^2} \psi_k^T(\vec x) K(\vec x, \vec x') \psi_k(\vec x') =\\= \frac{1}{m^2} \sum_{i,j=1}^m K(x_i, x'_j) \psi_k(x_i) \psi_k(x'_j) \to \int_X \int_X K(x,x') \psi_k(x) \psi_k(x') \, d\mu(x) \, d\mu(x') = \lambda_k \end{multline} a.s. as $m \to \infty$ by the Law of Large Numbers (LLN). Note that considering $\psi_k^T(\vec x) K(\vec x, \vec x) \psi_k(\vec x)$ instead of $\psi_k^T(\vec x) K(\vec x, \vec x') \psi_k(\vec x')$ may result in a different limit because the diagonal of $K$ is now calculated on two dependent arguments. Nevertheless, there are only $m$ elements on the diagonal, which results in an $O(m^{-1})$ error vanishing in the limit. Hence \begin{equation} \frac{1}{m^2} \psi_k^T(\vec x) K(\vec x, \vec x) \psi_k(\vec x) \to \lambda_k \end{equation} a.s. as $m \to \infty$. In other words, given $\vec x$ sampled iid from $\mu$, the vectors $(\psi_k(\vec x))_{k=1}^m$ are approximately eigenvectors of $K(\vec x, \vec x)$; since $\|\psi_k(\vec x)\|_2^2 \approx m$ by orthonormality of the eigenfunctions, the corresponding eigenvalues are approximately $(m \lambda_k)_{k=1}^m$. Recall that, as was noted above, the limit NTK of a fully-connected net $\Theta(z,z')$ depends only on $z^T z'$, $\|z\|_2$, and $\|z'\|_2$. Recall also that we have decided to embed inputs with $z(x) = [\cos(2\pi x), \sin(2\pi x)]^T$. This embedding maps $[0,1]^d$ onto a $d$-dimensional torus that lies inside a $(2d-1)$-dimensional sphere. In this case, our $\Theta(x,x') = \Theta(z(x),z(x'))$ depends only on $z^T(x) z(x')$. Kernels with this property are called zonal. Any zonal kernel $K: S^{p-1} \times S^{p-1} \to \mathbb{R}$ admits the following Mercer's decomposition with respect to the uniform measure on $S^{p-1}$: \begin{equation} K(z^T z') = \sum_{k=0}^\infty \lambda_k \sum_{j=1}^{N(p,k)} Y_{k,j}(z) Y_{k,j}(z'), \end{equation} where $Y_{k,j}$ are spherical harmonics and $N(p,k)$ is the dimension of the space of degree-$k$ spherical harmonics in $p$ ambient dimensions.
For $p=2$, this decomposition takes a simpler form: \begin{equation} K(z^T z') = \frac{1}{4\pi^2} + \frac{1}{\pi^2} \sum_{k=1}^\infty \lambda_k \cos(k \arccos(z^T z')). \label{eq:mercer_zonal_2d} \end{equation} As we see, large $k$'s correspond to high-frequency harmonics, while small $k$'s correspond to low-frequency ones. A recent result of \cite{chen2020deep} states that the NTK of a fully-connected net with inputs lying on $S^{p-1}$ has eigenvalues decaying as a power-law: $\lambda_k \sim k^{-p}$ as $k \to \infty$; see also \cite{geifman2020similarity} for an earlier result for shallow nets and \cite{bietti2019inductive} for an even earlier result for bias-free shallow nets. This means that learning the $k$-th harmonic of the input image requires $O(k^p)$ time. Hence for a finite number of training steps, high-frequency components remain unlearned, which results in blurry images similar to Figure~\ref{fig:image_regression} (c). A possible remedy would be to increase $\lambda_k$ for large $k$. But how can this be achieved? We illustrate the solution proposed in \cite{tancik2020fourier} in the following. Consider the case $d=1$ for simplicity. In this case, the embedding map $z(x) = [\cos(2\pi x), \sin(2\pi x)]^T$ traverses a circle. Consider a modified embedding $\tilde z(x) = [\cos(2\pi b x), \sin(2\pi b x)]^T$ instead, where $b \in \mathbb{N}$ is a tunable parameter. The corresponding kernel is then given as \begin{multline} K(\tilde z^T \tilde z') = \frac{1}{4\pi^2} + \frac{1}{\pi^2} \sum_{k=1}^\infty \lambda_k \cos(k \arccos(\tilde z^T \tilde z')) =\\= \frac{1}{4\pi^2} + \frac{1}{\pi^2} \sum_{k=1}^\infty \lambda_k \cos(2\pi k b (x-x')) = \frac{1}{4\pi^2} + \frac{1}{\pi^2} \sum_{k=1}^\infty \lambda_k \cos(k b \arccos(z^T z')), \end{multline} which means that $\lambda_k$ becomes the $kb$-th eigenvalue in the original embedding space.
If $\lambda_k$ decreased monotonically, this would mean that each $kb$-th eigenvalue increased from $\lambda_{kb}$ to $\lambda_k$, implying faster convergence to the $kb$-th principal component. The obvious downside of the method above is that in the new parameterization some of the eigenvalues become zero --- therefore the corresponding components are never learned. A simple solution is to enlarge the embedding: $\tilde z(x) = [\cos(2\pi \sigma^{j/M} x), \sin(2\pi \sigma^{j/M} x)]^T$ concatenated over $j = 0, \ldots, M-1$, where $M \in \mathbb{N}$ and $\sigma \in \mathbb{R}_+$ are tunable parameters; this is referred to as "positional encoding" in \cite{tancik2020fourier}. Another solution proposed by \cite{tancik2020fourier} is random Gaussian projections: $\tilde z(x) = [\cos(2\pi B x), \sin(2\pi B x)]^T$, where $B \in \mathbb{R}^{M \times d}$, each element of $B$ is sampled independently from $\mathcal{N}(0,\sigma^2)$, and $M$ and $\sigma$ are tunable parameters. Both solutions perform on par with each other and much better than the original embedding: compare (c), (d), and (e) in Figure~\ref{fig:image_regression}. The same method suits other low-dimensional regression problems as well; \cite{tancik2020fourier} provide examples of 3D shape regression, MRI reconstruction, and inverse rendering. See Figure~\ref{fig:low_dim_regression} for a comparison of outputs of a neural net with no encoding of inputs (top row) and the proposed Gaussian encoding (bottom row). One more notable example is Solid Isotropic Material with Penalisation (SIMP), an instance of topology optimization. The task here is to optimize over material density at $N$ points $y \in [0,1]^N$ to obtain a shape that can withstand forces applied at certain points. Given a density $y$ and a force vector $F$, the SIMP method constructs a stiffness matrix $K(y)$, and derives a displacement vector $U(y)$ by solving a linear system $K(y) U(y) = F$. The resulting construction is stable if the forces do not do any work, i.e. $U^T(y) F = 0$.
The density is therefore optimized to minimize the work $C(y) = U^T(y) F \to \min_y$ under a volume constraint $\sum_{i=1}^N y_i = V$; $C$ is usually called compliance. We can cast the constrained optimization problem as an unconstrained one by introducing a pre-density $x \in \mathbb{R}^N$ and constructing the density as $y_i = \sigma(x_i + b(x))$, where $b$ is a function that ensures the volume constraint. Denoting this operation as $y = \Sigma(x)$, we get a new unconstrained optimization problem in the space of pre-densities: $C(\Sigma(x)) \to \min_x$. While the above problem is not a regression problem, we can still model $x$ as outputs of a neural net at the corresponding grid points. However, lack of translation invariance results in implausible patterns. \cite{dupuis2021dnn} used an embedding scheme similar to that of \cite{tancik2020fourier} to control this issue. On the other hand, in contrast to \cite{tancik2020fourier}, \cite{dupuis2021dnn} used $\sin(\omega x)$ as activation instead of ReLU, and used $\omega$ together with the bias initialization variance to control the sharpness of output shapes, instead of modifying the embedding. Both methods aim to "widen" the spectrum of the limit NTK. \subsection{A theoretical tool} \label{sec:app_theory} Apart from providing a meaningful kernel for kernel methods, the NTK can be used as a concept for reasoning about neural nets of large width. Indeed, as stated in \cref{sec:convergence}, the NTK, while random and evolving, converges to a constant deterministic limit as width goes to infinity. One can hope that for large enough width, the NTK stays close to its limit with high probability. Therefore, any result valid for kernel regression with the NTK taken as a kernel may also become valid, with high probability, for a wide enough net.
\subsubsection{Global GD convergence} Let us start with the following result valid for kernel regression with a constant kernel: when the kernel is positive-definite, kernel regression learns the dataset. Indeed, recall the training dynamics of a kernel regression with kernel $\Theta$ trained to minimize square loss on a training dataset $(\vec x, \vec y)$: \begin{equation} \dot f_t(\vec x) = \Theta(\vec x, \vec x) (\vec y - f_t(\vec x)). \end{equation} Assuming $\Theta(\vec x, \vec x) \geq \lambda$, \begin{equation} \frac{d}{dt}\left(\frac{1}{2} \| \vec y - f_t(\vec x) \|_2^2\right) = -(\vec y - f_t(\vec x))^T \Theta(\vec x, \vec x) (\vec y - f_t(\vec x)) \leq -\lambda \| \vec y - f_t(\vec x) \|_2^2, \end{equation} which gives \begin{equation} \| \vec y - f_t(\vec x) \|_2^2 \leq e^{-2\lambda t} \| \vec y - f_0(\vec x) \|_2^2. \end{equation} Hence $\lambda > 0$ suffices to guarantee that $f_t(\vec x)$ converges to $\vec y$ as $t \to \infty$. Suppose now our kernel regression uses a random time-dependent kernel $\hat\Theta_t$ instead of $\Theta$: \begin{equation} \dot f_t(\vec x) = \hat\Theta_t(\vec x, \vec x) (\vec y - f_t(\vec x)). \end{equation} If we manage to guarantee that with probability $\geq 1-\delta$ $\forall t \geq 0$ $\hat\Theta_t(\vec x, \vec x) \geq \lambda$ then $\lambda > 0$ suffices to guarantee that $f_t(\vec x)$ converges to $\vec y$ as $t \to \infty$ with probability $\geq 1-\delta$. Indeed, \begin{equation} \frac{d}{dt}\left(\frac{1}{2} \| \vec y - f_t(\vec x) \|_2^2\right) = -(\vec y - f_t(\vec x))^T \hat\Theta_t(\vec x, \vec x) (\vec y - f_t(\vec x)) \leq -\lambda \| \vec y - f_t(\vec x) \|_2^2 \quad \text{w.p. $\geq 1-\delta$}, \end{equation} which gives \begin{equation} \| \vec y - f_t(\vec x) \|_2^2 \leq e^{-2 \lambda t} \| \vec y - f_0(\vec x) \|_2^2 \quad \text{w.p. $\geq 1-\delta$}. 
\end{equation} One of the first results of this kind concerns ReLU nets with one hidden layer under NTK parameterization: \begin{equation} f(x; a_{1:n}, w_{1:n}) = \frac{1}{\sqrt{n}} \sum_{i=1}^n a_i [w_i^T x]_+. \label{eq:two_layered_ReLU_net_ntk} \end{equation} We aim to minimize square loss on a dataset $(\vec x, \vec y)$ of size $m$ with gradient descent on the input weights: \begin{equation} \dot w_i(t) = \frac{1}{\sqrt{n}} \sum_{k=1}^m (y_k - f(x_k; a_{1:n}, w_{1:n}(t))) a_i [w_i^T(t) x_k > 0] x_k \quad \forall i \in [n]. \end{equation} We sample $w_i \sim \mathcal{N}(0,I_{n_0})$ and $a_i \in U(\{-1,1\})$ $\forall i \in [n]$ independently. The goal of sampling $a_i$ from this particular distribution is mere simplification: in this case $a_i^2 = 1$, which simplifies the NTK Gram matrix a little bit: \begin{equation} \hat\Theta_t(x_k, x_l) = \frac{1}{n} \sum_{i=1}^n [w_i^T(t) x_k > 0] [w_i^T(t) x_l > 0] x_k^T x_l. \end{equation} However, it is possible to apply the same technique to any distribution of the output layer not depending on $n$. Note that the Gram matrix depends merely on activation patterns of the hidden layer computed on the dataset. The limit NTK is therefore given as: \begin{equation} \Theta(x_k, x_l) = \mathbb{E}\,_{w \sim \mathcal{N}(0, I_{n_0})} [w^T x_k > 0] [w^T x_l > 0] x_k^T x_l. \end{equation} Note that in our two-layered case, $\Theta(x,x') = \lim_{n \to \infty} \hat\Theta_t(x,x') = \mathbb{E}\, \hat\Theta_0(x,x')$. In the sequel, we denote the Gram matrices $\hat\Theta_t(\vec x, \vec x)$ as $H(t)$ and $\Theta(\vec x, \vec x)$ as $H^\infty$. Let $\lambda_0$ to be the least eigenvalue of $H^\infty$. \begin{theorem}[\cite{du2018gradient}] Consider the setting discussed above and further assume $\|x_k\|_2 \geq 1$ and $|y_k| \leq 1$ $\forall k \in [m]$. 
Then $\exists C, C_0 > 0$ such that $\forall \delta \in (0,1)$ taking \begin{equation} n > \max\left( C \frac{m^6}{\lambda_0^4 \delta^3}, \; C_0 \frac{m^2}{\lambda_0^2} \log\left(\frac{2m}{\delta}\right) \right) \end{equation} guarantees $H(t) \geq \lambda_0/2$ $\forall t \geq 0$ w.p. $\geq 1-\delta$. \label{thm:convergence_2layer} \end{theorem} This result implies $\| \vec y - f_t(\vec x) \|_2^2 \leq e^{-\lambda_0 t} \| \vec y - f_0(\vec x) \|_2^2$ w.p. $\geq 1-\delta$, as discussed above. For the full proof, see the original paper \cite{du2018gradient} or lecture notes \cite{golikov2020notes}. In the sequel, we briefly discuss only the crucial parts of the proof. The proof is based on four lemmas. The first lemma states that as long as $n = \Omega(m^2 \lambda_0^{-2} \log(m/\delta))$, where $\Omega$ hides a certain constant, $\|H(0) - H^\infty\|_2 \leq \lambda_0/4$, where $\|\cdot\|_2$ denotes the spectral norm, w.p. $\geq 1-\delta$; this implies $H(0) \geq 3\lambda_0/4$ with the same probability. As already noted above, $\mathbb{E}\, H(0) = H^\infty$. This allows one to apply a concentration inequality to each element of $H(0)$. A union bound then gives a bound that holds uniformly for all elements of $H(0)$. This implies a bound on $\|H(0) - H^\infty\|_F$, hence on the spectral norm as well. The second lemma states that as long as $\forall i \in [n]$ $\|w_i - w_i(0)\|_2 \leq R$ for a certain $R = R(\delta,\lambda_0,m)$, $\| H - H(0) \|_2 \leq \lambda_0/4$ w.p. $\geq 1-\delta$. In other words, as long as the weights are close to initialization, the corresponding Gram matrix is close to the initial one too. The idea is that as long as the weights are not far from their initialization, with high probability, not many of the hidden neurons can alter their activation patterns on the train dataset.
Since, as already noted above, our Gram matrices depend only on activation patterns on the train dataset, this implies a tail bound on $|H_{kl} - H_{kl}(0)|$ $\forall k,l \in [m]$, which gives a tail bound on $\|H - H(0)\|_2$ with the same technique as used in the first lemma. The third lemma states that as long as $H(s) \geq \lambda_0/2$ $\forall s \in [0,t]$ (we haven't proven it yet), the weights indeed stay close to their initialization: $\forall i \in [n]$ $\|w_i(t) - w_i(0)\|_2 \leq R'$ for a certain $R' = R'(\lambda_0,m,n)$. This can be proven by a very simple estimate: \begin{multline} \left\|\frac{dw_i(s)}{ds}\right\|_2 = \left\|\frac{1}{\sqrt{n}} \sum_{k=1}^m (y_k - f_s(x_k)) a_i [w_i^T(s) x_k > 0] x_k\right\|_2 \leq \\\leq \frac{1}{\sqrt{n}} \sum_{k=1}^m |y_k - f_s(x_k)| \leq \sqrt{\frac{m}{n}} \|\vec y - f_s(\vec x)\|_2 \leq \sqrt{\frac{m}{n}} e^{-\lambda_0 s / 2} \|\vec y - f_0(\vec x)\|_2. \end{multline} This gives $\forall i \in [n]$: \begin{multline} \| w_i(t) - w_i(0) \|_2 = \left\|\int_0^t \frac{dw_i(s)}{ds} \, ds\right\|_2 \leq \int_0^t \left\|\frac{dw_i(s)}{ds}\right\|_2 \, ds \leq \\\leq \frac{2 \sqrt{m}}{\lambda_0 \sqrt{n}} \left(1 - e^{-\lambda_0 t / 2}\right) \|\vec y - f_0(\vec x)\|_2 \leq \frac{2 \sqrt{m}}{\lambda_0 \sqrt{n}} \|\vec y - f_0(\vec x)\|_2. \end{multline} Finally, the fourth lemma states that as long as $R' < R$, $\| H(t) - H(0) \|_2 \leq \lambda_0/4$ $\forall t \geq 0$ w.p. $\geq 1-\Omega(\delta)$ where $\Omega$ hides a certain constant. Combined with the first lemma, this implies $H(t) \geq \lambda_0/2$ $\forall t \geq 0$ w.p. $\geq 1-\Omega(\delta)$. The condition $R'(\lambda_0,m,n) < R(\delta,\lambda_0,m)$ gives the second lower bound on $n$ (the first one is given by the first lemma). Rescaling $\delta$, we get the desired result. The fourth lemma is proven as follows. Let $t_0$ be the first moment of time when the second lemma becomes no longer applicable, i.e.
$t_0 = \inf\left\{t \geq 0: \; \max_{i \in [n]} \| w_i(t) - w_i(0) \|_2 > R\right\}$. Assume it is finite. Since the weights are continuous functions of time, $\max_{i \in [n]} \| w_i(t_0) - w_i(0) \|_2 = R$. Hence the second lemma holds for $w_{1:n} = w_{1:n}(t)$ $\forall t \in [0,t_0]$ and $\| H(t) - H(0) \|_2 \leq \lambda_0/4$ w.p. $\geq 1-\delta$ $\forall t \in [0,t_0]$, therefore $H(t) \geq \lambda_0/2$ w.p. $\geq 1-\Omega(\delta)$ $\forall t \in [0,t_0]$. But then the third lemma holds as well: $\forall i \in [n]$ $\|w_i(t_0) - w_i(0)\|_2 \leq R' < R$; contradiction. Hence $\forall t \geq 0$ $\max_{i \in [n]} \| w_i(t) - w_i(0) \|_2 \leq R$ and the second lemma gives the desired statement. \cref{thm:convergence_2layer} requires the number of hidden units $n$ to grow as $m^6$ with the size of the train dataset and as $\delta^{-3}$ with the failure probability. This bound is way too loose for practical purposes: indeed, even very small datasets have $m \geq 100$, which results in a bound of order at least $10^8$. If we want the bound to be valid with at least $90\%$ probability, we pay three orders of magnitude more. Note that modern architectures designed to be trained on large datasets like ImageNet ($m=10^6$) have width barely exceeding $10^4$. We state one of the existing improvements of \cref{thm:convergence_2layer} below: \begin{theorem}[\cite{song2019quadratic}] Under the same setting as \cref{thm:convergence_2layer}, $\exists C, C_0 > 0$ such that $\forall \delta \in (0,1)$ taking \begin{equation} n > \max\left( C \frac{m^4}{\lambda_0^4} \log^3\left(\frac{m}{\delta}\right), \; C_0 \frac{m^2}{\lambda_0^2} \log\left(\frac{2m}{\delta}\right) \right) \end{equation} guarantees $H(t) \geq \lambda_0/2$ $\forall t \geq 0$ w.p. $\geq 1-\delta$. \label{thm:convergence_2layer_quartic} \end{theorem} This result decreases the exponent of $m$ from $6$ to $4$ and makes the $\delta$-dependence logarithmic. The proof follows the same path as above.
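The concentration of $H(0)$ around $H^\infty$ that drives the first lemma is easy to observe numerically. For unit-norm inputs, the limit kernel of this two-layer ReLU model has the closed form $\Theta(x, x') = x^T x' \, (\pi - \arccos(x^T x')) / (2\pi)$, since $\mathcal{P}(w^T x > 0, \, w^T x' > 0) = (\pi - \angle(x,x'))/(2\pi)$ for $w \sim \mathcal{N}(0, I)$. A numpy sketch (the sizes are ours, for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
m, n0 = 10, 5
X = rng.normal(size=(m, n0))
X /= np.linalg.norm(X, axis=1, keepdims=True)      # ||x_k||_2 = 1

# Limit NTK Gram matrix: E_w [w^T x > 0][w^T x' > 0] = (pi - angle) / (2 pi)
G = X @ X.T
H_inf = G * (np.pi - np.arccos(np.clip(G, -1.0, 1.0))) / (2 * np.pi)

def H0(n):
    """Empirical NTK Gram matrix at initialization for width n; it depends
    only on the activation patterns of the hidden layer."""
    W = rng.normal(size=(n, n0))
    A = (W @ X.T > 0).astype(float)                # (n, m) activation patterns
    return (A.T @ A / n) * G

# Spectral-norm deviation ||H(0) - H_inf||_2 shrinks roughly as 1/sqrt(n)
errs = {n: np.linalg.norm(H0(n) - H_inf, 2) for n in (100, 10_000)}
```

On this toy problem the deviation shrinks markedly as the width grows, in line with the $n = \Omega(m^2 \lambda_0^{-2} \log(m/\delta))$ requirement of the first lemma.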
Note however that the previous result aimed for elementwise tail bounds on $H(0) - H^\infty$ or $H - H(0)$, which lead to tail bounds on $\|H(0) - H^\infty\|_2$ and $\|H - H(0)\|_2$ via a union bound, which gives an $m^2$ factor. One of the improvements proposed by \cite{song2019quadratic} is to replace these elementwise bounds with matrix Chernoff bounds, which avoid this $m^2$ factor and thus lead to better bounds. The other improvement is to replace Markov inequalities that result in $1/\delta$ factors with a Bernstein inequality that results only in $\log(1/\delta)$ factors. The $m^4$ width bound is still far from being realistically tight. We are not aware of any further improvements of the results discussed above that apply the idea of NTK stability. Global gradient descent convergence can, however, be proved by first establishing guarantees on convergence to local minima and then proving that all minima are global for wide enough nets. See \cite{lee2016gradient,panageas2017gradient,mertikopoulos2020almost} for the first line of works and \cite{yu1995local,nguyen2017loss,nguyen2019connected,nguyen2021note} for the second. None of the works in either line uses the idea of NTK stability, nor do they rely on the NTK parameterization. \cite{nguyen2019connected} proves that $n = m$ is enough for leaky ReLU nets to have only global "local valleys" (a generalization of global minima to certain losses such as cross-entropy) and \cite{nguyen2021note} demonstrates that this bound cannot be improved for two-layered nets and general data. \cite{du2019gradient} extends \cref{thm:convergence_2layer} to deep nets. Their proof idea is the same: first show that $H(0)$ is close to $H^\infty$, then show that $H(t)$ stays close to $H(0)$. However, for the multilayer case, $H(0)$ cannot be proven to be close to $H^\infty$ just by concentration of measure. When the layers are many, perturbations caused by finite width result in deviations exponential with respect to the number of layers $L$.
For this reason, their bound grows exponentially with $L$. See also \cite{allen2019convergence} for a similar result with a bound depending on $m$ only polynomially, proved using a different technique. \subsubsection{Generalization guarantees} Stability of the NTK has another interesting consequence. Suppose the empirical NTK is constant, i.e. $\hat\Theta_t = \hat\Theta_0$. Equivalently, the corresponding model is linearized: \begin{equation} f(x; \theta) = f(x; \theta_0) + \nabla_\theta^T f(x; \theta_0) (\theta - \theta_0). \end{equation} For brevity, denote $\vec u_t = f_t(\vec x)$ and $Z_t^{ik} = \partial_{\theta_i} f(x_k; \theta_t)$. Hence $Z_t \in \mathbb{R}^{N \times m}$, where $N$ is the total number of parameters, and $\vec u_t = \vec u_0 + Z_0^T (\theta_t - \theta_0)$. Note that $H_t = Z_t^T Z_t$. Recall the train set predictions for a constant kernel: \begin{equation} \vec u_t = \vec y + e^{-H_0 t} (\vec u_0 - \vec y). \end{equation} In our linearized dynamics, the weights evolve as follows: \begin{equation} \dot\theta_t = Z_0 (\vec y - \vec u_t) = Z_0 e^{-H_0 t} (\vec y - \vec u_0). \end{equation} Straightforward integration gives: \begin{equation} \theta_t = \theta_0 + Z_0 H_0^{-1} \left(I - e^{-H_0 t}\right) (\vec y - \vec u_0). \end{equation} Recalling $H_0 = Z_0^T Z_0$, at the end of training ($t \to \infty$) we get \begin{equation} \|\theta_\infty - \theta_0\|_2^2 = (\theta_\infty - \theta_0)^T (\theta_\infty - \theta_0) = (\vec y - \vec u_0)^T H_0^{-1} (\vec y - \vec u_0). \end{equation} Define $\mathcal{F}_B^{w_{1:n}(0), a_{1:n}}$ as a set of models of the form (\ref{eq:two_layered_ReLU_net_ntk}) with output weights $a_{1:n}$ and input weights $w_{1:n}$ such that $\| W - W(0) \|_F \leq B$ for given $w_{1:n}(0)$. The above considerations show that the trained model always lies in $\mathcal{F}_B^{w_{1:n}(0), a_{1:n}}$ with $B = \sqrt{(\vec y - \vec u_0)^T H_0^{-1} (\vec y - \vec u_0)}$.
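For a model that is exactly linear in its parameters, the displayed identity for $\|\theta_\infty - \theta_0\|_2^2$ is easy to verify directly; a small numpy check, with a random matrix standing in for the Jacobian $Z_0$:

```python
import numpy as np

rng = np.random.default_rng(0)
N, m = 40, 6                      # number of parameters, training points
Z0 = rng.normal(size=(N, m))      # Jacobian: Z0[i, k] = d f(x_k) / d theta_i
theta0 = rng.normal(size=N)
y = rng.normal(size=m)

u0 = Z0.T @ theta0                # linear(ized) model: u = Z0^T theta
H0 = Z0.T @ Z0                    # NTK Gram matrix
# t -> infinity limit of the linearized gradient flow
theta_inf = theta0 + Z0 @ np.linalg.solve(H0, y - u0)

interpolates = np.allclose(Z0.T @ theta_inf, y)    # train set fit exactly
dist_sq = np.linalg.norm(theta_inf - theta0) ** 2
bound = (y - u0) @ np.linalg.solve(H0, y - u0)     # (y-u0)^T H0^{-1} (y-u0)
```

Both \texttt{interpolates} and \texttt{np.isclose(dist\_sq, bound)} hold, matching the derivation above: the parameter change $Z_0 H_0^{-1} (\vec y - \vec u_0)$ both interpolates the training data and has the stated norm.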
Hence our training procedure outputs models from a certain set rather than an arbitrary model of the form (\ref{eq:two_layered_ReLU_net_ntk}). Upper-bounding the Rademacher complexity of this model set will give us a generalization bound, as we shall see below. Let us upper-bound the Rademacher complexity conditioned on a dataset $(\vec x, \vec y)$ of size $m$: \begin{multline} \Rad{\mathcal{F}_B^{w_{1:n}(0), a_{1:n}}}{\vec x, \vec y} = \mathbb{E}\,_{\sigma_{1:m} \sim \{-1,1\}^m} \sup_{f \in \mathcal{F}_B^{w_{1:n}(0), a_{1:n}}} \left(\frac{1}{m} \sum_{k=1}^m \sigma_k u_k\right) = \\= \frac{1}{m} \mathbb{E}\,_{\sigma_{1:m} \sim \{-1,1\}^m} \sup_{\| W - W(0) \|_F \leq B} \left(\sum_{k=1}^m \sigma_k \frac{1}{\sqrt{n}} \sum_{i=1}^n a_i [w_i^T(0) x_k \geq 0] w_i^{T} x_k\right) = \\= \frac{1}{m} \mathbb{E}\,_{\sigma_{1:m} \sim \{-1,1\}^m} \sup_{\| W - W(0) \|_F \leq B} \left( \vec\sigma^T Z^{T}(0) \theta \right) = \\= \frac{1}{m} \mathbb{E}\,_{\sigma_{1:m} \sim \{-1,1\}^m} \sup_{\| W - W(0) \|_F \leq B} \left( \vec\sigma^T Z^{T}(0) (\theta - \theta_0) \right) = \\= \frac{B}{m} \mathbb{E}\,_{\sigma_{1:m} \sim \{-1,1\}^m} \| Z(0) \vec\sigma \|_2 \leq \frac{B}{m} \sqrt{\mathbb{E}\,_{\sigma_{1:m} \sim \{-1,1\}^m} \| Z(0) \vec\sigma \|_2^2} = \frac{B}{m} \| Z(0) \|_F. \end{multline} Note that \begin{equation} \| Z(0) \|_F^2 = \frac{1}{n} \sum_{i=1}^n \sum_{k=1}^m [w_i^T(0) x_k \geq 0]. \end{equation} It is an average of i.i.d. random variables, which allows for Hoeffding's inequality: \begin{equation} \mathcal{P}(\| Z(0) \|_F^2 - \frac{m}{2} \geq \epsilon) \leq e^{-2n \epsilon^2 / m^2}. \end{equation} This gives w.p. $\geq 1-\delta$ over initialization, \begin{equation} \| Z(0) \|_F^2 \leq \frac{m}{2} + \sqrt{\frac{m^2}{2n} \log\left(\frac{1}{\delta}\right)}. \end{equation} Finally, we obtain that w.p.
$\geq 1-\delta$ over initialization, \begin{equation} \Rad{\mathcal{F}_B^{w_{1:n}(0), a_{1:n}}}{(\vec x, \vec y)} \leq \frac{B}{\sqrt{m}} \sqrt{\frac{1}{2} + \sqrt{\frac{1}{2n} \log\left(\frac{1}{\delta}\right)}}. \end{equation} Consider the zero-one risk: $r(y,z) = [y z < 0]$; we have $R(f) = \mathbb{E}\,_{x,y \sim \mathcal{D}} r(y,f(x))$ and $\hat R_m(f) = \mathbb{E}\,_{x,y \in S_m} r(y,f(x))$, correspondingly. From generalization theory, we know that for any $B$ and for any initialization $w_{1:n}(0), a_{1:n}$, w.p. $\geq 1-\tilde\delta$ over the training dataset, $\forall f \in \mathcal{F}_B^{w_{1:n}(0), a_{1:n}}$, \begin{equation} R(f) \leq \hat R_m(f) + \mathbb{E}\,_{(\vec x, \vec y)} \Rad{\mathcal{F}_B^{w_{1:n}(0), a_{1:n}}}{(\vec x, \vec y)} + \sqrt{\frac{1}{2m} \log \frac{1}{\tilde\delta}}. \end{equation} We want to take $B = \sqrt{(\vec y - \vec u_0)^T H_0^{-1} (\vec y - \vec u_0)}$, but it depends on the dataset $(\vec x, \vec y)$. Take a sequence $\{B_j\}_{j=1}^\infty$ monotonically increasing to infinity and a sequence $\{\tilde\delta_j\}_{j=1}^\infty$ of deltas $\in (0,1)$ that sum to $\tilde\delta$. This allows us to apply a union bound: w.p. $\geq 1-\tilde\delta$ over the training dataset, for any initialization $w_{1:n}(0), a_{1:n}$, $\forall j \in \mathbb{N}$, $\forall f \in \mathcal{F}_{B_j}^{w_{1:n}(0), a_{1:n}}$, \begin{equation} R(f) \leq \hat R_m(f) + \mathbb{E}\,_{(\vec x, \vec y)} \Rad{\mathcal{F}_{B_j}^{w_{1:n}(0), a_{1:n}}}{(\vec x, \vec y)} + \sqrt{\frac{1}{2m} \log \frac{1}{\tilde\delta_j}}. \end{equation} We are free to choose the minimal $j$ such that $B_j \geq \sqrt{(\vec y - \vec u_0)^T H_0^{-1} (\vec y - \vec u_0)}$; denote it by $\hat j$. Let for definiteness $B_j = j$. Then $B_{\hat j} \leq 1 + \sqrt{(\vec y - \vec u_0)^T H_0^{-1} (\vec y - \vec u_0)}$. Putting it all together, we have w.p. $\geq 1-\tilde\delta$ over the training dataset, w.p.
$\geq 1-\delta$ over initialization, \begin{multline} R(f(\theta_\infty)) \leq \hat R_m(f(\theta_\infty)) + \\+ \frac{1 + \sqrt{(\vec y - \vec u_0)^T H_0^{-1} (\vec y - \vec u_0)}}{\sqrt{m}} \sqrt{\frac{1}{2} + \sqrt{\frac{1}{2n} \log\left(\frac{1}{\delta}\right)}} + \sqrt{\frac{1}{2m} \log \frac{1}{\tilde\delta_{\hat j}}}. \label{eq:generalization_bound_for_fixed_acts_model} \end{multline} Recall that the bound above was obtained under the assumption of a constant NTK. In order to relax this assumption, one has to show that, possibly for large enough width, $H_t^{-1}$ stays close to $H_0^{-1}$. Note that when proving global GD convergence we had to prove that $H_t$ stays close to $H_0$, which is a different statement. The required closeness result is proven in \cite{arora2019fine}; it leads to the following theorem: \begin{theorem}[\cite{arora2019fine}] Under the same setting as \cref{thm:convergence_2layer}, $\exists p, C, C_0 > 0$ such that $\forall \delta \in (0,1)$ taking \begin{equation} n > \max\left( C \frac{m^7}{\lambda_0^4 \delta^p}, \; C_0 \frac{m^2}{\lambda_0^2} \log\left(\frac{2m}{\delta}\right) \right) \end{equation} guarantees w.p. $\geq 1-\delta$ over the training dataset of size $m$ and w.p. $\geq 1-\delta$ over initialization, \begin{multline} R(f(\theta_\infty)) \leq \hat R_m(f(\theta_\infty)) + \\+ \frac{1 + \sqrt{(\vec y - \vec u_0)^T \left(H^{\infty}\right)^{-1} (\vec y - \vec u_0)}}{\sqrt{m}} \sqrt{\frac{1}{2} + \sqrt{\frac{1}{2n} \log\left(\frac{1}{\delta}\right)}} + \sqrt{\frac{1}{2m} \log \frac{1}{\delta}}. \end{multline} \label{thm:generalization_2layer} \end{theorem} \section{Standard parameterization and kernel evolution} \label{sec:standard_param} \begin{figure} \label{fig:kernel_velocity} \includegraphics[width=0.9\textwidth]{images/kernel_velocity.pdf} \caption{Kernel velocity and test error barrier between child networks as functions of the spawn epoch; the figure is borrowed from \cite{fort2020deep}.} \end{figure} As was noted in \cref{sec:convergence}, NTK diverges under standard parameterization.
Recall the example of a two-layered net: \begin{equation} f(x; a_{1:n}, w_{1:n}) = \sum_{i=1}^n a_i \phi(w_i x), \quad a_{1:n} \sim \mathcal{N}(0, n^{-1} I), \quad w_{1:n} \sim \mathcal{N}(0, I); \end{equation} \begin{equation} \hat\Theta_t(x,x') = \sum_{i=1}^n \left(\phi(w_i(t) x) \phi(w_i(t) x') + a_i^2(t) \phi'(w_i(t) x) \phi'(w_i(t) x') x x'\right). \end{equation} At $t=0$, since the $w_i$ are independent and of the order of $O(1)$, the sum diverges proportionally to $n$. Since under square loss, $\dot f_t(x) = \hat\Theta_t(x,\vec x) (\vec y - f_t(\vec x))$, the model prediction at any point $x$ receives an $O(n)$ increment at the very beginning of training. In other words, model predictions diverge with width, making the model useless for regression. However, if the goal is classification, the magnitude of predictions does not matter; what matters is their signs for binary classification, or the indices of the largest logits when classes are multiple. Therefore in this case, an infinite-width limit under standard parameterization may still make sense despite the divergent NTK, see \cite{golikov2020dynamically}. In order to deal with the divergence, consider a normalized empirical NTK $\tilde\Theta_t(x,x') = \hat\Theta_t(x,x') / n$; its infinite-width limit at initialization is $\mathbb{E}\,_{w \sim \mathcal{N}(0,1)} \phi(w x) \phi(w x')$; we shall refer to it as the normalized NTK and denote it by $\tilde\Theta(x,x')$. In contrast to the NTK under NTK parameterization, the normalized NTK under standard parameterization evolves with time \cite{golikov2020dynamically}: \begin{multline} \frac{d\tilde\Theta_t(x,x')}{dt} = \frac{1}{n} \sum_{i=1}^n \left(\phi(w_i(t) x) \phi'(w_i(t) x') x' + \phi'(w_i(t) x) \phi(w_i(t) x') x\right) \frac{dw_i(t)}{dt} +\\+ \frac{1}{n} \sum_{i=1}^n a_i^2(t) x x' \left(\phi'(w_i(t) x) \phi''(w_i(t) x') x' + \phi''(w_i(t) x) \phi'(w_i(t) x') x\right) \frac{dw_i(t)}{dt} +\\+ \frac{1}{n} \sum_{i=1}^n 2 a_i(t) \phi'(w_i(t) x) \phi'(w_i(t) x') x x' \frac{da_i(t)}{dt}.
\end{multline} Recall the gradient flow dynamics under standard parameterization (omitting the $O(1)$ residual factors $y_j - f_t(x_j)$ that do not affect the scaling with $n$): \begin{equation} \frac{da_k(t)}{dt} = \sum_{j=1}^m \phi(w_k(t) x_j), \quad \frac{dw_k(t)}{dt} = \sum_{j=1}^m a_k(t) \phi'(w_k(t) x_j) x_j. \end{equation} At $t=0$, we have $\dot a_k = O(1)$, while $\dot w_k = O(n^{-1/2})$. Since $a_k(0) = O(n^{-1/2})$ and $w_k(0) = O(1)$, it means that for any $t > 0$ independent of $n$, $a_k(t) = O(1)$, $\dot a_k(t) = O(1)$, $w_k(t) = O(1)$, and $\dot w_k(t) = O(1)$. A naive estimate of the sums then gives $\frac{d\tilde\Theta_t(x,x')}{dt} = O(1) + O(1) + O(1) = O(1)$ for any $t > 0$ independent of $n$. Therefore the normalized kernel keeps evolving with time even in the limit of infinite width. This can be the reason for the superior performance of neural networks over conventional kernel methods and the NTK. A kernel measures similarity between points in a feature space. While for NTK this feature space is fixed, a neural net varies its corresponding kernel feature space, hopefully making it better suited for the task at hand; moreover, under standard parameterization, this feature space evolution does not vanish for large width. The way an empirical NTK varies with time can be measured with kernel velocity, defined as the kernel distance between the kernels corresponding to two consecutive optimization steps. Kernel distance is in turn defined as one minus the cosine similarity between the Gram matrices $H$ and $H'$ of the corresponding kernels: \begin{equation} \rho(H, H') = 1 - \frac{\tr(H H^{\prime,T})}{\sqrt{\tr(H H^T) \tr(H' H^{\prime,T})}}. \end{equation} After measuring kernel velocity for a realistic net under standard parameterization, \cite{fort2020deep} distinguished two phases of training: a phase of rapid kernel evolution, and a phase of almost constant NTK, see Figure~\ref{fig:kernel_velocity}. The first phase is called \emph{chaotic}, while the second one is coined \emph{ordered}. Curiously enough, these two phases can be distinguished not only by kernel velocity.
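The kernel distance $\rho$ defined above takes only a few lines of code; the sketch below (function name is ours) also illustrates its two extreme values:

```python
import numpy as np

def kernel_distance(H, H_prime):
    """One minus cosine similarity between two kernel Gram matrices."""
    num = np.trace(H @ H_prime.T)
    den = np.sqrt(np.trace(H @ H.T) * np.trace(H_prime @ H_prime.T))
    return 1.0 - num / den

# kernel velocity = distance between Gram matrices at two consecutive steps
assert np.isclose(kernel_distance(np.eye(3), 2 * np.eye(3)), 0.0)  # proportional Gram matrices
H, H_prime = np.eye(2), np.array([[0.0, 1.0], [1.0, 0.0]])
assert np.isclose(kernel_distance(H, H_prime), 1.0)  # orthogonal Gram matrices
```

Note that $\rho$ is invariant to rescaling of either Gram matrix, so it measures a change in the "shape" of the kernel rather than in its magnitude.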
Suppose the network is trained up to time $T$, called the \emph{spawn epoch}. Two independent copies of the same network are then trained further. In other words, we train two networks which remain the same up to time $T$ and may diverge afterwards due to the randomness of the training procedure. We then measure the \emph{test error barrier} between these two networks, i.e. the height of the error "hill" on a straight segment between their corresponding weights. A small error barrier would mean that training of the two networks ended up in the same valley of test error, which likely means that they are similar. As one can see in Figure~\ref{fig:kernel_velocity}, the test error barrier drops dramatically as the spawn epoch grows. Also, the two quantities under discussion, kernel velocity and error barrier, appear to be strongly correlated, see again Figure~\ref{fig:kernel_velocity}. There are also other quantities that experience a sharp transition at the boundary between the two phases: kernel distance between child networks as a function of spawn epoch, ReLU activation Hamming distance, and Hamming distance between responses on the test set; see \cite{fort2020deep} for details. \section{Beyond NTK} \label{sec:beyond} While NTK kernel regression has a natural interpretation of training an infinitely wide neural network under a certain parameterization with gradient flow (see \cref{sec:convergence}), NTK is not the only possible kernel that can be constructed using a neural net. \subsection{NNGP kernel} One of the other notable "neural kernels" is the NNGP-kernel \cite{lee2018deep}, defined as $K(x,x') = \mathbb{E}\,_\theta f(x; \theta) f(x'; \theta)$, where $f(\cdot; \theta)$ is a parametric model with weights $\theta$ and scalar output. Suppose $f$ is a neural network with the output layer of the form $f(x) = v^T h(x)$, where $h(x) \in \mathbb{R}^n$ is its last layer representation and $v \sim \mathcal{N}(0, I_n / n)$ independent of $h$. Then $K(x,x') = \frac{1}{n} \mathbb{E}\, h^T(x) h(x')$.
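The defining expectation $K(x,x') = \mathbb{E}_\theta f(x;\theta) f(x';\theta)$ can always be estimated by Monte Carlo over random weights. For a single ReLU unit this estimate can be checked against a known closed form (the degree-one arc-cosine kernel of Cho \& Saul); the sketch below assumes unit-norm inputs, and all dimensions are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
d, n_samples = 3, 200_000

x = np.array([1.0, 0.0, 0.0])
x2 = np.array([0.6, 0.8, 0.0])  # unit vectors for simplicity

# Monte-Carlo estimate of K(x, x') = E_w relu(w.x) relu(w.x'), w ~ N(0, I)
W = rng.standard_normal((n_samples, d))
mc = np.mean(np.maximum(W @ x, 0) * np.maximum(W @ x2, 0))

# closed form for unit inputs: the degree-one arc-cosine kernel (Cho & Saul)
alpha = np.arccos(np.clip(x @ x2, -1, 1))
exact = (np.sin(alpha) + (np.pi - alpha) * np.cos(alpha)) / (2 * np.pi)

assert abs(mc - exact) < 0.02
```

For a full network, the same Monte-Carlo idea applies with $f$ drawn at random initializations, at the cost of many forward passes per pair of points.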
As we have seen in \cref{sec:limit} on the example of fully-connected and convolutional nets, the last layer representations tend to i.i.d. Gaussians as the width goes to infinity. In other words, $\forall i \in [n]$, $h^i$ tend to identical and independent Gaussian processes with covariance $\mathbb{E}\, h^i(x) h^i(x') = \frac{1}{n} \mathbb{E}\, h^T(x) h(x')$, which is exactly $K(x,x')$. This motivates the term "NNGP" --- \emph{Neural Network Gaussian Process}. Note that we have already seen the object $\mathbb{E}\, h^i(x) h^i(x')$ in \cref{sec:limit}: when $h = h_l$ --- the $l$-th layer hidden representation of a fully-connected network --- the above object is the hidden layer covariance $q_l(x,x')$. Therefore the NNGP of this fully-connected network is nothing else but $q_L(x,x')$. This can be generalized to the whole class of architectures expressible by tensor programs: see the Master theorem of \cite{yang2019tensor_i} mentioned in \cref{sec:convergence}. That is, any neuron of any hidden representation of a neural network expressible by a tensor program tends to a Gaussian process. Learning a Gaussian process with zero mean and covariance $K(\cdot,\cdot)$ on a training dataset $(\vec x, \vec y)$ means computing its Bayesian posterior, which is again a Gaussian process with mean $\mu(\cdot \,|\, (\vec x, \vec y))$ and covariance $K(\cdot,\cdot \,|\, (\vec x, \vec y))$ given below: \begin{equation} \mu(x \,|\, (\vec x, \vec y)) = K(x,\vec x) K^{-1}(\vec x, \vec x) \vec y; \end{equation} \begin{equation} K(x,x' \,|\, (\vec x, \vec y)) = K(x,x') - K(x,\vec x) K^{-1}(\vec x, \vec x) K(\vec x,x'). \end{equation} Interestingly, training the last layer of an infinitely wide network with NNGP $K(\cdot,\cdot)$ results in exactly the same Gaussian process. When only the last layer is trained, the NNGP coincides with the NTK. Indeed, an NTK-parameterized NN of width $n$ with readout weights $v$ can be expressed as $f(x) = \frac{1}{\sqrt{n}} v^T h(x)$ with $v \sim \mathcal{N}(0, I_n)$.
The empirical NTK is therefore given by $\hat\Theta_0(x,x') = \frac{1}{n} \nabla^T_v (v^T h(x)) \nabla_v (v^T h(x')) = \frac{1}{n} h^T(x) h(x')$, which converges to $\mathbb{E}\, h^i(x) h^i(x') = K(x,x')$ as $n \to \infty$; note that $h(\cdot)$ also depends on $n$. Recall the model prediction dynamics under constant NTK which is $K$ in our case: \begin{equation} f_t(x) = f_0(x) - K(x,\vec x) K^{-1}(\vec x,\vec x) \left(I - e^{-K(\vec x,\vec x) t}\right) (f_0(\vec x) - \vec y). \end{equation} Since $f_0(\cdot)$ is a Gaussian process as discussed before and $K(\vec x,\vec x)$ is deterministic, $f_t(\cdot)$ is a Gaussian process for any $t \geq 0$. Its mean $\mu_t(\cdot)$ and covariance $K_t(\cdot,\cdot)$ are: \begin{equation} \mu_t^{NNGP}(x) = K(x,\vec x) K^{-1}(\vec x,\vec x) \left(I - e^{-K(\vec x,\vec x) t}\right) \vec y; \end{equation} \begin{multline} K_t^{NNGP}(x,x') = K(x,x') +\\+ K(x,\vec x) K^{-1}(\vec x,\vec x) \left(I - e^{-K(\vec x,\vec x) t}\right) K(\vec x,\vec x) \left(I - e^{-K(\vec x,\vec x) t}\right) K^{-1}(\vec x,\vec x) K(\vec x,x') -\\- \left[K(x,\vec x) K^{-1}(\vec x,\vec x) \left(I - e^{-K(\vec x,\vec x) t}\right) K(\vec x,x') + K(x',\vec x) K^{-1}(\vec x,\vec x) \left(I - e^{-K(\vec x,\vec x) t}\right) K(\vec x,x)\right]. \end{multline} It is easy to see that $\mu_t^{NNGP}(x) \to \mu(x \,|\, (\vec x, \vec y))$ and $K_t^{NNGP}(x,x') \to K(x,x' \,|\, (\vec x, \vec y))$ as $t \to \infty$ $\forall x,x'$. If not only the last layer is trained, NNGP does not generally correspond to NTK. The corresponding training dynamics is given by \begin{equation} f_t(x) = f_0(x) - \Theta(x,\vec x) \Theta^{-1}(\vec x,\vec x) \left(I - e^{-\Theta(\vec x,\vec x) t}\right) (f_0(\vec x) - \vec y). \end{equation} While $f_t(\cdot)$ is again a Gaussian process for any $t \geq 0$, its mean and covariance are different. 
In particular, as $t \to \infty$, they tend to \begin{equation} \mu_\infty^{NTK}(x) = \Theta(x,\vec x) \Theta^{-1}(\vec x,\vec x) \vec y; \end{equation} \begin{multline} K_\infty^{NTK}(x,x') = K(x,x') + \Theta(x,\vec x) \Theta^{-1}(\vec x,\vec x) K(\vec x,\vec x) \Theta^{-1}(\vec x,\vec x) \Theta(\vec x,x') -\\- \left[\Theta(x,\vec x) \Theta^{-1}(\vec x,\vec x) K(\vec x,x') + \Theta(x',\vec x) \Theta^{-1}(\vec x,\vec x) K(\vec x,x)\right]. \end{multline} As was shown in \cite{lee2019wide}, there does not exist an initial covariance matrix (a "prior") such that this mean and covariance correspond to a Bayesian posterior given the training data. The "empirical" counterpart of NNGPs is $\hat K(x,x') = \frac{1}{n} h^T(x) h(x')$. Compared to empirical NTKs, empirical NNGPs are easier to compute as they do not require a backward pass. The corresponding memory footprint is also lower for empirical NNGPs, as they do not require computing Jacobian matrices that scale as $O(N)$, where $N$ is the number of weights. This makes NNGPs more suitable for large models. As an example, \cite{park2020towards} used the performance of empirical NNGPs as a proxy measure for neural architecture search. They argue that, first, empirical NTKs are too costly to compute, and second, they provide a worse learning signal for their task. The NNGP of a generic neural network can be computed in a recursive manner, as was demonstrated in \cref{sec:limit} on the example of fully-connected and convolutional nets: $q_{l+1}(x,x') = \mathbb{E}\,_{[z,z']^T \sim \mathcal{N}(0,\Sigma_l(x,x'))} \phi(z) \phi(z')$, where $\Sigma_l(x,x') = \begin{pmatrix} q_l(x,x) & q_l(x,x') \\ q_l(x',x) & q_l(x',x') \end{pmatrix}$; the Master theorem of \cite{yang2019tensor_i} gives similar formulas for a generic neural net. In the above example, there is an operation that maps a kernel $q_l(x,x')$ to a subsequent kernel $q_{l+1}(x,x')$. \cite{shankar2020neural} presents an algebra of operations on kernels.
While this algebra consists of operations of only three types, it is enough to express the NNGP of a fully-connected or a convolutional network with any elementwise nonlinearities. \subsection{Label-aware NTK} One of the major problems of kernel methods is \emph{label agnosticism.} Recall that a kernel evaluated at a pair of points is a scalar product of their mappings to some feature space: $K(x,x') = \langle \Phi(x), \Phi(x') \rangle$. Therefore a kernel measures how similar the two points are, and a kernel method uses this information to derive responses on unseen data: $f(x) = K(x,\vec x) \vec\alpha$. Intuitively, a kernel $K$ should result in a well-generalizing model if $K(x,x')$ is positive when $y=y'$ and negative otherwise. Therefore the "perfect" kernel would be $K^*(x,x') = y y'$; the obvious problem is that it cannot be computed on unseen data. A kernel that can be computed on unseen data cannot depend on labels. Therefore, if the data admits several possible labelings, then for a pair of data points $(x,x')$, there could be a labeling with $y=y'$ and a labeling with $y\neq y'$. At the same time, $K(x,x')$ stays the same in both cases; therefore, the corresponding kernel method cannot generalize well under both labelings. As an example of several possible labelings of a single dataset, consider a dataset of pictures with two objects in each frame, and let the two objects belong to two disjoint sets of classes. Then one of the labelings may consider only the objects from the first set of classes, while the other may consider the objects from the second set. \cite{chen2020label} propose two ways of making a kernel \emph{label-aware.} The first is mixing the kernel at hand with the perfect kernel $K^*(x,x') = y y'$: $K^{HR}(x,x') = (1-\lambda) K(x,x') + \lambda K^*(x,x')$ for $\lambda \in [0,1]$. If the perfect kernel were available, the best choice would be to take $\lambda=1$.
Since it is not available, we have to approximate it somehow, which makes the optimal $\lambda$ less than one. In order to approximate $K^*(x,x')$, we need a model that maps $(x,x')$ to $y y'$. Since the training dataset for this model consists of $O(m^2)$ samples, and since the model itself has to be evaluated on $O(m)$ samples for each test point $x$, the model has to be relatively simple. \cite{chen2020label} consider models of the form $Z(x,x') = \vec y^T M(x,x',\vec x) \vec y$, where $M \in \mathbb{R}^{m \times m}$. One of the possible choices of $M$ is $M(x,x',\vec x)_{ij} = \psi(K(x,x'),K(x_i,x_j))$, where $\psi(z_1,z_2)$ measures similarity between its arguments. As one can see, this choice of $Z$ takes a linear combination of $y_i y_j$ with weights given by the similarities of $K(x,x')$ and $K(x_i,x_j)$. Intuitively, this reads as "$y y'$ and $y_i y_j$ are similar if $K(x,x')$ and $K(x_i,x_j)$ are close". While the above proposal can be applied to any kernel $K$, the second label-aware kernel of \cite{chen2020label} is a specific modification of NTK.
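Both constructions above can be sketched in a few lines. In the snippet below, the Gaussian similarity $\psi$, the normalization of $M$, and all names are our own illustrative choices, not the exact ones used by \cite{chen2020label}:

```python
import numpy as np

def k_hr(K, K_star, lam):
    """Mix a label-agnostic kernel with the 'perfect' kernel y y^T (train pairs only)."""
    return (1 - lam) * K + lam * K_star

def z_estimate(k_val, K_train, y, bandwidth=1.0):
    """Estimate y*y' at a test pair from k_val = K(x, x'):
    a similarity-weighted combination of the train products y_i y_j."""
    psi = np.exp(-(k_val - K_train) ** 2 / bandwidth)  # illustrative choice of psi
    M = psi / psi.sum()                                # normalize the weights
    return y @ M @ y

y = np.array([1.0, -1.0, 1.0])
K_train = 0.9 * np.outer(y, y)  # a train Gram matrix roughly aligned with the labels
z_hat = z_estimate(0.9, K_train, y)  # positive: train pairs with K = 0.9 have y_i y_j = +1
```

The estimate is positive here because the dominant weights fall on train pairs whose kernel value matches $K(x,x') = 0.9$, and those pairs all have $y_i y_j = +1$.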
Let us recall the construction of $\Theta^{NTH}$ resulted from integrating the learning dynamics up to the order $n^{-1}$, taking the limit of $t \to \infty$, and taking expectation (see \cref{sec:finite_width} and specifically Eq.~(\ref{eq:lantk_nth})): \begin{multline} \Theta^{NTH}(x_1,x_2) = O_{2,0}^{(0)}(x_1,x_2) + n^{-1} \mathbb{E}\, O_{2,\infty}^{(1)}(x_1,x_2) =\\= \Theta(x_1,x_2) + n^{-1} \mathbb{E}\,\left[O_{2,0}^{(1)}(x_1,x_2)\right] - n^{-1} \mathbb{E}\,\left[O_{3,0}^{(1)}(x_1, x_2, \vec x) \Theta^{-1}(\vec x,\vec x) f_0^{(0)}(\vec x)\right] +\\+ n^{-1} \vec y^T \Theta^{-1}(\vec x,\vec x) \mathbb{E}\,\left[O_{4,0}^{(1)}(x_1, x_2, \vec x, \vec x)\right] \Theta^{-1}(\vec x,\vec x) \vec y +\\+ n^{-1} \mathbb{E}\,\left[f_0^{(0),T}(\vec x) \Theta^{-1}(\vec x,\vec x) O_{4,0}^{(1)}(x_1, x_2, \vec x, \vec x) \Theta^{-1}(\vec x,\vec x) f_0^{(0)}(\vec x)\right] -\\- n^{-1} \sum_{k,l=1}^m \frac{1}{\lambda_k (\lambda_k+\lambda_l)} \vec y^T \vec v_k \vec v_k^T \mathbb{E}\,\left[O_{4,0}^{(1)}(x_1, x_2, \vec x, \vec x)\right] \vec v_l \vec v_l^T \vec y -\\- n^{-1} \sum_{k,l=1}^m \frac{1}{\lambda_k (\lambda_k+\lambda_l)} \mathbb{E}\,\left[f_0^{(0),T}(\vec x) \vec v_k \vec v_k^T O_{4,0}^{(1)}(x_1, x_2, \vec x, \vec x) \vec v_l \vec v_l^T f_0^{(0)}(\vec x)\right]. \end{multline} Since $\hat\Theta_0(x_1,x_2) = O_{2,0}^{(0)}(x_1,x_2) + n^{-1} O_{2,0}^{(1)}(x_1,x_2) + O(n^{-2})$, we have $\Theta(x_1,x_2) + n^{-1} \mathbb{E}\,\left[O_{2,0}^{(1)}(x_1,x_2)\right] = \mathbb{E}\,\hat\Theta_0(x_1,x_2) + O(n^{-2})$ and $\Theta(x_1,x_2) = \mathbb{E}\,\hat\Theta_0(x_1,x_2) + O(n^{-1})$. For the same reason, $\mathbb{E}\,\left[O_{4,0}(x_1, x_2, x_3, x_4)\right] = n^{-1} \mathbb{E}\,\left[O_{4,0}^{(1)}(x_1, x_2, x_3, x_4)\right] + O(n^{-2})$. Suppose $f_0^{(0)}(\vec x) = 0$. 
Given this approximation, up to order $O(n^{-2})$, \begin{multline} \Theta^{NTH}(x_1,x_2) \approx \mathbb{E}\,\hat\Theta_0(x_1,x_2) + \vec y^T \left(\mathbb{E}\,\hat\Theta_0(\vec x,\vec x)\right)^{-1} \mathbb{E}\,\left[O_{4,0}(x_1, x_2, \vec x, \vec x)\right] \left(\mathbb{E}\,\hat\Theta_0(\vec x,\vec x)\right)^{-1} \vec y -\\- \sum_{k,l=1}^m \frac{1}{\lambda_k (\lambda_k+\lambda_l)} \vec y^T \vec v_k \vec v_k^T \mathbb{E}\,\left[O_{4,0}(x_1, x_2, \vec x, \vec x)\right] \vec v_l \vec v_l^T \vec y. \end{multline} As one can see, $\Theta^{NTH}(x_1,x_2)$ depends on the train labels $\vec y$. Roughly speaking, this kernel corresponds to the NTK of a network trained until convergence ($t\to\infty$); obviously, such a kernel should depend on the training data. As an interesting observation, $\Theta^{NTH}(x_1,x_2) = \mathbb{E}\,\hat\Theta_0(x_1,x_2) + \vec y^T M(x_1,x_2,\vec x) \vec y$ for a certain matrix $M$ --- recall that $K^{HR}(x_1,x_2)$ considered previously has a similar form. Note that computing the Gram matrix $\Theta^{NTH}(\vec x, \vec x)$ requires computing the Gram "matrix" of the expected 4-th order empirical kernel $\mathbb{E}\,\left[O_{4,0}(\vec x, \vec x, \vec x, \vec x)\right]$. Instantiating this tensor requires $O(m^4)$ time and $O(m^4)$ memory, which is only feasible for very small datasets. \section{Limits of applicability} \label{sec:experiments} \begin{figure} \centering \includegraphics[width=0.9\textwidth]{images/experiments/myrtle_avg.png} \caption{ \label{fig:myrtle} Myrtle architecture. } \end{figure} \begin{figure} \centering \includegraphics[width=0.49\textwidth]{images/experiments/NTK_vs_ENTK_vs_NNGP_CIFAR2_Myrtle_avg_BCE_upto1e4samples.png} \includegraphics[width=0.49\textwidth]{images/experiments/myrtle_avg_flops_CIFAR2.png} \caption{ \label{fig:myrtle_bce_cifar2_time_flops} Myrtle network trained on subsets of CIFAR2 of different sizes. Different lines refer to different regimes of training (e.g. NTK, NNGP etc.)
and different stages of training (e.g. constructing the kernel, integrating the dynamics etc.). We use BCE loss, and integrate the dynamics numerically for $T=10^4$ steps. We measure training time and the number of FLOPS. } \end{figure} \begin{figure} \centering \includegraphics[width=0.49\textwidth]{images/experiments/NTK_vs_ENTK_vs_NNGP_CIFAR10_Myrtle_avg_BCE.png} \includegraphics[width=0.49\textwidth]{images/experiments/myrtle_avg_BCE_acc_CIFAR10.png} \caption{ \label{fig:myrtle_bce_cifar10_time_accuracy} Myrtle network trained on subsets of CIFAR10 of different sizes. Different lines refer to different regimes of training (e.g. NTK, NNGP etc.) and different stages of training (e.g. constructing the kernel, integrating the dynamics etc.). We use cross-entropy loss, and integrate the dynamics numerically for $T=10^4$ steps. We measure training time and accuracy. } \end{figure} \begin{figure} \centering \includegraphics[width=0.49\textwidth]{images/experiments/NTK_vs_ENTK_vs_NNGP_CIFAR10_Resnet50_NLL.png} \includegraphics[width=0.49\textwidth]{images/experiments/Resnet50_NLL_acc_CIFAR10.png} \caption{ \label{fig:resnet50_nll_cifar10_time_accuracy} Resnet50 trained on subsets of CIFAR10 of different sizes. Different lines refer to different regimes of training (e.g. NTK, NNGP etc.) and different stages of training (e.g. constructing the kernel, integrating the dynamics etc.). We use cross-entropy loss, and integrate the dynamics numerically for $T=10^4$ steps. We measure training time and accuracy. } \end{figure} \begin{figure} \centering \includegraphics[width=0.49\textwidth]{images/experiments/ntk_myrtle_avg_time_vs_input_size_STL2_upto96x96.png} \includegraphics[width=0.49\textwidth]{images/experiments/ntk_myrtle_avg_acc_vs_input_size_STL2_upto96x96.png} \caption{ \label{fig:myrtle_bce_stl2_time_accuracy} Myrtle network trained on a subset of STL2 of size 500 with images of different resolutions. Different lines refer to different regimes of training (e.g.
NTK, NNGP etc.) and different stages of training (e.g. constructing the kernel, integrating the dynamics etc.). We use BCE loss, and integrate the dynamics numerically for $T=10^4$ steps. We measure training time and accuracy. } \end{figure} In this section, we present a small experimental study of the scope of applicability of NTK regression in realistic scenarios. In particular, we would like to investigate, first, the maximal size $m$ of a training dataset of images of a given size that we can afford with limited computational resources, and second, the maximal image resolution $d$ we can afford given a fixed dataset size. We restrict ourselves to these two questions since, for practical purposes, the dependence of NTK regression complexity on these two parameters is the most worrying: it is $O(m^2 d^4)$ for constructing the Gram matrix, $O(m^3)$ for integrating the dynamics analytically, and $O(m^2 T)$ for integrating the dynamics numerically for $T$ steps; see \cref{sec:computations}. We use NeuralTangents \cite{novak2019neural} and perform all our experiments on a single GTX 1080Ti GPU with 11 GiB of memory. We consider a Myrtle network\footnote{\url{https://myrtle.ai/how-to-train-your-resnet-4-architecture/}} with 64 channels in all convolutional layers, see \cref{fig:myrtle}. We pick this architecture because it is lightweight and uses only those layers for which NTK can be computed analytically. For the first experiment, we consider two classes of CIFAR10 and refer to this dataset as CIFAR2. We pick a subset of 1000 samples of the original test set of CIFAR2 and vary the size of the training subset. We optimize binary cross-entropy (BCE) and integrate the dynamics numerically for $T = 10^4$ steps. We compute the Gram matrix of a kernel using batch size 4. On \cref{fig:myrtle_bce_cifar2_time_flops}, we plot training time and the number of floating-point operations (FLOPS) for different stages (i.e.
Gram matrix computation, integrating the dynamics, inference on a test set) and for different regimes of training (analytical NTK, analytical NNGP, and empirical NTK) versus the size of the training dataset. As one can see, already for relatively small datasets ($m=10^4$), the most time-demanding stage is the construction of the Gram matrix $\Theta(\vec x, \vec x)$ (solid line), and not integration, which also takes time quadratic in the size of the dataset (dotted line). Also, the time to compute the NNGP kernel is almost the same as the one for NTK, since both are computed analytically; see \cref{sec:limit}. We could not obtain the point $m=10^4$ for the empirical NTK (ENTK) for numerical reasons. If we extrapolate the solid line to $m=10^6$, the size of ImageNet, noting the quadratic growth, we will get $5 \times 10^9$ seconds, which is around 160 years of computations. While our time measurements are device-dependent, we also measure the number of FLOPS, which, while device-independent, grows the same way as time and is also quite large. This experiment demonstrates that indeed, the naive approach to integrating the NTK dynamics falls short on datasets of realistic sizes, thus calling for major optimizations. As mentioned in \cref{sec:computations}, a promising approach could be the one of \cite{meanti2020kernel}. On \cref{fig:myrtle_bce_cifar10_time_accuracy}, we present the same experiment but with all 10 classes of CIFAR10. We observe the same quadratic time growth issue for all three regimes of training (analytical NTK, analytical NNGP, and empirical NTK). We also report accuracy for comparison with previous works on small-data training with kernel methods (e.g. \cite{arora2019harnessing}). In addition to experiments with a small network, we experimented with a variant of Resnet50 \cite{he2016deep}. We modify this architecture by removing batch normalizations and substituting max poolings with average poolings, so as to make analytical computations possible.
Results are shown on \cref{fig:resnet50_nll_cifar10_time_accuracy}. Doing the same extrapolation to ImageNet size, we get $6.25 \times 10^{11}$ seconds, which is around $20000$ years. Lastly, we consider two classes of STL10 and, similarly to CIFAR2, refer to this dataset as STL2. We pick a subset of 100 samples of the original test set of STL2 and 500 samples of its original train set. While STL10 has fewer labeled examples compared to CIFAR10, it has larger images: $96 \times 96$ for STL10 versus $32 \times 32$ for CIFAR10. We vary the size of the input image and measure training time and accuracy, similarly to the first experiment. As before, we optimize binary cross-entropy (BCE) and integrate the dynamics numerically for $T = 10^4$ steps. However, we use batch size 1 for computing the Gram matrix, since larger batch sizes do not fit in GPU memory for large image sizes. Results are shown on \cref{fig:myrtle_bce_stl2_time_accuracy}. As before, the most time-demanding part is the kernel Gram matrix computation (blue line): it grows as $O(d^4)$, where $d$ is the image resolution; see \cref{sec:limit}. If we extrapolate this line to $d=224$, the resolution on which traditional ImageNet classification models operate, we will get around 150 days of computations. This experiment therefore demonstrates that not only the dataset size but also the image resolution can be a serious bottleneck in applying the NTK approach in practice. Also, while for the dataset size certain optimizations are available (e.g. \cite{meanti2020kernel}), we are not aware of any optimizations aimed at reducing the complexity with respect to image resolution. \section{Conclusions} The use of NTK theory is twofold: first, it relates neural networks to kernel methods, a far better-developed class of models. Second, it gives a machine learning practitioner a kernel that shares some properties with neural nets. Recall what we have concerning the first application.
We have a theorem (\cref{thm:master_theorem}) implying that the neural tangent kernel of a wide class of architectures is deterministic and does not evolve with time in the limit of infinite width, and providing a recurrent formula for the limit. Therefore a network that is wide enough should share some properties, namely convergence and generalization (see \cref{sec:app_theory}), with the corresponding kernel method. However, the resulting width bounds are far from realistic. Second, the limit kernel does not evolve with time only under a certain non-standard parameterization rarely used in practice. In contrast, the standard parameterization results in an evolving (normalized) kernel; see \cref{sec:standard_param}. The fact that the kernel evolves may be the key to understanding the superior performance of neural nets over kernel methods. Unfortunately, we have little understanding of this aspect at the moment. Lastly, \cref{thm:master_theorem} requires Gaussian weight initialization, which is rarely used in practice. Generalizing it to non-Gaussian weight distributions remains future work. Let us discuss the second application. At the moment of writing, computing the exact limit kernel is possible only for convolutional and fully-connected networks with average poolings and nonlinearities in a certain class; see \cref{sec:computations}. For other architectures, one has to rely on the empirical NTK, which is a biased estimate of the limit one. Computing the empirical NTK requires instantiating output-by-weight Jacobians at every pair of training points, which is especially memory-intensive for realistically large architectures. Storing the Gram matrix of the kernel also requires $O(m^2)$ memory, where $m$ is the dataset size.
Even if the kernel is successfully computed on every pair of training points, integrating the training dynamics naively requires inverting the Gram matrix, which costs $O(m^3)$ time, while for datasets of size $10^6$ one can barely afford more than $O(m)$ time and memory. We study the applicability limits of this naive approach in \cref{sec:experiments}. Still, certain optimizations are available; see \cref{sec:computations}. Also concerning the second application, the NTK is not the only kernel that can be constructed from a neural network; certain other kernels may have computational or performance gains compared to the NTK, see \cref{sec:beyond}.
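For squared loss, the linearized gradient-flow dynamics referred to above even have a closed form, $f_t(X) = (I - e^{-\eta t \Theta})\,y$ from zero initialization, with test predictions $f_t(x_*) = \Theta(x_*, X)\,\Theta^{-1}(I - e^{-\eta t \Theta})\,y$; for BCE, as in our experiments, one must integrate numerically. A sketch with a stand-in RBF kernel in place of the NTK Gram matrix (an assumed toy setup, not the paper's code) — the eigendecomposition/solve is exactly the $O(m^3)$ step:

```python
# Sketch: closed-form linearized gradient-flow dynamics on squared loss
# with a fixed kernel (stand-in RBF Gram matrix in place of the NTK).
import numpy as np

rng = np.random.default_rng(1)
m, d = 20, 3
X = rng.standard_normal((m, d))
y = rng.standard_normal(m)

def rbf(A, B):
    sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * sq)

Theta = rbf(X, X)                        # stand-in for the m x m NTK Gram matrix

eta, t = 1.0, 1e6
evals, evecs = np.linalg.eigh(Theta)     # O(m^3), like the matrix inverse
evals = np.clip(evals, 0.0, None)        # guard against roundoff negatives
decay = evecs @ np.diag(np.exp(-eta * t * evals)) @ evecs.T
f_train = y - decay @ y                  # f_t(X) = (I - exp(-eta t Theta)) y

x_star = rng.standard_normal((1, d))
f_star = rbf(x_star, X) @ np.linalg.solve(Theta, f_train)   # test prediction
```

As $t \to \infty$ the decay factor vanishes and the training predictions interpolate the labels, recovering plain ("ridgeless") kernel regression.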
At Raymond Company, the following errors were discovered after the transactions had been journalized and posted.

1. A collection on account from a customer for $930 was recorded as a debit to Cash $930 and a credit to Service Revenue $930.
2. The purchase of store supplies on account for $1,550 was recorded as a debit to Supplies $1,180 and a credit to Accounts Payable $1,180.

Prepare the correcting entries. (Credit account titles are automatically indented when amount is entered. Do not indent manually.)
David Marquez: "I wasn't invited back"
Mar 18, 2021 - by Steve Gerweck

United Wrestling Network owner David Marquez announced on his Twitter account today that he won't be appearing on NWA for the promotion's upcoming return to action with Back for the Attack and Powerrr later this month on FITE TV. According to Marquez, he wasn't invited back. You can view his full statement below:

"I'd like to thank all the NWA fans who have included me in their enthusiasm for the return of Powerrr. I'm excited too, but I must admit that I will not be appearing on the upcoming episodes because I wasn't invited back. It's a great feeling that the brand and myself are so well associated. I became the NWA Missouri territory owner in 1997 and I've had the best of times (and the absolute worst) representing the promotion all over the globe. If this is my final exit from the company I extend a hearty handshake and appreciation to everyone who has fought to keep this storied institution going and wish the current management continued success. I have many students, close friends, and long-time associates who deserve to benefit from the popularity of this program, especially my production protege Billy Trask. He has a big task ahead of him that I know he will excel at. Of course, I'm not done with pro wrestling, I'm actually busier than ever with 3 weekly United Wrestling Network series and NJPW production."

"For 24 years, the fans of the @nwa we're there for me. Thank you all for your continued support!" pic.twitter.com/BsqcnLYGfX — David Marquez (@CWFHMarquez) March 17, 2021
Crystal Bay Casino hosts 1st live music since March
Cheyanne Neuffer, cneuffer@tahoedailytribune.com

Spike McGuire is the founder of Loud as Folk and will be performing this weekend. Provided / Tony Contini

CRYSTAL BAY, Nev. — Live music returns this weekend to Crystal Bay Casino for the first time since March. "Loud as Folk," featuring Sam Chase and several other musicians, will perform twice in the same night starting at 7 p.m. Saturday, Oct. 24. The show will be an intimate, seated, special live-music experience for those who have been missing in-person events. Musicians performing include Spike McGuire, Greg Gilmore, Dave Berry and Cliff Porter from Jelly Bread, and Chase. The early show is at 7 p.m. and the late show starts at 10:30 p.m.

This double show will be the Crystal Bay Club's first in 231 days — since the pandemic hit Lake Tahoe. On March 17, Nevada Gov. Steve Sisolak ordered the closure of non-essential businesses, which completely halted the entertainment and live-music industry. The pandemic uprooted many performers who had to cancel shows because public gatherings across the country were prohibited. For months, shows were canceled or indefinitely postponed. Many artists took to performing virtual shows and some spent time focusing on new music.

On Sept. 29, Sisolak began to allow gatherings of up to 250 people or 50% of venue capacity. While the Crystal Bay Club could potentially have more seating open — the Crown Room's capacity is 550 — it erred on the side of safety by capping capacity at 130 attendees for each of the performances. To attend the event, grab a buddy or date, because tickets are sold in bundles of two or four with staggered entry. The entire party must enter the venue with the ticket holder. Temperatures will be taken upon entry and seating will be distanced. During the performance, social distancing will be encouraged.

The last performance in the Crystal Bay Club's Crown Room was Mustache Harbor's show in March. Spike McGuire is the founder of Loud as Folk, a showcase of songwriters dedicated to American roots music. McGuire says this will be Loud as Folk's first official appearance at the Crystal Bay Club. "I am so thrilled," McGuire said.

Like several other artists, McGuire faced difficult obstacles during the pandemic. "It was rough at first," he said. However, McGuire said that while he really missed live performances, the last year has opened more opportunities that he otherwise wouldn't have had time for. He recently recorded a new album that he will perform singles from. "Limitations make it easier to get creative," he said. Greg Gilmore and McGuire had a studio and decided to get innovative during the shutdown by creating the Loud As Folk Record Club, where they showcase new artists every month and include Loud As Folk's early recordings. One of the singer-songwriters on the recordings is Sam Chase, who has performed at the High Sierra and Hardly Strictly Bluegrass festivals and Outside Lands, and will headline the upcoming show this weekend. "I think we will fill up all the space that we can safely fill up," McGuire said.

The show is for attendees 21 and older and tickets are $15. For more information, visit http://devildogshows.com/crystal-bay-club-casino-events/.
Q: How to duplicate each row of a MySQL table?

I've got a table of Product records from an old DB that I must change to import into a new PHP system. They have the form:

name, property1_IT, property1_EN, property1_FR

I need to transform them by duplicating each row and changing it to:

name, language, property1

where property1 holds the related translated value. In other words, I'll need one row for each product translation, thus adding a language column and deleting the per-language property fields. Is it possible with SQL, or should I just use PHP to cycle through the records and do the duplication?

A: You can do it like this:

CREATE TABLE new_table AS
SELECT name, 'italian' AS language, property1_IT AS property1 FROM old_table
UNION ALL
SELECT name, 'english' AS language, property1_EN AS property1 FROM old_table
UNION ALL
SELECT name, 'french' AS language, property1_FR AS property1 FROM old_table;

A: As a compendium to @tombom's answer, I'd like to add that if the table has a lot of fields (as in my case) and field order is not a problem, another approach is to simply add the translations after all the other fields and then drop the old per-language columns. So, first:

CREATE TABLE `V36_test` AS
SELECT *, 'IT' AS LANGUAGE,
       XLS_TIPO_IMPUGNATURA_IT AS XLS_TIPO_IMPUGNATURA,
       XLS_ROTAZIONE_IT AS XLS_ROTAZIONE,
       [etc...]
FROM OLD_TABLE
UNION ALL
SELECT *, 'EN' AS LANGUAGE,
       XLS_TIPO_IMPUGNATURA_EN AS XLS_TIPO_IMPUGNATURA,
       XLS_ROTAZIONE_EN AS XLS_ROTAZIONE,
       [etc...]
FROM OLD_TABLE
[etc... other languages...]

Then:

ALTER TABLE V36_test DROP COLUMN XLS_TIPO_IMPUGNATURA_IT;
ALTER TABLE V36_test DROP COLUMN XLS_ROTAZIONE_IT;
[etc...]
ALTER TABLE V36_test DROP COLUMN XLS_TIPO_IMPUGNATURA_EN;
ALTER TABLE V36_test DROP COLUMN XLS_ROTAZIONE_EN;
[etc...]

With a smart multi-line code editor like Sublime Text, it's very easy and fast to write a query like this even with many more fields.
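The set-based wide-to-long transform from the accepted answer can be verified end-to-end in any SQL engine; here is a runnable sketch using Python's sqlite3 (table contents and the two-language schema are illustrative, not the asker's actual data; MySQL's `CREATE TABLE ... AS SELECT` is analogous):

```python
# Verifying the UNION ALL wide-to-long transform from the answer above,
# using sqlite3 so it runs anywhere. Sample rows are illustrative.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE old_table (name TEXT, property1_IT TEXT, property1_EN TEXT)")
con.executemany("INSERT INTO old_table VALUES (?, ?, ?)",
                [("hammer", "martello", "hammer"), ("saw", "sega", "saw")])

# One SELECT per language, stacked with UNION ALL: one output row per translation.
con.execute("""
    CREATE TABLE new_table AS
    SELECT name, 'IT' AS language, property1_IT AS property1 FROM old_table
    UNION ALL
    SELECT name, 'EN' AS language, property1_EN AS property1 FROM old_table
""")

rows = con.execute(
    "SELECT name, language, property1 FROM new_table ORDER BY name, language"
).fetchall()
print(rows)   # one (name, language, property1) row per product translation
```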
off more than 90 percent of the nation's home mortgages. reconciled with an $819 billion plan the House approved last month. FDIC? How much from TARP? When? Why?'" in Congress) has no one to blame but himself. Ever since the Marxist, E. mostly foreign bankers) dictates America's financial policies. even the President of the United States. less government spending, but when Henry Paulson, secretary of the U.S. In other words, when the Fed says, "Jump!" the President asks, "How high?" two others), the same is true for members of the House and Senate. thereof, and of foreign Coin, and fix the Standard of Weights and Measures." Reserve Notes–is not even legal tender under the U.S. Constitution. simple terms, the Act did not amend or expunge Article I, Section 8. make any Thing but gold and silver Coin a Tender in Payment of Debts." has no cosponsors. That's right. No cosponsors. Washington, D.C. Passing Dr. Paul's bill would be a great place to start.

This entry was posted on May 21, 2009 at 10:57 pm and is filed under Christianity. You can follow any responses to this entry through the RSS 2.0 feed. Both comments and pings are currently closed.
Saint Petersburg State Agrarian University (until 1992, the Leningrad Agricultural Institute) is a higher education institution in St. Petersburg that trains agricultural specialists. It is located in Pushkin.

About the university
65.5% of the university's teaching staff hold academic degrees and titles; 15.5% are Doctors of Science; one is an academician of the Russian Academy of Agricultural Sciences; more than 40 hold honorary titles. Over 8,000 students study at the university. Instruction is offered in 36 specialities and 12 fields of study, 38 postgraduate programmes (aspirantura and doctoral studies), and 6 programmes of supplementary education.

History of the university

Early twentieth century

The Stebut Courses
In St. Petersburg, on the initiative of Professor Ivan Aleksandrovich Stebut, called the patriarch of Russian agriculture, the Higher Women's Agricultural Courses were opened. About 10 years passed from the conception of the idea to the opening of the courses. In recognition of his role in organising them, and to mark the 50th anniversary of I. A. Stebut's public service, the Higher Women's Agricultural Courses were named the Stebut Courses. In the first year, 80 women students were admitted. The course of study initially lasted two years, soon three, and then four. There were more than 40 teachers, among them many prominent scholars: forestry was taught by V. V. Guman, organic chemistry by K. I. Debu, animal physiology by K. N. Krzhyshkovsky, animal husbandry by E. F. Liskun, animal anatomy by A. N. Nemilov, crop science by N. K. Nedokuchaev, fruit growing by V. V. Pashkevich, soil science by N. I. Prokhorov, plant systematics and geography by V. N. Sukachev, agricultural statistics by I. V. Chernyshev, and phytopathology by A. A. Yachevsky. A hundred graduates were not enough to meet the country's need for qualified agricultural specialists, and the question arose of opening another set of agricultural courses in the capital, for both men and women.

The Kamenny Island Courses
The Petersburg Agricultural Courses opened in a small house on Kamenny Island; they later became known as the Kamenny Island Courses. The courses trained agronomists, livestock specialists, land surveyors, farm organisers and managers, and agricultural statisticians. In the first year 178 people enrolled; within seven years the enrolment had grown eightfold, and by 1913 some 1,400 people were studying on the courses. Their first director was N. P. Adamov, and many prominent scholars taught there: N. N. Bogdanov-Katkov, P. A. Borisov, P. V. Budrin, O. A. Walter, K. I. Debu, N. I. Kozlov, S. P. Kravkov, K. N. Krzhyshkovsky, S. V. Parashchuk, V. V. Pashkevich, P. Yu. Schmidt, A. A. Yachevsky and others. Many of these scholars also taught on the Stebut Courses.

The Evening Agronomy Courses
Many young people could not attend a daytime institution because they worked during the day. In 1908 the Evening Agronomy Courses of the Society of People's Universities were organised. They aimed to give their students a complete agricultural education approaching that of a higher agricultural school. The courses had no teaching farm, and summer practice was replaced by occasional voluntary excursions.

1917–1920
In 1918 all of the above institutions were reorganised into state agricultural institutes: the Stebut Institute of Agriculture and Forestry, the Kamenny Island Agricultural Institute, and the Petrograd Agronomy Institute. V. I. Rykov became director of the Agronomy Institute, succeeded in December 1918 by Professor I. L. Dzhandieri. The teaching staff included well-known scholars: professors P. A. Borisov, K. I. Vasilyev, S. L. Sobolev (until 1956 head of the crop science department of the Leningrad Agricultural Institute), K. I. Debu, M. I. Dyakov, N. N. Ivanov, V. G. Kotelnikov, V. Ya. Kurbatov, N. A. Naumov, E. S. London, B. G. Tideman, E. A. Engel, S. S. Tsvetkov, N. N. Bogdanov-Katkov (director of the Pushkin Agricultural Institute in 1945–1947), P. N. Steinberg, S. M. Vislouch and others. In November 1920 N. I. Vavilov was elected to the chair of plant breeding. In 1920 the Petrograd Narkompros decided to merge the Kamenny Island and Stebut institutes into a single higher educational institution, the Petrograd Agricultural Academy named after I. A. Stebut.

The twenties
The existence of three similar agricultural schools in Petrograd inevitably raised the question of whether it made sense to spread resources and teaching staff across three institutions. Already in 1920–1921 the question of merging them arose. The idea met serious resistance from part of the teaching staff, especially at the Agronomy Institute, which delayed the matter for a whole year. Glavprofobr, seeing that a voluntary merger was impossible, announced in July 1922 that the three institutions were closed and their staff dismissed, and at the same time ordered the opening of the Petrograd Agricultural Institute on the basis of the three previously existing agricultural schools. The new institute had three faculties: animal husbandry, crop production, and agricultural economics and policy. Initially 1,667 people studied there; by 1923 the number had reached 2,578. The new institute was run by a board headed by the rector, Academician K. D. Glinka, with Professor K. N. Krzhyshkovsky as pro-rector for studies. K. D. Glinka — an outstanding soil scientist, a talented organiser and a man of great personal charm — succeeded on the whole in uniting the staff and led the institute until his death in 1927. From the moment the new institute was organised, the question of a teaching and production base arose; it was decided to create one in the town of Detskoye Selo.

The move proceeded gradually. From 1927 the second-, third- and fourth-year students of all faculties were concentrated at the Detskoye Selo centre; only the first-year students and the fourth year of the economics faculty remained in Leningrad. Fifteen experimental stations were set up in Detskoye Selo, headed by major scientists, among them: animal husbandry (Prof. M. I. Dyakov), crop production (Prof. N. K. Nedokuchaev), machine testing (Prof. K. I. Vasilyev), acclimatisation (Prof. S. A. Egiz), phytopathology (Prof. N. A. Naumov), entomology (Prof. N. N. Bogdanov-Katkov), meadow farming (Prof. L. A. Chugunov), the breeding field (Prof. N. I. Vavilov), the teaching and experimental garden (Prof. V. V. Pashkevich and Prof. N. I. Kichunov), the flax experimental station (V. F. Sokolov), and the meteorological station (Prof. V. N. Obolensky). In the twenties the first large-scale "shoots" of mechanisation appeared in agriculture, and a mechanisation department opened at the Petrograd Agricultural Institute. The Petrograd Polytechnic Institute also opened (in 1929) a faculty of industrial arable farming, later renamed the faculty of industrial agriculture, which trained engineers for the countryside, for tractor building, and for the industries processing agricultural produce. In 1930 the Leningrad Institute of Agricultural Mechanisation was created from the Polytechnic Institute's faculty of industrial arable farming and the mechanisation department of the Leningrad Agricultural Institute (see below). By 1930 the Leningrad Agricultural Institute had become the country's largest multi-profile agricultural school, carrying out extensive teaching and research. But "decisive transformations" were still in fashion: to speed up the training of agricultural specialists, it was proposed to expand the network of agrarian schools and train narrower "branch" specialists at an accelerated pace. Under such conditions there was little room for deep general theoretical training.

The early thirties
The 1930s opened a new phase: the rapid splitting of agricultural schools and their transition to a 3.5-year course of study. The Leningrad Agricultural Institute was divided into a number of branch schools: the Institute of Fibre Crops (Leningrad, Kamenny Island), the Dairy and Vegetable Institute (Detskoye Selo), the Agropedagogical Institute (Leningrad), and the Institute of Mechanisation of Socialist Arable Farming (Leningrad). In addition, an independent Institute of Plant Protection was opened on the basis of the Higher Courses of Applied Zoology and Phytopathology. The splitting continued. In 1931 two independent schools separated from the Dairy and Vegetable Institute: the Leningrad Chemical-Technological Institute of the Dairy Industry (allocated the building of the former Palace Orangery on Komsomolskaya Street in Detskoye Selo, where it remained until the start of the Great Patriotic War), and the Vegetable Institute, housed first in the Pavlovsk Palace and then moved to Znamenka, near Peterhof. After this division the Leningrad Dairy and Vegetable Institute was renamed the Leningrad Institute of Socialist Dairy Farming, retaining almost all the buildings of the Leningrad Agricultural Institute in Detskoye Selo and its farmland. At one point the institute was merged with the giant state farm "Kolpino", but by 1933 it was again the Institute of Socialist Dairy Farming. In 1934 it was given the name of the Detskoye Selo Agricultural Institute, and from 1937 of the Pushkin Agricultural Institute. Continuous reorganisations also went on at the Leningrad Institute of Fibre Crops: its faculty of primary processing of fibre crops was transferred to the newly created Leningrad Institute of the Textile Industry, and the Institute of Fibre Crops itself was renamed the Leningrad Agricultural Institute, with the Agropedagogical Institute attached to it as a faculty.

A special faculty of plant protection was organised in this institute. Fragmenting a large institute into a series of small, dwarf schools considerably complicated and weakened the further training of agricultural specialists. From the mid-1930s the small, narrowly specialised schools began to gravitate towards larger ones with a good material base and established staffs. The course of study was lengthened again, and the task was set of training broad-profile specialists. In 1934 three institutes (fibre crops, agropedagogy, and pest control) merged to form the Leningrad Agricultural Institute.

Agricultural institutes in Leningrad in the thirties and forties
In the thirties and forties several agricultural schools operated in Leningrad simultaneously.

The Leningrad Agricultural Institute
It was housed on Kamenny Island and had three faculties: agronomy (with departments of field husbandry and of selection and seed production), economics, and plant protection, plus a separate agropedagogical department — 27 chairs in all. It was led by N. Ya. Kuzmin, A. F. Sapegin, L. A. Chugunov, and L. V. Mysovsky (head of the physics department in 1939). The teaching staff numbered 141, including the well-known scholars E. A. Domracheva, O. A. Walter, N. I. Sokolov, G. Ya. Bei-Bienko, and I. A. Naumov. Teaching practice took place at the "Kamenka" teaching farm in the Luga district.

The Leningrad Fruit and Vegetable Institute
It began work in 1931 in the Pavlovsk Palace, moving in 1932 to the Staro-Znamensky palace in New Peterhof. A fairly stable teaching staff of 38 formed there, including the well-known scholars and teachers P. P. Kyuz, V. V. Pashkevich, N. I. Kichunov, and I. A. Veselovsky. There were 443 students and 14 postgraduate students. In 1941 the institute was attached as a faculty to the agricultural institute in Leningrad.

The Leningrad Zootechnical Institute
It was located in Krasnogvardeysk, with a single faculty and 16 chairs. By 1941, 550 people studied there and 50 teachers worked there, most of them part-timers, which caused the institute great difficulties. At the start of the Great Patriotic War it was attached as a faculty to the Leningrad Veterinary Institute.

The Pushkin Agricultural Institute
By 1941 it was the strongest: 1,010 students studied at the 30 chairs of its three faculties (agronomy, animal husbandry, and plant protection), with a five-year course of study. M. S. Lukyanov was director, with N. N. Bogdanov-Katkov as his deputy. The teachers included many authoritative scholars: M. I. Dyakov, K. I. Debu, V. P. Nikitin, S. V. Parashchuk, P. V. Budrin, K. I. Pangalo, S. G. Davydov, P. A. Borisov, O. V. Troitskaya and others. The "Pushkinskoye" teaching farm had 500 hectares of arable land and pasture and the "Tarasary" forest estate of 379 hectares. In 1940 its highly productive dairy herd averaged 5,048 kg of milk per forage cow, with record cows giving 6,000 to 8,500 kg a year. In 1941–1944, for 28 months, fierce fighting raged on the territory of the Pushkin Agricultural Institute. By mid-August 1941 the front was rapidly approaching Pushkin; normal work at the institute ceased and its evacuation was considered. It was decided to relocate to Leningrad, to the premises of the Higher Courses of Applied Zoology and Phytopathology on Tchaikovsky Street. German bombs fell on the institute's campus in Pushkin, and on 15 September students and teachers were told to walk to Leningrad. Fifty-eight students reached the city (25 from the agronomy faculty, 4 from plant protection, and 29 from the zoology faculty). Classes continued under blockade conditions: fifth-year students resumed study on 20 September and the rest on 10 October. In early December 1941 the evacuation of the Pushkin institute began. Seven professors and docents (the most exhausted) and their families were flown to Podporozhye and then taken by train to Vologda; a week later they were sent to Orenburg and from there, via Barnaul, to the town of Pavlovsk in Altai Krai. On 28–29 January 1942 the main body of teachers and students, headed by deputy director Professor N. N. Bogdanov-Katkov, was sent across the "Road of Life" to Vologda and on to Altai Krai. In Pavlovsk, 60 km from Barnaul, the institute resumed classes in April 1942 on the basis of the local veterinary agricultural technical school. Senior students were sent on production practice to farms of Altai Krai; junior years did teaching practice and worked at the teaching farm until 1 September. In autumn 1942 about 150 people, mainly Siberian young women, were admitted. In February 1944 the Altai Agricultural Institute was created in Barnaul on the basis of the Pushkin institute; the Pushkin staff helped organise and establish it until 1946. In early 1944, after the lifting of the blockade of Leningrad, the Pushkin institute began preparing to return home. In April 1944 a small group of its teachers and staff, headed by Professor N. N. Bogdanov-Katkov, returned to Leningrad; there were no students yet. These enthusiasts did enormous work on the campus: dismantling and filling in pillboxes and dugouts, locating unexploded bombs, shells and mines, clearing the grounds, and collecting building materials. Construction brigades were formed: the carpenters were led by docent V. P. Stolyarov, the glaziers by docent L. N. Aleksandrova, the painters by Professor M. M. Lebedev, and the masons by Professor G. N. Pavlov. Instruments, equipment and tools were gathered piece by piece. In time the 23 chairs of the Pushkin institute were satisfactorily supplied with all of this, and regular classes began at the institute.

The Leningrad Institute of Agricultural Mechanisation
The school was created in 1930 as the Institute of Mechanisation of Socialist Arable Farming, on the basis of the Polytechnic Institute's faculty of industrial arable farming, and in 1933 was renamed the Institute of Mechanical Engineers of Socialist Arable Farming. When the war began, most of the men — teachers and students alike — went to the front. At the very start of 1942 the remaining staff were evacuated to the town of Zernograd in Rostov Oblast (Verblyud station), where the students continued their studies jointly with the local institute of agricultural mechanisation. But soon the Germans approached Rostov and Zernograd, and the institute was hastily evacuated to Pyatigorsk. During the rapid German advance in 1942 the first group of evacuated teachers and students found itself briefly on occupied territory near Pyatigorsk; another group crossed the Caspian Sea, reached Altai, and lived and worked in Biysk for about two years. Some teachers, for example the head of the machine-repair department, remained in Leningrad and evacuated on their own. The mechanisation institute returned from evacuation in 1944. Its new director, B. G. Turbin, together with a small group of teachers, worked hard to restore it, and normal student classes resumed in 1945. In August 1954 it became part of the Leningrad Agricultural Institute.

After the war
In 1947 the animal husbandry faculty was transferred from the Leningrad Veterinary Institute to the Pushkin Agricultural Institute, and the Leningrad Agricultural Institute was merged with the Pushkin institute on the latter's premises. From 1947 to 1951 the rector of the combined Leningrad Agricultural Institute was Aleksei Yakovlevich Podvalkov, a graduate of its agronomy faculty. He had stayed on as a postgraduate at the agrochemistry department, defended his candidate's dissertation under Professor N. I. Sokolov, worked for several years as assistant and then docent of that department (on Kamenny Island), and in the mid-1930s was dean of the institute's newly organised agropedagogical faculty. In March 1947 he was transferred to the Leningrad institute from Gorky, where he had been rector of the Gorky Agricultural Institute. These were very difficult post-war years: the institute's material and human resources had been destroyed; there were no premises, no equipment, no staff. Many teachers had either died in the blockade or were still in evacuation, and everything had to be rebuilt. In early 1954 an order followed to unite the Leningrad Institute of Agricultural Mechanisation with the Leningrad Agricultural Institute, and an act was signed transferring the mechanisation institute to it as a faculty. From 1951 to 1959 the director of the institute, now combining three schools, was Professor Valentin Andreevich Bryzgalov, a major vegetable-growing scientist and one of the founders of protected-ground vegetable cultivation. In this period the farmland of the "Pushkinskoye" teaching farm grew sharply after the "Aleksandrovsky" state farm, with its estate in the village of Mykkolovo, was attached to it in 1956; the teaching farm became a large multi-branch enterprise. From 1959 to 1974 the institute was directed by Professor Konstantin Nikolaevich Kaporulin (dean of the mechanisation faculty from 1949 and later director of the Leningrad Institute of Agricultural Mechanisation). In these years the Leningrad Agricultural Institute became a nationally recognised centre for the training of agricultural personnel, and in 1971 its services were marked with the Order of the Red Banner of Labour. Valentin Mitrofanovich Kryazhkov was then appointed rector of the institute. A mechanical engineer by training, he made a major contribution to the theory and practice of restoring worn tractor and automobile parts. Academician V. M. Kryazhkov was later released from the post of rector on his election as vice-president of VASKhNIL and chairman of its board for the Non-Black-Earth Zone of the RSFSR.

The eighties and nineties
In February 1979 Nikolai Filippovich Bondarenko, previously director of the Agrophysical Research Institute, became rector and led the Leningrad Agricultural Institute for fifteen years. On his initiative a course on programming crop yields was introduced at all agronomy faculties; the institute's base for computer technology expanded noticeably, and a computer classroom was created at the agronomy faculty for instruction based on applied software in a number of special disciplines. In this period a new chemistry teaching building was put into service for the faculty of soil science and agrochemistry and the faculty of plant protection. By order of the USSR State Commission of the Council of Ministers on Food and Procurement, the Leningrad Agricultural Institute (LSKhI) was transformed into the Leningrad State Agrarian University (LGAU), and, in connection with the return of the name Saint Petersburg to Leningrad, LGAU was renamed the Saint Petersburg State Agrarian University (SPbGAU). In March 1987, by order of rector N. F. Bondarenko, four faculties — soil science and agrochemistry, agronomy, fruit and vegetable growing, and plant protection — were merged into a single agronomy faculty. The merger met resistance from part of the teaching staff, and the combined faculty lasted eight years. In April 1994 a conference of electors chose Vladimir Stepanovich Shkrabak — Doctor of Technical Sciences, professor, Honoured Worker of Science and Technology of the Russian Federation — as rector of SPbGAU. From February 1991 to April 1994 he had been rector of the Yaroslavl Agricultural Institute.
В этот период по предложению деканов и некоторых заведующих кафедрами в СПбГАУ было принято решение о возвращении к ранее существовавшей системе административного деления. В 1995 году факультеты: почвоведения и агроэкологии, защиты растений и плодоовощной — вышли из состава агрономического факультета и снова стали самостоятельными подразделениями. XXI век В 2003 году B. C. Шкрабак был освобождён от занимаемой должности ректора в связи с достижением 65-летнего возраста (согласно уставу ВУЗа должность ректора не может занимать человек старше 65 лет). Приказом министра сельского хозяйства РФ от исполняющим обязанности ректора СПбГАУ назначен Михаил Алексеевич Новиков. конференция выборщиков из состава сотрудников и студентов избрала профессора, доктора технических наук М. А. Новикова ректором Санкт-Петербургского государственного аграрного университета. Михаил Алексеевич Новиков — выпускник факультета механизации сельского хозяйства ЛСХИ 1980 года. После окончания института — комсомольская работа на факультете, аспирантура, защита кандидатской, работа на кафедре сельхозмашин (от ассистента до профессора), защита докторской диссертации. С 1991 года — заместитель декана факультета механизации, а с 1998 года — декан факультета. Вместе с М. А. Новиковым практически полностью сменилось руководство университета: из семи бывших проректоров остались трое. Введены в состав администрации СПбГАУ: Михаил Владимирович Москалев — первый проректор по социальным и экономическим вопросам; Федор Федорович Ганусевич — проректор по учебной работе; Игорь Зиновьевич Теплинский — проректор по дополнительным и непрерывным формам обучения; Петр Иванович Хохлов — проректор по капитальному ремонту, реконструкции и экологии. Исполняющим обязанности директора учхоза «Пушкинское» назначен выпускник экономического факультета ЛСХИ Сергей Николаевич Широков. В 2004 году университет отметил своё столетие. Много первостепенных задач предстояло решить в юбилейном году и в последующие годы. 
Вот главные из них: обеспечение эффективной работы сотрудников и учебы студентов совершенствование материальной базы учебного процесса, учебных и производственных практик, переориентирование учебного хозяйства с товарного на учебно-опытное, оснащенное современными средствами по передовым технологиям улучшение социально-экономического положения его сотрудников и студентов университета создание единого, доброжелательного коллектива сотрудников и студентов, способного подготовить на высоком уровне грамотных специалистов для сельского хозяйства С 2005 по 2015 ректором Санкт-Петербургского государственного аграрного университета являлся Виктор Алексеевич Ефимов. С декабря 2015 года ректором Санкт-Петербургского аграрного университета выбран Сергей Николаевич Широков. В отношении декана юридического факультета, профессора, доктора юридических наук Зейналова Исабала Мусы возбуждено уголовное дело за получение им взятки со студентов и избрана мера пресечения в виде заключения под стражу. 29 июля 2020 года новым ректором Санкт-Петербургского государственного аграрного университета был назначен Виталий Юрьевич Морозов - бывший проректор по научной и инновационной работе и по совместительству профессор Ставропольского государственного аграрного университета. Знаменитые выпускники Антонов В. А. Зубков В. А. Дрозденко А. Ю. Красовская И. В. Наумов В. И. Трусов Ю. В. Этуев М. Х. Жебровский Л. С. Хабаров И. Ф. Яхнюк С. В. Паршина В. Р. Баличев А. В. Бойцев А. А. Котов А. А. Дмитренко И. А. Гусев В. С. Брагинец Ю. Н. Уткин О. А. Горячев В. П. Павловский Л. К. Кряжков В. М. Саплицкий Л. Н. Фокин В. В. Секавин Д. А. Студенческий спорт Вуз является участником чемпионатов в рамках розыгрыша Кубка Вузов. Примечания Ссылки () — официальный сайт СПбГАУ
Q: Unable to get a handle on OpenGL ES variables

I'm using the following vertex shader, found in a tutorial:

"uniform mat4 matriceDessin;" +
"uniform mat4 matriceVueCamera;" +
"uniform vec3 positionLumiere;" +
"attribute vec4 positionsSommets;" +
"attribute vec3 vecteursNormaux;" +
"attribute vec4 couleursSommets;" +
"varying vec4 v_Color;" +
"void main() {" +
"vec3 sommet,vecteurNormal,directionLumiere;" +
"float distance,diffuse;" +
"sommet = vec3(matriceVueCamera * positionsSommets);" +
"vecteurNormal = vec3(matriceVueCamera * vec4(vecteursNormaux, 0.0));" +
"distance = length(positionLumiere - sommet);" +
"directionLumiere = normalize(positionLumiere - sommet);" +
"diffuse = max(dot(vecteurNormal, directionLumiere), 0.1);" +
"diffuse = diffuse * (1.0 / (1.0 + (0.25 * distance * distance)));" +
"v_Color = couleursSommets;" +
"gl_Position = matriceDessin * positionsSommets;" +
"}";

To get the locations of my variables I use, after having successfully created and linked the program:

GLES31.glUseProgram(programme);
pointeurSurPosition = GLES31.glGetAttribLocation(programme, "positionsSommets");
pointeurSurVecteurNormal = GLES31.glGetAttribLocation(programme, "vecteursNormaux");
pointeurSurPositionLumière = GLES31.glGetUniformLocation(programme, "positionLumiere");
pointeurSurMatriceDessin = GLES31.glGetUniformLocation(programme, "matriceDessin");
pointeurSurMatriceVueCaméra = GLES31.glGetUniformLocation(programme, "matriceVueCamera");

The problem I face is that the calls made to retrieve the locations of matriceVueCamera, positionLumiere and vecteursNormaux always return -1 (the other locations are retrieved correctly). According to the documentation this can happen if the variables aren't active, but one can see that they are used in the shader, so they should be considered active, at least as far as I understand the concept. I tried to bind locations to these variables with glBindAttribLocation before linking the program, but it didn't help.
Does anyone see where my problem could be? I'm using OpenGL ES 3.1.

A: The local variable diffuse is calculated but never used in the program. Since the shader code is optimized by the driver, all of the code used to calculate diffuse is discarded. Because the variables matriceVueCamera, positionLumiere and vecteursNormaux are needed only to calculate diffuse, they never become active program resources: they are not needed to execute the program. You probably forgot to use diffuse when v_Color is calculated:

v_Color = couleursSommets * diffuse;

See the OpenGL ES Specification, section "Uniform Variables" (§2.12.6, page 58, in ES 3.0; §7.6 in ES 3.1):

"Shaders can declare named uniform variables, as described in the OpenGL ES Shading Language Specification. Values for these uniforms are constant over a primitive, and typically they are constant across many primitives. A uniform is considered active if it is determined by the compiler and linker that the uniform will actually be accessed when the executable code is executed. In cases where the compiler and linker cannot make a conclusive determination, the uniform will be considered active."
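For reference, here is a sketch of the shader's main() with diffuse actually consumed — this is the questioner's own code with only the v_Color line changed; the attenuation constants are taken verbatim from the question:

```glsl
void main() {
    vec3 sommet = vec3(matriceVueCamera * positionsSommets);
    vec3 vecteurNormal = vec3(matriceVueCamera * vec4(vecteursNormaux, 0.0));
    float distance = length(positionLumiere - sommet);
    vec3 directionLumiere = normalize(positionLumiere - sommet);
    float diffuse = max(dot(vecteurNormal, directionLumiere), 0.1);
    diffuse = diffuse * (1.0 / (1.0 + (0.25 * distance * distance)));
    // diffuse is now consumed, so the lighting inputs stay active
    v_Color = couleursSommets * diffuse;
    gl_Position = matriceDessin * positionsSommets;
}
```

With this change the driver can no longer eliminate the lighting computation, so matriceVueCamera, positionLumiere and vecteursNormaux should all report valid locations.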
Thank you for contributing! Before we can merge your pull request, here are some guidelines that you need to follow. These guidelines exist not to annoy you, but to keep the code base clean, unified and future-proof. ## Coding Standard This project uses [PHP_CodeSniffer](https://github.com/squizlabs/PHP_CodeSniffer) to enforce coding standards. The coding standard rules are defined in the **phpcs.xml.dist** file (part of this repository). The project follows a relaxed version of the Doctrine Coding Standard v4. Your pull request must be compliant with said standard. To check your code you can run `vendor/bin/phpcs`. This command will give you a list of violations in your code (if any). The most common errors can be fixed automatically just by running `vendor/bin/phpcbf`. ## Unit-Tests Please try to add a test for your pull request. This project uses [PHPUnit](https://phpunit.de/) as its testing framework. You can run the unit tests by calling `vendor/bin/phpunit`. New features without tests can't be merged. ## CI We automatically run your pull request through [Travis CI](https://www.travis-ci.org) and [Scrutinizer CI](https://scrutinizer-ci.com/). If you break the tests, we cannot merge your code, so please make sure that your code is working before opening a pull request. ## Issues and Bugs To create a new issue, you can use the GitHub issue tracking system. Please try to avoid opening support-related tickets. For support-related questions, please use more appropriate channels such as Q&A platforms (e.g. Stack Overflow), forums, or local PHP user groups. ## Getting merged Please allow us time to review your pull requests. We will do our best to review everything as fast as possible, but we cannot always live up to our own expectations. Thank you very much again for your contribution!
Founded in 1973, Raindirk Audio Ltd is a manufacturer of high-end pro-audio equipment used in both recording studios and live sound reproduction. Raindirk's first console was sold to former Deep Purple singer Ian Gillan's Kingsway Studios. All products are designed by Cyril Jones. While not maintaining as high a profile as competitors like Solid State Logic, API and AMS Neve, Raindirk consoles have been used on recordings by artists as diverse as Pavarotti and Max Bygraves. Competitors Historically, Raindirk has produced high-end, large-format professional recording studio consoles. As such, its main competitors have been: AMEK (no longer manufactured) Neve API Euphonix Harrison Solid State Logic Studer However, in recent years Raindirk has focused on high-end live sound applications. A recent development has been the design of high-specification DI boxes and microphone preamps for use in DAW studios. External links Raindirk Audio official web site Notes Audio equipment manufacturers of the United Kingdom Companies based in Norfolk
import styled from 'styled-components';

// Three-column gallery grid
const GalleryGrid = styled.div`
  display: grid;
  grid-template-columns: 1fr 1fr 1fr;
`;

export default GalleryGrid;
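A fixed three-track template can overflow on narrow screens. As a hypothetical alternative (plain CSS, not part of the original component — the class name is illustrative), an auto-filling grid sizes itself to the viewport:

```css
/* Hypothetical responsive variant: as many >=200px columns as fit */
.gallery-grid {
  display: grid;
  grid-template-columns: repeat(auto-fill, minmax(200px, 1fr));
  gap: 1rem;
}
```

The same declarations could be dropped into the styled.div template literal above in place of the fixed `1fr 1fr 1fr` track list.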
Q: Macro value cannot be updated in tabular with multirow columns

I am writing a command \newrow to generate one row in a longtable. The problem is that the tabular environment here cannot see the value of \subColWid (the output is 0pt). This occurs only when multirow is also used. Can anyone help me figure out why the value of \subColWid updated by \setlength is invisible in the tabular (including the column specification)? Thanks.

\documentclass{article}

\usepackage{fullpage}
\usepackage{longtable}
\usepackage{array}
\usepackage{multirow}

% define my macros
\newlength{\firstColWid}
\newlength{\secondColWid}
\newlength{\subColWid}
\setlength\firstColWid{0.1\textwidth}
\setlength\secondColWid{0.9\textwidth}
%\setlength{\subColWid}{0.5\secondColWid} % uncomment this line to see correct result

\setlength{\tabcolsep}{0pt}

\newcolumntype{M}[1]{>{\centering\arraybackslash}m{#1}}

\newcommand{\newrow}[2]{ % a command to produce a row spanning two lines
% set width for each cell in the second column
\setlength\subColWid{0.2\secondColWid} % this has effect here but not in the tabular
\multirow{2}{*}{#1} & subColWid = \the\subColWid \\ \cline{2-2}
&
% use a tabular to represent all the remaining values
\begin{tabular}[c]{ @{\extracolsep{\fill}} *{2}{M{\subColWid} | }}
#2 & \the\subColWid \\
\end{tabular}
}

\begin{document}
\begin{longtable}[c]{|M{\firstColWid}|@{\extracolsep{\fill}} M{\secondColWid}|}
\hline
% give row number and another cell
\newrow{1}{Row1} \\ \hline
\newrow{2}{Row2} \\
\hline
\end{longtable}
\end{document}

• Welcome to TeX.SX! The problem probably is that you try to set lengths in a table cell. This is possible, but a table cell is a TeX group, and length settings hold only within a group, not outside it — they will not survive the change from one cell to the next. – user31729
• Your comment makes much sense to me, but I am still trying to understand the scoping of TeX macros using the rules established by other languages such as C and Perl. I had guessed that \subColWid is a global macro for the document, changeable by any command, with the effect propagating to every other command using it. Your explanation falsifies my guess. Does it mean each cell group makes a local copy when it changes the value? – ZZG
• It would be better to think of \subColWid as being assigned a value which is realized when it's expanded. LaTeX (and TeX) have only two levels: global and local. By which I mean, if it's not local, then it's global. – A.Ellett

A: You might think to prefix \setlength with \global and expect to get the effect you want. In this particular case that seems to work — but not generally; see the update below for more details.

\newcommand{\newrow}[2]{ % a command to produce a row spanning two lines
% set width for each cell in the second column
\global\setlength\subColWid{0.2\secondColWid}
\multirow{2}{*}{#1} & subColWid = \the\subColWid \\ \cline{2-2}
&
\begin{tabular}[c]{ @{\extracolsep{\fill}} *{2}{M{\subColWid} | }}
#2 & \the\subColWid \\
\end{tabular}
}

Of course, that will affect how \subColWid behaves in the remainder of the document. If you need the old value, save it first:

\newlength\myoldlength
\setlength\myoldlength{\subColWid}

and once you're done with your table, restore it with:

\setlength\subColWid{\myoldlength}

UPDATE

After reading around a bit (on account of what @egreg wrote regarding the use of \global with \setlength), my use of \global\setlength above works only idiosyncratically within LaTeX. A better solution is offered by @AndrewSwann: assign the length with a primitive assignment inside \newrow, in place of the \setlength line:

\global\subColWid=\dimexpr0.2\secondColWid\relax

• Thank you so much for the quick and great answer; it works now. Could you tell me a bit more about why my original command does not work? – ZZG
• What took me a long time to get clear on was how macros work via expansion: my mind kept wanting to think of them functionally, like in a Perl script or something — that's completely wrong-headed and ultimately gets you into all kinds of trouble. – A.Ellett
• What's happening with your table is an issue of grouping. The cells in the table are individually grouped, so your assignment of a new value to the length takes place locally and is not visible outside of the group within which the assignment happens. There are various commands for breaking out of a local context: in this situation we can happily apply a \global prefix to the assignment to make the new value visible outside the current group. – A.Ellett
• For background, I would recommend the sections on macro expansion in "TeX by Topic", a very useful guide that comes with most LaTeX distributions (open it by running texdoc topic from the command line). – A.Ellett
Q: How to debug why an application dies exception-less when another application is closed?

I'm fixing bugs in an application that is a kind of data consumer/worker: it gets data from a third-party application through a supplied API and libraries. The API is C++-based, and the .NET application uses a bit of C++ to access the libraries. The application is also multi-threaded, windowed (WinForms), and uses several third-party libraries (NHibernate, MySQL and others). It may be relevant to add that our consumer thread is the only place in the code that accesses the C++ library.

The problem? When the producer application is closing (which takes a bit more time, over a minute), the consumer application dies within seconds, without any error or exception — even though they were started independently. No information in the Event Log, no Dr. Watson action, no exceptions in Visual Studio (the debug session just stops).

I've tried:
* Stepping through the code to see the moment where it closes — but it always happened in different places, whether it was calling the producer's library code or not.
* Debugging the application with Visual Studio configured to break on any exception being thrown — but it dies without a thing.
* Creating crash dumps (using ADPlus.vbs) and examining them in WinDbg (I'm new to such low-level debugging, though) — but !analyze produced different stack traces each time, leaving me traceless.

What would be a good direction to find out why the consumer application dies? Is there a way to work around the problem (such as showing the user a prompt like: "Producer application is closing; consumer application will do the same!")?

[EDIT] The consumer application is multi-threaded: one consumer thread in addition to the UI thread. Also, the third-party app we use as producer uses COM to send information to any consumer app (a.k.a. add-on). My coworker and I decided to comment out some code to find the code that might be causing the problem.
And probably we've found it - the application dies if and only if we've registered our consumer to producer. After reading documentation for the third party app, it turned out that consumer apps have to actively query for message of closing the producer, otherwise they would be forcefully terminated by the producer app. So: 95% that the problem is third party application which we're querying for data is sending COM message to forcefully terminate our application (I'll post info / change to wiki, if we'd test it's the only reason). A: The general scenario described here is a source for a very common confusion and misunderstanding related to cases where one tries to understand 'how come my application vanished into thin air without leaving any trace?'. The immediate assumtion would be: my application 'died' or 'crashed' or 'encountered such unexpected exception, which is even not visible to the debugger and thus did not create any dump-file. Happened to me few good times... The real answer in most cases would be that the application did not realy crash or die and did not receive any excpetion, but was simply shutted-down gracefully, but from a flow that I did not expect. The easiest way to debug such cases will be to put a breakpoint in kernel32!ExitProcess and to follow the stack and see how we got here. Hope this helps A: It turns out, that its the host application, that kills my application. The proper way to debug the problem was to spy on windows messages and to see, that my application is getting Process Terminate message.
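The kernel32!ExitProcess breakpoint can be set directly in WinDbg while attached to the consumer process. A hypothetical session might look like this (bp, g and k are standard WinDbg commands; the thread number shown is illustrative only):

```
0:000> bp kernel32!ExitProcess
0:000> g
Breakpoint 0 hit
kernel32!ExitProcess:
0:005> k
```

Whatever frames appear above ExitProcess in the k output reveal which component — your own code, the CLR shutdown path, or a COM callback coming from the producer — initiated the exit.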
pTask lib_FindTask(SysBase *SysBase, STRPTR name);
pTask lib_CreateTask(SysBase *SysBase, STRPTR name, Task_Function codeStart, APTR data, UINT32 stackSize, INT8 pri);
INT8 lib_SetTaskPri(SysBase *SysBase, struct Task *task, INT8 pri);
SysCall lib_ReadyTask(SysBase *SysBase, pTask task, BOOL resch);
SysCall lib_Forbid(SysBase *SysBase);
SysCall lib_Permit(SysBase *SysBase);

//signal.c
Signal lib_WaitSignal(SysBase *SysBase, Signal signalSet);
Signal lib_SetSignal(SysBase *SysBase, Signal newSignals, Signal signalSet);
SysCall lib_Signal(SysBase *SysBase, pTask task, Signal signalSet);

//semaphores.c
SysCall lib_InitSemaphore(struct SysBase *SysBase, pSignalSemaphore signalSemaphore);
SysCall lib_AddSemaphore(struct SysBase *SysBase, const char *semName, struct SignalSemaphore *sigSem);
SysCall lib_RemSemaphore(struct SysBase *SysBase, struct SignalSemaphore *sigSem);
pSignalSemaphore lib_FindSemaphore(struct SysBase *SysBase, const char *name);
SysCall lib_ObtainSemaphore(struct SysBase *SysBase, pSignalSemaphore sigSem);
SysCall lib_AttemptSemaphore(struct SysBase *SysBase, struct SignalSemaphore *signalSemaphore);
SysCall lib_ReleaseSemaphore(struct SysBase *SysBase, pSignalSemaphore sigSem);

//schedule.c
void lib_Reschedule(SysBase *SysBase);
SysCall lib_YieldCPU(SysBase *SysBase);

//resident.c
APTR lib_InitResident(SysBase *SysBase, struct Resident *resident, APTR segList);
SysCall lib_InitResidentCode(struct SysBase *SysBase, UINT32 startClass);

//rawio.c
INT32 lib_RawMayGetChar(struct SysBase *SysBase);
void lib_RawPutChar(struct SysBase *SysBase, UINT8 chr);
void lib_RawIOInit(struct SysBase *SysBase);
va_list lib_RawDoFmt(struct SysBase *SysBase, const char *fmt, va_list ap, void (*PutCh)(INT32, APTR), APTR PutChData);

//ports.c
SysCall lib_AddPort(SysBase *SysBase, pMsgPort msgPort);
pMsgPort lib_FindPort(SysBase *SysBase, STRPTR name);
SysCall lib_RemPort(SysBase *SysBase, pMsgPort msgPort);
pMessage lib_WaitPort(SysBase *SysBase, pMsgPort msgPort);
pMsgPort lib_CreateMsgPort(SysBase *SysBase);
SysCall lib_DeleteMsgPort(SysBase *SysBase, pMsgPort msgPort);
pMessage lib_GetMsg(SysBase *SysBase, pMsgPort msgPort);
SysCall lib_PutMsg(SysBase *SysBase, pMsgPort msgPort, pMessage msg);
SysCall lib_ReplyMsg(SysBase *SysBase, pMessage msg);

//memory.c
APTR lib_Allocate(SysBase *SysBase, pMemHeader mh, UINT32 size);
void lib_Deallocate(SysBase *SysBase, pMemHeader mh, void *p);
pMemHeader _CreateMemoryHead(UINT32 start_addr, UINT32 end_addr, UINT32 attr);
void lib_AddMemList(SysBase *SysBase, UINT32 size, UINT32 attribute, INT32 pri, APTR base, STRPTR name);
APTR lib_AllocVec(SysBase *SysBase, UINT32 byteSize, UINT32 requirements);
void lib_FreeVec(SysBase *SysBase, APTR memoryBlock);
UINT32 lib_AvailMem(SysBase *SysBase, UINT32 attributes);
APTR lib_CopyMemQuick(SysBase *SysBase, const APTR src, APTR dest, int n);
APTR lib_CopyMem(SysBase *SysBase, const APTR src, APTR dest, int n);
APTR lib_MemSet(SysBase *SysBase, void *m, int c, UINT32 len);

//list.c
void lib_AddHead(SysBase *SysBase, pList list, pNode node);
void lib_AddTail(SysBase *SysBase, pList list, pNode node);
void lib_Enqueue(SysBase *SysBase, pList list, pNode node);
void lib_Insert(SysBase *SysBase, pList list, pNode node, pNode pred);
void lib_NewList(SysBase *SysBase, pList list);
void lib_NewListType(SysBase *SysBase, pList list, UINT8 type);
void lib_Remove(SysBase *SysBase, pNode node);
pNode lib_RemTail(SysBase *SysBase, pList list);
pNode lib_RemHead(SysBase *SysBase, pList list);
pNode lib_FindName(SysBase *SysBase, pList liste, STRPTR name);

//libraries.c
struct Library *lib_OpenLibrary(SysBase *SysBase, STRPTR libName, UINT32 version);
SysCall lib_CloseLibrary(SysBase *SysBase, struct Library *library);
SysCall lib_AddLibrary(SysBase *SysBase, struct Library *library);
SysCall lib_RemLibrary(SysBase *SysBase, struct Library *library);
void lib_SumLibrary(SysBase *SysBase, struct Library *library);
SysCall lib_DisposeLibrary(SysBase *SysBase, struct Library *library);
APTR lib_SetFunction(struct SysBase *SysBase, struct Library *library, INT32 funcOffset, APTR newFunction);
struct Library *lib_MakeLibrary(SysBase *SysBase, APTR funcTable, APTR structInit, UINT32 (*libInit)(struct Library *, APTR, struct SysBase *), UINT32 dataSize, UINT32 segList);
SysCall lib_MakeFunctions(SysBase *SysBase, APTR target, APTR functionArray);

//devices.c
SysCall lib_AddDevice(SysBase *SysBase, struct Device *device);
SysCall lib_RemDevice(SysBase *SysBase, struct Device *device);
SysCall lib_CloseDevice(SysBase *SysBase, pIOStdReq iORequest);
SysCall lib_OpenDevice(struct SysBase *SysBase, STRPTR devName, UINT32 unitNum, pIOStdReq iORequest, UINT32 flags);
pIOStdReq lib_CreateIORequest(SysBase *SysBase, pMsgPort ioReplyPort);
SysCall lib_DeleteIORequest(SysBase *SysBase, pIOStdReq iorequest);
SysCall lib_AbortIO(SysBase *SysBase, pIOStdReq iORequest);
pIORequest lib_CheckIO(SysBase *SysBase, pIOStdReq iORequest);
SysCall lib_DoIO(SysBase *SysBase, pIOStdReq iORequest);
SysCall lib_SendIO(SysBase *SysBase, pIOStdReq io);
SysCall lib_WaitIO(SysBase *SysBase, struct IORequest *iORequest);

//interrupts.c
void lib_Enable(struct SysBase *SysBase);
UINT32 lib_Disable(struct SysBase *SysBase);
void lib_Restore(struct SysBase *SysBase, UINT32 ipl);
void lib_AddIntServer(SysBase *SysBase, UINT32 intNumber, struct Interrupt *isr);
void lib_RemIntServer(SysBase *SysBase, UINT32 intNumber, struct Interrupt *isr);
pInterrupt lib_SetExcVector(SysBase *SysBase, UINT32 excNumber, struct Interrupt *isr);
pInterrupt lib_CreateIntServer(SysBase *SysBase, const STRPTR name, INT8 pri, APTR handler, APTR data);
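The list primitives above (lib_NewList, lib_AddHead, lib_RemHead, …) follow the classic AmigaOS Exec convention: a list header with fused head/tail sentinel nodes, so that insertion and removal need no NULL checks. The following is a minimal self-contained sketch of that layout, with names and fields simplified (no type/priority/name members, no SysBase parameter) — it illustrates the convention, not this particular implementation:

```c
#include <assert.h>
#include <stddef.h>

/* A node holds only its links; payload would embed this as its first member. */
typedef struct Node { struct Node *succ, *pred; } Node;

/* Exec-style header: head/tail/tailPred overlap to form two sentinel nodes.
 * Viewed as a Node, &head has succ == head; viewed as a Node, &tail has
 * succ == tail (always NULL) and pred == tailPred. */
typedef struct List { Node *head; Node *tail; Node *tailPred; } List;

static void NewList(List *l) {
    l->head = (Node *)&l->tail;     /* first "node" is the tail sentinel */
    l->tail = NULL;                 /* tail sentinel's succ is NULL      */
    l->tailPred = (Node *)&l->head; /* tail sentinel's pred is the head  */
}

static void AddHead(List *l, Node *n) {
    n->succ = l->head;
    n->pred = (Node *)&l->head;
    l->head->pred = n;              /* old first node (or tail sentinel) */
    l->head = n;
}

static Node *RemHead(List *l) {
    Node *n = l->head;
    if (n->succ == NULL)            /* head is the tail sentinel: empty  */
        return NULL;
    l->head = n->succ;
    n->succ->pred = (Node *)&l->head;
    return n;
}
```

The sentinel trick is why Exec's Remove() can unlink a node given only the node itself: every node, first or last, always has valid succ and pred pointers.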
select to_days(time_format('10:17:66', '%H:%i:%s')); select to_days(to_char(datetime'1111-10-55 12:23:22.111')); --argument of time type select to_days(adddate(date'20/12/2010', 20)); select to_days(subdate(date'04/45/2010', 20)); select to_days(add_months(date'04/12/20105', -6)); select to_days(date('2001-10-188 10:14:25')); select to_days(last_day('1898-15-06')); select to_days(str_to_date('a,b,c', '%m,%d,%y')); select to_days(to_date(2000)); --argument of timestamp type select to_days(to_timestamp('19:39:45 BM 12/12/2012')); --argument of datetime type select to_days(timestamp('2010-10-28', 'abcabcabc')); select to_days(to_datetime(123456)); --server side create table too(str1 int, str2 string); insert into too values(1234, 'aabbcc'); select to_days(to_char(to_timestamp(str1))) from too; select to_days(to_datetime(str1)) from too; select to_days(to_timestamp(str1)) from too; select to_days(to_time(str2)) from too; select to_days(time_format(str2, '%H:%i:%s')) from too; drop table too;
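The TO_DAYS calls exercised above map a date to a day count since year 0. Assuming MySQL-compatible semantics for valid Gregorian dates (the function is documented as unreliable before 1582), the mapping can be cross-checked with a small Python sketch — Python's date.toordinal() counts from 0001-01-01 == 1, and MySQL additionally counts the 365 days of year 0:

```python
from datetime import date

def to_days(d: date) -> int:
    """MySQL-style TO_DAYS: days since year 0 in the proleptic Gregorian
    calendar. toordinal() is offset by the 365 days MySQL assigns to year 0."""
    return d.toordinal() + 365

# Values from the MySQL reference manual's TO_DAYS examples:
# to_days(date(1995, 5, 1))  -> 728779
# to_days(date(1997, 10, 7)) -> 729669
```

Note that several of the literals in the script above ('10:17:66', '04/45/2010', '1111-10-55', and so on) are deliberately out of range: this file reads as a negative-test suite probing the server's error handling, so those calls are expected to fail rather than return a day count.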
/// <reference path="../localtypings/blockly.d.ts" /> /// <reference path="../built/pxtlib.d.ts" /> namespace pxt.blocks { export let promptTranslateBlock: (blockId: string, blockTranslationIds: string[]) => void; export interface GrayBlock extends Blockly.Block { setPythonEnabled(enabled: boolean): void; } export interface GrayBlockStatement extends GrayBlock { domToMutation(xmlElement: Element): void; mutationToDom(): Element; getLines: () => string[]; declaredVariables: string; } // Parsed format of data stored in the .data attribute of blocks export interface PXTBlockData { commentRefs: string[]; fieldData: pxt.Map<string>; } const typeDefaults: Map<{ field: string, block: string, defaultValue: string }> = { "string": { field: "TEXT", block: "text", defaultValue: "" }, "number": { field: "NUM", block: "math_number", defaultValue: "0" }, "boolean": { field: "BOOL", block: "logic_boolean", defaultValue: "false" }, "Array": { field: "VAR", block: "variables_get", defaultValue: "list" } } // Add numbers before input names to prevent clashes with the ones added by BlocklyLoader export const optionalDummyInputPrefix = "0_optional_dummy"; export const optionalInputWithFieldPrefix = "0_optional_field"; // Matches arrays export function isArrayType(type: string): string { const arrayTypeRegex = /^(?:Array<(.+)>)|(?:(.+)\[\])|(?:\[.+\])$/; let parsed = arrayTypeRegex.exec(type); if (parsed) { // Is an array, returns what type it is an array of if (parsed[1]) { // Is an array with form Array<type> return parsed[1]; } else { // Is an array with form type[] return parsed[2]; } } else { // Not an array return undefined; } } // Matches tuples export function isTupleType(type: string): string[] { const tupleTypeRegex = /^\[(.+)\]$/; let parsed = tupleTypeRegex.exec(type); if (parsed) { // Returns an array containing the types of the tuple return parsed[1].split(/,\s*/); } else { // Not a tuple return undefined; } } const primitiveTypeRegex = /^(string|number|boolean)$/; type 
NamedField = { field: Blockly.Field, name?: string };

    // list of built-in blocks, should not be touched
    let _builtinBlocks: Map<{
        block: Blockly.BlockDefinition;
        symbol?: pxtc.SymbolInfo;
    }>;

    export function builtinBlocks() {
        if (!_builtinBlocks) {
            _builtinBlocks = {};
            Object.keys(Blockly.Blocks)
                .forEach(k => _builtinBlocks[k] = { block: Blockly.Blocks[k] });
        }
        return _builtinBlocks;
    }

    export const buildinBlockStatements: Map<boolean> = {
        "controls_if": true,
        "controls_for": true,
        "pxt_controls_for": true,
        "controls_simple_for": true,
        "controls_repeat_ext": true,
        "pxt_controls_for_of": true,
        "controls_for_of": true,
        "variables_set": true,
        "variables_change": true,
        "device_while": true
    }

    // Cached block info from the last inject operation
    let cachedBlockInfo: pxtc.BlocksInfo;

    // blocks cached
    interface CachedBlock {
        hash: string;
        fn: pxtc.SymbolInfo;
        block: Blockly.BlockDefinition;
    }
    let cachedBlocks: Map<CachedBlock> = {};

    export function blockSymbol(type: string): pxtc.SymbolInfo {
        let b = cachedBlocks[type];
        return b ? b.fn : undefined;
    }

    export function createShadowValue(info: pxtc.BlocksInfo, p: pxt.blocks.BlockParameter, shadowId?: string, defaultV?: string): Element {
        defaultV = defaultV || p.defaultValue;
        shadowId = shadowId || p.shadowBlockId;
        if (!shadowId && p.range) shadowId = "math_number_minmax";

        let defaultValue: any;
        if (defaultV && defaultV.slice(0, 1) == "\"")
            defaultValue = JSON.parse(defaultV);
        else {
            defaultValue = defaultV;
        }

        if (p.type == "number" && shadowId == "value") {
            const field = document.createElement("field");
            field.setAttribute("name", p.definitionName);
            field.appendChild(document.createTextNode("0"));
            return field;
        }

        const isVariable = shadowId == "variables_get";
        const isText = shadowId == "text";

        const value = document.createElement("value");
        value.setAttribute("name", p.definitionName);

        const isArray = isArrayType(p.type);

        const shadow = document.createElement(isVariable || isArray ?
"block" : "shadow"); value.appendChild(shadow); const typeInfo = typeDefaults[isArray || p.type]; shadow.setAttribute("type", shadowId || (isArray ? 'lists_create_with' : typeInfo && typeInfo.block || p.type)); shadow.setAttribute("colour", (Blockly as any).Colours.textField); if (isArray) { // if an array of booleans, numbers, or strings if (typeInfo && !shadowId) { let fieldValues: string[]; switch (isArray) { case "number": fieldValues = ["0", "1"]; break; case "string": fieldValues = ["a", "b", "c"]; break; case "boolean": fieldValues = ["FALSE", "FALSE", "FALSE"]; break; } buildArrayShadow(shadow, typeInfo.block, typeInfo.field, fieldValues); return value; } else if (shadowId && defaultValue) { buildArrayShadow(shadow, defaultValue); return value; } } if (typeInfo && (!shadowId || typeInfo.block === shadowId || shadowId === "math_number_minmax")) { const field = document.createElement("field"); shadow.appendChild(field); let fieldName: string; switch (shadowId) { case "variables_get": fieldName = "VAR"; break; case "math_number_minmax": fieldName = "SLIDER"; break; default: fieldName = typeInfo.field; break; } field.setAttribute("name", fieldName); let value: Text; if (p.type == "boolean") { value = document.createTextNode((defaultValue || typeInfo.defaultValue).toUpperCase()) } else { value = document.createTextNode(defaultValue || typeInfo.defaultValue) } field.appendChild(value); } else if (defaultValue) { const field = document.createElement("field"); field.textContent = defaultValue; if (isVariable) { field.setAttribute("name", "VAR"); shadow.appendChild(field); } else if (isText) { field.setAttribute("name", "TEXT"); shadow.appendChild(field); } else if (shadowId) { const shadowInfo = info.blocksById[shadowId]; if (shadowInfo && shadowInfo.attributes._def && shadowInfo.attributes._def.parameters.length) { const shadowParam = shadowInfo.attributes._def.parameters[0]; field.setAttribute("name", shadowParam.name); shadow.appendChild(field); } } else { 
field.setAttribute("name", p.definitionName); shadow.appendChild(field); } } let mut: HTMLElement; if (p.range) { mut = document.createElement('mutation'); mut.setAttribute('min', p.range.min.toString()); mut.setAttribute('max', p.range.max.toString()); mut.setAttribute('label', p.actualName.charAt(0).toUpperCase() + p.actualName.slice(1)); if (p.fieldOptions) { if (p.fieldOptions['step']) mut.setAttribute('step', p.fieldOptions['step']); if (p.fieldOptions['color']) mut.setAttribute('color', p.fieldOptions['color']); if (p.fieldOptions['precision']) mut.setAttribute('precision', p.fieldOptions['precision']); } } if (p.fieldOptions) { if (!mut) mut = document.createElement('mutation'); mut.setAttribute(`customfield`, JSON.stringify(p.fieldOptions)); } if (mut) { shadow.appendChild(mut); } return value; } function buildArrayShadow(shadow: Element, blockType: string, fieldName?: string, fieldValues?: string[]) { const itemCount = fieldValues ? fieldValues.length : 2; const mut = document.createElement('mutation'); mut.setAttribute("items", "" + itemCount); mut.setAttribute("horizontalafter", "" + itemCount); shadow.appendChild(mut); for (let i = 0; i < itemCount; i++) { const innerValue = document.createElement("value"); innerValue.setAttribute("name", "ADD" + i); const innerShadow = document.createElement("shadow"); innerShadow.setAttribute("type", blockType); if (fieldName) { const field = document.createElement("field"); field.setAttribute("name", fieldName); if (fieldValues) { field.appendChild(document.createTextNode(fieldValues[i])); } innerShadow.appendChild(field); } innerValue.appendChild(innerShadow); shadow.appendChild(innerValue); } } export function createFlyoutHeadingLabel(name: string, color?: string, icon?: string, iconClass?: string) { const headingLabel = createFlyoutLabel(name, pxt.toolbox.convertColor(color), icon, iconClass); headingLabel.setAttribute('web-class', 'blocklyFlyoutHeading'); return headingLabel; } export function 
createFlyoutGroupLabel(name: string, icon?: string, labelLineWidth?: string, helpCallback?: string) { const groupLabel = createFlyoutLabel(name, undefined, icon); groupLabel.setAttribute('web-class', 'blocklyFlyoutGroup'); groupLabel.setAttribute('web-line', '1.5'); if (labelLineWidth) groupLabel.setAttribute('web-line-width', labelLineWidth); if (helpCallback) { groupLabel.setAttribute('web-help-button', 'true'); groupLabel.setAttribute('callbackKey', helpCallback); } return groupLabel; } function createFlyoutLabel(name: string, color?: string, icon?: string, iconClass?: string): HTMLElement { // Add the Heading label let headingLabel = Blockly.utils.xml.createElement('label') as HTMLElement; headingLabel.setAttribute('text', name); if (color) { headingLabel.setAttribute('web-icon-color', pxt.toolbox.convertColor(color)); } if (icon) { if (icon.length === 1) { headingLabel.setAttribute('web-icon', icon); if (iconClass) headingLabel.setAttribute('web-icon-class', iconClass); } else { headingLabel.setAttribute('web-icon-class', `blocklyFlyoutIcon${name}`); } } return headingLabel; } export function createFlyoutButton(callbackKey: string, label: string) { let button = Blockly.utils.xml.createElement('button') as Element; button.setAttribute('text', label); button.setAttribute('callbackKey', callbackKey); return button; } export function createToolboxBlock(info: pxtc.BlocksInfo, fn: pxtc.SymbolInfo, comp: pxt.blocks.BlockCompileInfo): HTMLElement { let parent: HTMLElement; let parentInput: HTMLElement; if (fn.attributes.toolboxParent) { const parentFn = info.blocksById[fn.attributes.toolboxParent]; if (parentFn) { parent = createToolboxBlock(info, parentFn, pxt.blocks.compileInfo(parentFn)); parentInput = fn.attributes.toolboxParentArgument ? 
parent.querySelector(`value[name=${fn.attributes.toolboxParentArgument}]`) : parent.querySelector(`value`); if (parentInput) { while (parentInput.firstChild) parentInput.removeChild(parentInput.firstChild); } else { parent = undefined; } } } // // toolbox update // let block = document.createElement(parent ? "shadow" : "block"); block.setAttribute("type", fn.attributes.blockId); if (fn.attributes.blockGap) block.setAttribute("gap", fn.attributes.blockGap); else if (pxt.appTarget.appTheme && pxt.appTarget.appTheme.defaultBlockGap) block.setAttribute("gap", pxt.appTarget.appTheme.defaultBlockGap.toString()); if (comp.thisParameter) { const t = comp.thisParameter; block.appendChild(createShadowValue(info, t, t.shadowBlockId || "variables_get", t.defaultValue || t.definitionName)); } if (fn.parameters) { comp.parameters.filter(pr => primitiveTypeRegex.test(pr.type) || primitiveTypeRegex.test(isArrayType(pr.type)) || pr.shadowBlockId || pr.defaultValue) .forEach(pr => { block.appendChild(createShadowValue(info, pr)); }) if (fn.attributes.draggableParameters) { comp.handlerArgs.forEach(arg => { // draggableParameters="variable": // <value name="HANDLER_DRAG_PARAM_arg"> // <shadow type="variables_get_reporter"> // <field name="VAR">defaultName</field> // </shadow> // </value> // draggableParameters="reporter" // <value name="HANDLER_DRAG_PARAM_arg"> // <shadow type="argument_reporter_custom"> // <mutation typename="Sprite"></mutation> // <field name="VALUE">mySprite</field> // </shadow> // </value> const useReporter = fn.attributes.draggableParameters === "reporter"; const value = document.createElement("value"); value.setAttribute("name", "HANDLER_DRAG_PARAM_" + arg.name); const blockType = useReporter ? 
pxt.blocks.reporterTypeForArgType(arg.type) : "variables_get_reporter"; const shadow = document.createElement("shadow"); shadow.setAttribute("type", blockType); if (useReporter && blockType === "argument_reporter_custom") { const mutation = document.createElement("mutation"); mutation.setAttribute("typename", arg.type); shadow.appendChild(mutation); } const field = document.createElement("field"); field.setAttribute("name", useReporter ? "VALUE" : "VAR"); field.textContent = Util.htmlEscape(arg.name); shadow.appendChild(field); value.appendChild(shadow); block.appendChild(value); }); } else { comp.handlerArgs.forEach(arg => { const field = document.createElement("field"); field.setAttribute("name", "HANDLER_" + arg.name); field.textContent = arg.name; block.appendChild(field); }); } } if (parent) { parentInput.appendChild(block); return parent; } return block; } export function injectBlocks(blockInfo: pxtc.BlocksInfo): pxtc.SymbolInfo[] { cachedBlockInfo = blockInfo; Blockly.pxtBlocklyUtils.whitelistDraggableBlockTypes(blockInfo.blocks.filter(fn => fn.attributes.duplicateShadowOnDrag).map(fn => fn.attributes.blockId)); // inject Blockly with all block definitions return blockInfo.blocks .map(fn => { const comp = compileInfo(fn); const block = createToolboxBlock(blockInfo, fn, comp); if (fn.attributes.blockBuiltin) { Util.assert(!!builtinBlocks()[fn.attributes.blockId]); const builtin = builtinBlocks()[fn.attributes.blockId]; builtin.symbol = fn; builtin.block.codeCard = mkCard(fn, block); } else { injectBlockDefinition(blockInfo, fn, comp, block); } return fn; }); } function injectBlockDefinition(info: pxtc.BlocksInfo, fn: pxtc.SymbolInfo, comp: pxt.blocks.BlockCompileInfo, blockXml: HTMLElement): boolean { let id = fn.attributes.blockId; if (builtinBlocks()[id]) { pxt.reportError("blocks", 'trying to override builtin block', { "details": id }); return false; } let hash = JSON.stringify(fn); if (cachedBlocks[id] && cachedBlocks[id].hash == hash) { return true; } if 
(Blockly.Blocks[fn.attributes.blockId]) { console.error("duplicate block definition: " + id); return false; } let cachedBlock: CachedBlock = { hash: hash, fn: fn, block: { codeCard: mkCard(fn, blockXml), init: function () { initBlock(this, info, fn, comp) } } } if (pxt.Util.isTranslationMode() && pxt.blocks.promptTranslateBlock) { cachedBlock.block.customContextMenu = (options: any[]) => { if (fn.attributes.translationId) { options.push({ enabled: true, text: lf("Translate this block"), callback: function () { pxt.blocks.promptTranslateBlock(id, [fn.attributes.translationId]); } }) } } } cachedBlocks[id] = cachedBlock; Blockly.Blocks[id] = cachedBlock.block; return true; } function newLabel(part: pxtc.BlockLabel | pxtc.BlockImage): Blockly.Field { if (part.kind === "image") { return iconToFieldImage(part.uri); } const txt = removeOuterSpace(part.text) if (!txt) { return undefined; } if (part.cssClass) { return new Blockly.FieldLabel(txt, part.cssClass); } else if (part.style.length) { return new pxtblockly.FieldStyledLabel(txt, { bold: part.style.indexOf("bold") !== -1, italics: part.style.indexOf("italics") !== -1, blocksInfo: undefined }) } else { return new Blockly.FieldLabel(txt, undefined); } } function cleanOuterHTML(el: HTMLElement): string { // remove IE11 junk return el.outerHTML.replace(/^<\?[^>]*>/, ''); } function mkCard(fn: pxtc.SymbolInfo, blockXml: HTMLElement): pxt.CodeCard { return { name: fn.namespace + '.' + fn.name, shortName: fn.name, description: fn.attributes.jsDoc, url: fn.attributes.help ? 
'reference/' + fn.attributes.help.replace(/^\//, '') : undefined, blocksXml: `<xml xmlns="http://www.w3.org/1999/xhtml">${cleanOuterHTML(blockXml)}</xml>`, } } function isSubtype(apis: pxtc.ApisInfo, specific: string, general: string) { if (specific == general) return true let inf = apis.byQName[specific] if (inf && inf.extendsTypes) return inf.extendsTypes.indexOf(general) >= 0 return false } function initBlock(block: Blockly.Block, info: pxtc.BlocksInfo, fn: pxtc.SymbolInfo, comp: pxt.blocks.BlockCompileInfo) { const ns = (fn.attributes.blockNamespace || fn.namespace).split('.')[0]; const instance = fn.kind == pxtc.SymbolKind.Method || fn.kind == pxtc.SymbolKind.Property; const nsinfo = info.apis.byQName[ns]; const color = // blockNamespace overrides color on block (fn.attributes.blockNamespace && nsinfo && nsinfo.attributes.color) || fn.attributes.color || (nsinfo && nsinfo.attributes.color) || pxt.toolbox.getNamespaceColor(ns) || 255; const helpUrl = pxt.blocks.getHelpUrl(fn); if (helpUrl) block.setHelpUrl(helpUrl) block.setColour(color); let blockShape = Blockly.OUTPUT_SHAPE_ROUND; if (fn.retType == "boolean") blockShape = Blockly.OUTPUT_SHAPE_HEXAGONAL; block.setOutputShape(blockShape); if (fn.attributes.undeletable) block.setDeletable(false); buildBlockFromDef(fn.attributes._def); let hasHandler = false; if (fn.attributes.mutate) { addMutation(block as MutatingBlock, fn, fn.attributes.mutate); } else if (fn.attributes.defaultInstance) { addMutation(block as MutatingBlock, fn, MutatorTypes.DefaultInstanceMutator); } else if (fn.attributes._expandedDef && fn.attributes.expandableArgumentMode !== "disabled") { const shouldToggle = fn.attributes.expandableArgumentMode === "toggle"; initExpandableBlock(info, block, fn.attributes._expandedDef, comp, shouldToggle, () => buildBlockFromDef(fn.attributes._expandedDef, true)); } else if (comp.handlerArgs.length) { /** * We support four modes for handler parameters: variable dropdowns, * expandable variable dropdowns 
with +/- buttons (used for chat commands), * draggable variable blocks, and draggable reporter blocks. */ hasHandler = true; if (fn.attributes.optionalVariableArgs) { initVariableArgsBlock(block, comp.handlerArgs); } else if (fn.attributes.draggableParameters) { comp.handlerArgs.filter(a => !a.inBlockDef).forEach(arg => { const i = block.appendValueInput("HANDLER_DRAG_PARAM_" + arg.name); if (fn.attributes.draggableParameters == "reporter") { i.setCheck(getBlocklyCheckForType(arg.type, info)); } else { i.setCheck("Variable"); } }); } else { let i = block.appendDummyInput(); comp.handlerArgs.filter(a => !a.inBlockDef).forEach(arg => { i.appendField(new Blockly.FieldVariable(arg.name), "HANDLER_" + arg.name); }); } } // Add mutation to save and restore custom field settings appendMutation(block, { mutationToDom: (el: Element) => { block.inputList.forEach(input => { input.fieldRow.forEach((fieldRow: Blockly.FieldCustom) => { if (fieldRow.isFieldCustom_ && fieldRow.saveOptions) { const getOptions = fieldRow.saveOptions(); if (getOptions) { el.setAttribute(`customfield`, JSON.stringify(getOptions)); } } }) }) return el; }, domToMutation: (saved: Element) => { block.inputList.forEach(input => { input.fieldRow.forEach((fieldRow: Blockly.FieldCustom) => { if (fieldRow.isFieldCustom_ && fieldRow.restoreOptions) { const options = JSON.parse(saved.getAttribute(`customfield`)); if (options) { fieldRow.restoreOptions(options); } } }) }) } }); if (fn.attributes.imageLiteral) { const columns = (fn.attributes.imageLiteralColumns || 5) * fn.attributes.imageLiteral; const rows = fn.attributes.imageLiteralRows || 5; const scale = fn.attributes.imageLiteralScale; let ri = block.appendDummyInput(); ri.appendField(new pxtblockly.FieldMatrix("", { columns, rows, scale }), "LEDS"); } if (fn.attributes.inlineInputMode === "external") { block.setInputsInline(false); } else if (fn.attributes.inlineInputMode === "inline") { block.setInputsInline(true); } else { 
block.setInputsInline(!fn.parameters || (fn.parameters.length < 4 && !fn.attributes.imageLiteral)); } const body = fn.parameters?.find(pr => pxtc.parameterTypeIsArrowFunction(pr)); if (body || hasHandler) { block.appendStatementInput("HANDLER") .setCheck(null); block.setInputsInline(true); } setOutputCheck(block, fn.retType, info); // hook up/down if return value is void const hasHandlers = hasArrowFunction(fn); block.setPreviousStatement(!(hasHandlers && !fn.attributes.handlerStatement) && fn.retType == "void"); block.setNextStatement(!(hasHandlers && !fn.attributes.handlerStatement) && fn.retType == "void"); block.setTooltip(/^__/.test(fn.namespace) ? "" : fn.attributes.jsDoc); function buildBlockFromDef(def: pxtc.ParsedBlockDef, expanded = false) { let anonIndex = 0; let firstParam = !expanded && !!comp.thisParameter; const inputs = splitInputs(def); const imgConv = new ImageConverter() if (fn.attributes.shim === "ENUM_GET" || fn.attributes.shim === "KIND_GET") { if (comp.parameters.length > 1 || comp.thisParameter) { console.warn(`Enum blocks may only have 1 parameter but ${fn.attributes.blockId} has ${comp.parameters.length}`); return; } } const hasInput = (name: string) => block.inputList?.some(i => i.name === name); inputs.forEach(inputParts => { const fields: NamedField[] = []; let inputName: string; let inputCheck: string | string[]; let hasParameter = false; inputParts.forEach(part => { if (part.kind !== "param") { const f = newLabel(part); if (f) { fields.push({ field: f }); } } else if (fn.attributes.shim === "ENUM_GET") { U.assert(!!fn.attributes.enumName, "Trying to create an ENUM_GET block without a valid enum name") fields.push({ name: "MEMBER", field: new pxtblockly.FieldUserEnum(info.enumsByName[fn.attributes.enumName]) }); return; } else if (fn.attributes.shim === "KIND_GET") { fields.push({ name: "MEMBER", field: new pxtblockly.FieldKind(info.kindsByName[fn.attributes.kindNamespace || fn.attributes.blockNamespace || fn.namespace]) }); return; } 
else { // find argument let pr = getParameterFromDef(part, comp, firstParam); firstParam = false; if (!pr) { console.error("block " + fn.attributes.blockId + ": unknown parameter " + part.name + (part.ref ? ` (${part.ref})` : "")); return; } if (isHandlerArg(pr)) { inputName = "HANDLER_DRAG_PARAM_" + pr.name; inputCheck = fn.attributes.draggableParameters === "reporter" ? getBlocklyCheckForType(pr.type, info) : "Variable"; return; } let typeInfo = U.lookup(info.apis.byQName, pr.type) hasParameter = true; const defName = pr.definitionName; const actName = pr.actualName; let isEnum = typeInfo && typeInfo.kind == pxtc.SymbolKind.Enum let isFixed = typeInfo && !!typeInfo.attributes.fixedInstances && !pr.shadowBlockId; let isConstantShim = !!fn.attributes.constantShim; let isCombined = pr.type == "@combined@" let customField = pr.fieldEditor; let fieldLabel = defName.charAt(0).toUpperCase() + defName.slice(1); let fieldType = pr.type; if (isEnum || isFixed || isConstantShim || isCombined) { let syms: pxtc.SymbolInfo[]; if (isEnum) { syms = getEnumDropdownValues(info.apis, pr.type); } else if (isFixed) { syms = getFixedInstanceDropdownValues(info.apis, typeInfo.qName); } else if (isCombined) { syms = fn.combinedProperties.map(p => U.lookup(info.apis.byQName, p)) } else { syms = getConstantDropdownValues(info.apis, fn.qName); } if (syms.length == 0) { console.error(`no instances of ${typeInfo.qName} found`) } const dd = syms.map(v => { let k = v.attributes.block || v.attributes.blockId || v.name; let comb = v.attributes.blockCombine if (v.attributes.jresURL && !v.attributes.iconURL && U.startsWith(v.attributes.jresURL, "data:image/x-mkcd-f")) { v.attributes.iconURL = imgConv.convert(v.attributes.jresURL) } if (!!comb) k = k.replace(/@set/, "") return [ v.attributes.iconURL || v.attributes.blockImage ? 
{ src: v.attributes.iconURL || Util.pathJoin(pxt.webConfig.commitCdnUrl, `blocks/${v.namespace.toLowerCase()}/${v.name.toLowerCase()}.png`), alt: k, width: 36, height: 36, value: v.name } : k, v.namespace + "." + v.name ]; }); // if a value is provided, move it first if (pr.defaultValue) { let shadowValueIndex = -1; dd.some((v, i) => { if (v[1] === (pr as BlockParameter).defaultValue) { shadowValueIndex = i; return true; } return false; }); if (shadowValueIndex > -1) { const shadowValue = dd.splice(shadowValueIndex, 1)[0]; dd.unshift(shadowValue); } } if (customField) { let defl = fn.attributes.paramDefl[actName] || ""; const options = { data: dd, colour: color, label: fieldLabel, type: fieldType, blocksInfo: info } as Blockly.FieldCustomDropdownOptions; Util.jsonMergeFrom(options, fn.attributes.paramFieldEditorOptions && fn.attributes.paramFieldEditorOptions[actName] || {}); fields.push(namedField(createFieldEditor(customField, defl, options), defName)); } else fields.push(namedField(new Blockly.FieldDropdown(dd), defName)); } else if (customField) { const defl = fn.attributes.paramDefl[pr.actualName] || ""; const options = { colour: color, label: fieldLabel, type: fieldType, blocksInfo: info } as Blockly.FieldCustomOptions; Util.jsonMergeFrom(options, fn.attributes.paramFieldEditorOptions && fn.attributes.paramFieldEditorOptions[pr.actualName] || {}); fields.push(namedField(createFieldEditor(customField, defl, options), pr.definitionName)); } else { inputName = defName; if (instance && part.name === "this") { inputCheck = pr.type; } else if (pr.type == "number" && pr.shadowBlockId && pr.shadowBlockId == "value") { inputName = undefined; fields.push(namedField(new Blockly.FieldNumber("0"), defName)); } else if (pr.type == "string" && pr.shadowOptions && pr.shadowOptions.toString) { inputCheck = null; } else { inputCheck = getBlocklyCheckForType(pr.type, info); } } } }); let input: Blockly.Input; if (inputName) { // Don't add duplicate inputs if 
(hasInput(inputName)) return;

                    input = block.appendValueInput(inputName);
                    input.setAlign(Blockly.ALIGN_LEFT);
                } else if (expanded) {
                    const prefix = hasParameter ? optionalInputWithFieldPrefix : optionalDummyInputPrefix;
                    inputName = prefix + (anonIndex++);

                    // Don't add duplicate inputs
                    if (hasInput(inputName)) return;

                    input = block.appendDummyInput(inputName);
                } else {
                    input = block.appendDummyInput();
                }

                if (inputCheck) {
                    input.setCheck(inputCheck);
                }

                fields.forEach(f => input.appendField(f.field, f.name));
            });

            imgConv.logTime()
        }
    }

    function getParameterFromDef(part: pxtc.BlockParameter, comp: BlockCompileInfo, isThis = false): HandlerArg | BlockParameter {
        if (part.ref) {
            const result = (part.name === "this") ? comp.thisParameter : comp.actualNameToParam[part.name];

            if (!result) {
                let ha: HandlerArg;
                comp.handlerArgs.forEach(arg => {
                    if (arg.name === part.name) ha = arg;
                });
                if (ha) return ha;
            }
            return result;
        } else {
            return isThis ? comp.thisParameter : comp.definitionNameToParam[part.name];
        }
    }

    function isHandlerArg(arg: HandlerArg | BlockParameter): arg is HandlerArg {
        return !(arg as BlockParameter).definitionName;
    }

    export function hasArrowFunction(fn: pxtc.SymbolInfo): boolean {
        return !!fn.parameters?.some(pr => pxtc.parameterTypeIsArrowFunction(pr));
    }

    export function cleanBlocks() {
        pxt.debug('removing all custom blocks')
        for (const b in cachedBlocks)
            removeBlock(cachedBlocks[b].fn);
    }

    /**
     * Used by pxtrunner to initialize blocks in the docs
     */
    export function initializeAndInject(blockInfo: pxtc.BlocksInfo) {
        init();
        injectBlocks(blockInfo);
    }

    /**
     * Used by main app to initialize blockly blocks.
     * Blocks are injected separately by calling injectBlocks
     */
    export function initialize(blockInfo: pxtc.BlocksInfo) {
        init();
        initJresIcons(blockInfo);
    }

    let blocklyInitialized = false;
    function init() {
        if (blocklyInitialized) return;
        blocklyInitialized = true;

        goog.provide('Blockly.Blocks.device');
        goog.require('Blockly.Blocks');

        Blockly.FieldCheckbox.CHECK_CHAR = '■';
        (<any>Blockly).Constants.ADD_START_HATS = !!pxt.appTarget.appTheme.blockHats;

        initFieldEditors();
        initContextMenu();
        initOnStart();
        initMath();
        initVariables();
        initFunctions();
        initLists();
        initLoops();
        initLogic();
        initText();
        initDrag();
        initDebugger();
        initComments();
        initTooltip();

        // PXT is in charge of disabling, don't record undo for disabled events
        (Blockly.Block as any).prototype.setEnabled = function (enabled: any) {
            if (this.disabled == enabled) {
                let oldRecordUndo = (Blockly as any).Events.recordUndo;
                (Blockly as any).Events.recordUndo = false;
                Blockly.Events.fire(new Blockly.Events.BlockChange(
                    this, 'disabled', null, this.disabled, !enabled));
                (Blockly as any).Events.recordUndo = oldRecordUndo;
                this.disabled = !enabled;
            }
        };
    }

    /**
     * Converts a TypeScript type into an array of type checks for Blockly inputs/outputs. Use
     * with block.setOutput() and input.setCheck().
     *
     * @returns An array of checks if the type is valid, undefined if there are no valid checks
     *      (e.g. type is void), and null if all checks should be accepted (e.g. type is generic)
     */
    function getBlocklyCheckForType(type: string, info: pxtc.BlocksInfo) {
        const types = type.split(/\s*\|\s*/);
        const output = [];
        for (const subtype of types) {
            switch (subtype) {
                // Blockly capitalizes primitive types for its builtin math/string/logic blocks
                case "number":
                    output.push("Number");
                    break;
                case "string":
                    output.push("String");
                    break;
                case "boolean":
                    output.push("Boolean");
                    break;
                case "T":
                    // The type is generic, so accept any checks. This is mostly used with functions that
                    // get values from arrays.
This could be improved if we ever add proper type // inference for generic types case "any": return null; case "void": return undefined; default: // We add "Array" to the front for array types so that they can be connected // to the blocks that accept any array (e.g. length, push, pop, etc) if (isArrayType(subtype)) { if (types.length > 1) { // type inference will potentially break non-trivial arrays in intersections // until we have better type handling in blocks, // so escape and allow any block to be dropped in. return null; } else { output.push("Array"); } } // Blockly has no concept of inheritance, so we need to add all // super classes to the check array const si_r = info.apis.byQName[subtype]; if (si_r && si_r.extendsTypes && 0 < si_r.extendsTypes.length) { output.push(...si_r.extendsTypes); } else { output.push(subtype); } } } return output; } function setOutputCheck(block: Blockly.Block, retType: string, info: pxtc.BlocksInfo) { const check = getBlocklyCheckForType(retType, info); if (check || check === null) { block.setOutput(true, check); } } function setBuiltinHelpInfo(block: any, id: string) { const info = pxt.blocks.getBlockDefinition(id); setHelpResources(block, id, info.name, info.tooltip, info.url, pxt.toolbox.getNamespaceColor(info.category)); } function installBuiltinHelpInfo(id: string) { const info = pxt.blocks.getBlockDefinition(id); installHelpResources(id, info.name, info.tooltip, info.url, pxt.toolbox.getNamespaceColor(info.category)); } function setHelpResources(block: any, id: string, name: string, tooltip: any, url: string, colour: string, colourSecondary?: string, colourTertiary?: string, undeletable?: boolean) { if (tooltip && (typeof tooltip === "string" || typeof tooltip === "function")) block.setTooltip(tooltip); if (url) block.setHelpUrl(url); if (colour) block.setColour(colour, colourSecondary, colourTertiary); if (undeletable) block.setDeletable(false); let tb = document.getElementById('blocklyToolboxDefinition'); let xml: 
HTMLElement = tb ? getFirstChildWithAttr(tb, "block", "type", id) as HTMLElement : undefined; block.codeCard = <pxt.CodeCard>{ header: name, name: name, software: 1, description: goog.isFunction(tooltip) ? tooltip(block) : tooltip, blocksXml: xml ? (`<xml xmlns="http://www.w3.org/1999/xhtml">` + (cleanOuterHTML(xml) || `<block type="${id}"></block>`) + "</xml>") : undefined, url: url }; if (pxt.Util.isTranslationMode() && pxt.blocks.promptTranslateBlock) { block.customContextMenu = (options: any[]) => { const blockd = pxt.blocks.getBlockDefinition(block.type); if (blockd && blockd.translationIds) { options.push({ enabled: true, text: lf("Translate this block"), callback: function () { pxt.blocks.promptTranslateBlock(id, blockd.translationIds); } }) } }; } } export function installHelpResources(id: string, name: string, tooltip: any, url: string, colour: string, colourSecondary?: string, colourTertiary?: string) { let block = Blockly.Blocks[id]; let old = block.init; if (!old) return; block.init = function () { old.call(this); let block = this; setHelpResources(this, id, name, tooltip, url, colour, colourSecondary, colourTertiary); } } export let openHelpUrl: (url: string) => void; function initLists() { const msg = Blockly.Msg; // lists_create_with const listsCreateWithId = "lists_create_with"; const listsCreateWithDef = pxt.blocks.getBlockDefinition(listsCreateWithId); msg.LISTS_CREATE_EMPTY_TITLE = listsCreateWithDef.block["LISTS_CREATE_EMPTY_TITLE"]; msg.LISTS_CREATE_WITH_INPUT_WITH = listsCreateWithDef.block["LISTS_CREATE_WITH_INPUT_WITH"]; msg.LISTS_CREATE_WITH_CONTAINER_TITLE_ADD = listsCreateWithDef.block["LISTS_CREATE_WITH_CONTAINER_TITLE_ADD"]; msg.LISTS_CREATE_WITH_ITEM_TITLE = listsCreateWithDef.block["LISTS_CREATE_WITH_ITEM_TITLE"]; installBuiltinHelpInfo(listsCreateWithId); // lists_length const listsLengthId = "lists_length"; const listsLengthDef = pxt.blocks.getBlockDefinition(listsLengthId); msg.LISTS_LENGTH_TITLE = 
listsLengthDef.block["LISTS_LENGTH_TITLE"]; // We have to override this block definition because the builtin block // allows both Strings and Arrays in its input check and that confuses // our Blockly compiler let block = Blockly.Blocks[listsLengthId]; block.init = function () { this.jsonInit({ "message0": msg.LISTS_LENGTH_TITLE, "args0": [ { "type": "input_value", "name": "VALUE", "check": ['Array'] } ], "output": 'Number', "outputShape": Blockly.OUTPUT_SHAPE_ROUND }); } installBuiltinHelpInfo(listsLengthId); } function initLoops() { const msg = Blockly.Msg; // controls_repeat_ext const controlsRepeatExtId = "controls_repeat_ext"; const controlsRepeatExtDef = pxt.blocks.getBlockDefinition(controlsRepeatExtId); msg.CONTROLS_REPEAT_TITLE = controlsRepeatExtDef.block["CONTROLS_REPEAT_TITLE"]; msg.CONTROLS_REPEAT_INPUT_DO = controlsRepeatExtDef.block["CONTROLS_REPEAT_INPUT_DO"]; installBuiltinHelpInfo(controlsRepeatExtId); // device_while const deviceWhileId = "device_while"; const deviceWhileDef = pxt.blocks.getBlockDefinition(deviceWhileId); Blockly.Blocks[deviceWhileId] = { init: function () { this.jsonInit({ "message0": deviceWhileDef.block["message0"], "args0": [ { "type": "input_value", "name": "COND", "check": "Boolean" } ], "previousStatement": null, "nextStatement": null, "colour": pxt.toolbox.getNamespaceColor('loops') }); this.appendStatementInput("DO") .appendField(deviceWhileDef.block["appendField"]); setBuiltinHelpInfo(this, deviceWhileId); } }; // pxt_controls_for const pxtControlsForId = "pxt_controls_for"; const pxtControlsForDef = pxt.blocks.getBlockDefinition(pxtControlsForId); Blockly.Blocks[pxtControlsForId] = { /** * Block for 'for' loop. 
* @this Blockly.Block */ init: function () { this.jsonInit({ "message0": pxtControlsForDef.block["message0"], "args0": [ { "type": "input_value", "name": "VAR", "variable": pxtControlsForDef.block["variable"], "check": "Variable" }, { "type": "input_value", "name": "TO", "check": "Number" } ], "previousStatement": null, "nextStatement": null, "colour": pxt.toolbox.getNamespaceColor('loops'), "inputsInline": true }); this.appendStatementInput('DO') .appendField(pxtControlsForDef.block["appendField"]); let thisBlock = this; setHelpResources(this, pxtControlsForId, pxtControlsForDef.name, function () { return U.rlf(<string>pxtControlsForDef.tooltip, thisBlock.getInputTargetBlock('VAR') ? thisBlock.getInputTargetBlock('VAR').getField('VAR').getText() : ''); }, pxtControlsForDef.url, String(pxt.toolbox.getNamespaceColor('loops')) ); }, /** * Return all variables referenced by this block. * @return {!Array.<string>} List of variable names. * @this Blockly.Block */ getVars: function (): any[] { return [this.getField('VAR').getText()]; }, /** * Notification that a variable is renaming. * If the name matches one of this block's variables, rename it. * @param {string} oldName Previous name of variable. * @param {string} newName Renamed variable. * @this Blockly.Block */ renameVar: function (oldName: string, newName: string) { const varField = this.getField('VAR'); if (Blockly.Names.equals(oldName, varField.getText())) { varField.setValue(newName); } } }; // controls_simple_for const controlsSimpleForId = "controls_simple_for"; const controlsSimpleForDef = pxt.blocks.getBlockDefinition(controlsSimpleForId); Blockly.Blocks[controlsSimpleForId] = { /** * Block for 'for' loop. 
* @this Blockly.Block */ init: function () { this.jsonInit({ "message0": controlsSimpleForDef.block["message0"], "args0": [ { "type": "field_variable", "name": "VAR", "variable": controlsSimpleForDef.block["variable"] // Please note that most multilingual characters // cannot be used as variable name at this point. // Translate or decide the default variable name // with care. }, { "type": "input_value", "name": "TO", "check": "Number" } ], "previousStatement": null, "nextStatement": null, "colour": pxt.toolbox.getNamespaceColor('loops'), "inputsInline": true }); this.appendStatementInput('DO') .appendField(controlsSimpleForDef.block["appendField"]); let thisBlock = this; setHelpResources(this, controlsSimpleForId, controlsSimpleForDef.name, function () { return U.rlf(<string>controlsSimpleForDef.tooltip, thisBlock.getField('VAR').getText()); }, controlsSimpleForDef.url, String(pxt.toolbox.getNamespaceColor('loops')) ); }, /** * Return all variables referenced by this block. * @return {!Array.<string>} List of variable names. * @this Blockly.Block */ getVars: function (): any[] { return [this.getField('VAR').getText()]; }, /** * Notification that a variable is renaming. * If the name matches one of this block's variables, rename it. * @param {string} oldName Previous name of variable. * @param {string} newName Renamed variable. * @this Blockly.Block */ renameVar: function (oldName: string, newName: string) { const varField = this.getField('VAR'); if (Blockly.Names.equals(oldName, varField.getText())) { varField.setValue(newName); } }, /** * Add menu option to create getter block for loop variable. * @param {!Array} options List of menu options to add to. 
* @this Blockly.Block */ customContextMenu: function (options: any[]) { if (!this.isCollapsed() && !this.inDebugWorkspace()) { let option: any = { enabled: true }; let name = this.getField('VAR').getText(); option.text = lf("Create 'get {0}'", name); let xmlField = goog.dom.createDom('field', null, name); xmlField.setAttribute('name', 'VAR'); let xmlBlock = goog.dom.createDom('block', null, xmlField) as HTMLElement; xmlBlock.setAttribute('type', 'variables_get'); option.callback = Blockly.ContextMenu.callbackFactory(this, xmlBlock); options.push(option); } } }; // break statement const breakBlockDef = pxt.blocks.getBlockDefinition(ts.pxtc.TS_BREAK_TYPE); Blockly.Blocks[pxtc.TS_BREAK_TYPE] = { init: function () { const color = pxt.toolbox.getNamespaceColor('loops'); this.jsonInit({ "message0": breakBlockDef.block["message0"], "inputsInline": true, "previousStatement": null, "nextStatement": null, "colour": color }); setHelpResources(this, ts.pxtc.TS_BREAK_TYPE, breakBlockDef.name, breakBlockDef.tooltip, breakBlockDef.url, color, undefined/*colourSecondary*/, undefined/*colourTertiary*/, false/*undeletable*/ ); } } // continue statement const continueBlockDef = pxt.blocks.getBlockDefinition(ts.pxtc.TS_CONTINUE_TYPE); Blockly.Blocks[pxtc.TS_CONTINUE_TYPE] = { init: function () { const color = pxt.toolbox.getNamespaceColor('loops'); this.jsonInit({ "message0": continueBlockDef.block["message0"], "inputsInline": true, "previousStatement": null, "nextStatement": null, "colour": color }); setHelpResources(this, ts.pxtc.TS_CONTINUE_TYPE, continueBlockDef.name, continueBlockDef.tooltip, continueBlockDef.url, color, undefined/*colourSecondary*/, undefined/*colourTertiary*/, false/*undeletable*/ ); } } const collapsedColor = "#cccccc"; Blockly.Blocks[pxtc.COLLAPSED_BLOCK] = { init: function () { this.jsonInit({ "message0": "...", "inputsInline": true, "previousStatement": null, "nextStatement": null, "colour": collapsedColor }) setHelpResources(this, ts.pxtc.COLLAPSED_BLOCK, 
"...", lf("a few blocks"), undefined, collapsedColor, undefined/*colourSecondary*/, undefined/*colourTertiary*/, false/*undeletable*/ ); } } } export let onShowContextMenu: (workspace: Blockly.Workspace, items: Blockly.ContextMenu.Option[]) => void = undefined; /** * The following patch to blockly is to add the Trash icon on top of the toolbox, * the trash icon should only show when a user drags a block that is already in the workspace. */ function initDrag() { const calculateDistance = (elemBounds: any, mouseX: any) => { return Math.abs(mouseX - (elemBounds.left + (elemBounds.width / 2))); } /** * Execute a step of block dragging, based on the given event. Update the * display accordingly. * @param {!Event} e The most recent move event. * @param {!goog.math.Coordinate} currentDragDeltaXY How far the pointer has * moved from the position at the start of the drag, in pixel units. * @package */ const blockDrag = (<any>Blockly).BlockDragger.prototype.drag; (<any>Blockly).BlockDragger.prototype.drag = function (e: any, currentDragDeltaXY: any) { const blocklyToolboxDiv = document.getElementsByClassName('blocklyToolboxDiv')[0] as HTMLElement; const blocklyTreeRoot = document.getElementsByClassName('blocklyTreeRoot')[0] as HTMLElement || document.getElementsByClassName('blocklyFlyout')[0] as HTMLElement; const trashIcon = document.getElementById("blocklyTrashIcon"); if (blocklyTreeRoot && trashIcon) { const distance = calculateDistance(blocklyTreeRoot.getBoundingClientRect(), e.clientX); if (distance < 200) { const opacity = distance / 200; trashIcon.style.opacity = `${1 - opacity}`; trashIcon.style.display = 'block'; if (blocklyToolboxDiv) { blocklyTreeRoot.style.opacity = `${opacity}`; if (distance < 50) { pxt.BrowserUtils.addClass(blocklyToolboxDiv, 'blocklyToolboxDeleting'); } } } else { trashIcon.style.display = 'none'; blocklyTreeRoot.style.opacity = '1'; if (blocklyToolboxDiv) pxt.BrowserUtils.removeClass(blocklyToolboxDiv, 'blocklyToolboxDeleting'); } } return 
blockDrag.call(this, e, currentDragDeltaXY);
};
/**
 * Finish dragging the workspace and put everything back where it belongs.
 * @param {!goog.math.Coordinate} currentDragDeltaXY How far the pointer has
 *     moved from the position at the start of the drag, in pixel coordinates.
 * @package
 */
const blockEndDrag = (<any>Blockly).BlockDragger.prototype.endDrag;
(<any>Blockly).BlockDragger.prototype.endDrag = function (e: any, currentDragDeltaXY: any) {
    blockEndDrag.call(this, e, currentDragDeltaXY);
    const blocklyToolboxDiv = document.getElementsByClassName('blocklyToolboxDiv')[0] as HTMLElement;
    const blocklyTreeRoot = document.getElementsByClassName('blocklyTreeRoot')[0] as HTMLElement
        || document.getElementsByClassName('blocklyFlyout')[0] as HTMLElement;
    const trashIcon = document.getElementById("blocklyTrashIcon");
    if (trashIcon && blocklyTreeRoot) {
        trashIcon.style.display = 'none';
        blocklyTreeRoot.style.opacity = '1';
        if (blocklyToolboxDiv) pxt.BrowserUtils.removeClass(blocklyToolboxDiv, 'blocklyToolboxDeleting');
    }
}
}

function initContextMenu() {
    // Translate the context menu for blocks.
    const msg = Blockly.Msg;
    msg.DUPLICATE_BLOCK = lf("{id:block}Duplicate");
    msg.DUPLICATE_COMMENT = lf("Duplicate Comment");
    msg.REMOVE_COMMENT = lf("Remove Comment");
    msg.ADD_COMMENT = lf("Add Comment");
    msg.EXTERNAL_INPUTS = lf("External Inputs");
    msg.INLINE_INPUTS = lf("Inline Inputs");
    msg.EXPAND_BLOCK = lf("Expand Block");
    msg.COLLAPSE_BLOCK = lf("Collapse Block");
    msg.ENABLE_BLOCK = lf("Enable Block");
    msg.DISABLE_BLOCK = lf("Disable Block");
    msg.DELETE_BLOCK = lf("Delete Block");
    msg.DELETE_X_BLOCKS = lf("Delete Blocks");
    msg.DELETE_ALL_BLOCKS = lf("Delete All Blocks");
    msg.HELP = lf("Help");

    // inject hook to handle opening docs
    (<any>Blockly).BlockSvg.prototype.showHelp = function () {
        const url = goog.isFunction(this.helpUrl) ?
this.helpUrl() : this.helpUrl; if (url) (pxt.blocks.openHelpUrl || window.open)(url); }; // Use Blockly hook to customize context menu (<any>Blockly).WorkspaceSvg.prototype.configureContextMenu = function (options: Blockly.ContextMenu.Option[], e: any) { if (this.options.readOnly || this.isFlyout) { return; } // Clear default Blockly options options.length = 0; let topBlocks = this.getTopBlocks(true); let eventGroup = Blockly.utils.genUid(); let topComments = this.getTopComments(); let ws = this; const editable = !(this.options.debugMode || this.options.readOnly); // Option to add a workspace comment. if (this.options.comments && !BrowserUtils.isIE()) { const commentOption = Blockly.ContextMenu.workspaceCommentOption(ws, e) as any; commentOption.enabled = commentOption.enabled && editable; options.push(commentOption); } // Option to delete all blocks. // Count the number of blocks that are deletable. let deleteList = (Blockly.WorkspaceSvg as any).buildDeleteList_(topBlocks); let deleteCount = 0; for (let i = 0; i < deleteList.length; i++) { if (!deleteList[i].isShadow()) { deleteCount++; } } // Add a little animation to deleting. const DELAY = 10; function deleteNext() { (<any>Blockly).Events.setGroup(eventGroup); let block = deleteList.shift(); if (block) { if (block.workspace) { block.dispose(false, true); setTimeout(deleteNext, DELAY); } else { deleteNext(); } } Blockly.Events.setGroup(false); } const deleteOption = { text: deleteCount == 1 ? 
msg.DELETE_BLOCK : msg.DELETE_ALL_BLOCKS, enabled: deleteCount > 0 && editable, callback: () => { pxt.tickEvent("blocks.context.delete", undefined, { interactiveConsent: true }); if (deleteCount < 2) { deleteNext(); } else { Blockly.confirm(lf("Delete all {0} blocks?", deleteCount), (ok) => { if (ok) { deleteNext(); } }); } } } options.push(deleteOption); const formatCodeOption = { text: lf("Format Code"), enabled: editable, callback: () => { pxt.tickEvent("blocks.context.format", undefined, { interactiveConsent: true }); pxt.blocks.layout.flow(this, { useViewWidth: true }); } } options.push(formatCodeOption); if (pxt.appTarget.appTheme.blocksCollapsing) { // Option to collapse all top-level (enabled) blocks const collapseAllOption = { text: lf("Collapse Blocks"), enabled: topBlocks.length && topBlocks.find((b: Blockly.Block) => b.isEnabled() && !b.isCollapsed()) && editable, callback: () => { pxt.tickEvent("blocks.context.collapse", undefined, { interactiveConsent: true }); pxt.blocks.layout.setCollapsedAll(this, true); } } options.push(collapseAllOption); // Option to expand all collapsed blocks const expandAllOption = { text: lf("Expand Blocks"), enabled: topBlocks.length && topBlocks.find((b: Blockly.Block) => b.isEnabled() && b.isCollapsed()) && editable, callback: () => { pxt.tickEvent("blocks.context.expand", undefined, { interactiveConsent: true }); pxt.blocks.layout.setCollapsedAll(this, false); } } options.push(expandAllOption); } if (pxt.blocks.layout.screenshotEnabled()) { const screenshotOption = { text: lf("Snapshot"), enabled: topBlocks.length > 0 || topComments.length > 0, callback: () => { pxt.tickEvent("blocks.context.screenshot", undefined, { interactiveConsent: true }); pxt.blocks.layout.screenshotAsync(this, null, pxt.appTarget.appTheme?.embedBlocksInSnapshot) .then((uri) => { if (pxt.BrowserUtils.isSafari()) uri = uri.replace(/^data:image\/[^;]/, 'data:application/octet-stream'); BrowserUtils.browserDownloadDataUri( uri, 
`${pxt.appTarget.nickname || pxt.appTarget.id}-${lf("screenshot")}.png`); }); }, } options.push(screenshotOption); } if (pxt.appTarget.appTheme.workspaceSearch) { options.push({ text: lf("Find..."), enabled: topBlocks.length > 0, callback: () => { pxt.tickEvent("blocks.context.workspacesearch", undefined, { interactiveConsent: true }); this.getComponentManager()?.getComponent("workspaceSearch")?.open(); } }); } // custom options... if (onShowContextMenu) onShowContextMenu(this, options); }; // Get rid of bumping behavior (Blockly as any).Constants.Logic.LOGIC_COMPARE_ONCHANGE_MIXIN.onchange = function () { } } function initOnStart() { // on_start const onStartDef = pxt.blocks.getBlockDefinition(ts.pxtc.ON_START_TYPE); Blockly.Blocks[ts.pxtc.ON_START_TYPE] = { init: function () { this.jsonInit({ "message0": onStartDef.block["message0"], "args0": [ { "type": "input_dummy" }, { "type": "input_statement", "name": "HANDLER" } ], "colour": (pxt.appTarget.runtime ? pxt.appTarget.runtime.onStartColor : '') || pxt.toolbox.getNamespaceColor('loops') }); setHelpResources(this, ts.pxtc.ON_START_TYPE, onStartDef.name, onStartDef.tooltip, onStartDef.url, String((pxt.appTarget.runtime ? pxt.appTarget.runtime.onStartColor : '') || pxt.toolbox.getNamespaceColor('loops')), undefined, undefined, pxt.appTarget.runtime ? 
pxt.appTarget.runtime.onStartUnDeletable : false ); } }; Blockly.Blocks[pxtc.TS_STATEMENT_TYPE] = { init: function () { let that: GrayBlockStatement = this; that.setColour("#717171") that.setPreviousStatement(true); that.setNextStatement(true); that.setInputsInline(false); let pythonMode: boolean; let lines: string[]; that.domToMutation = (element: Element) => { const n = parseInt(element.getAttribute("numlines")); that.declaredVariables = element.getAttribute("declaredvars"); lines = []; for (let i = 0; i < n; i++) { const line = element.getAttribute("line" + i); lines.push(line); } // Add the initial TS inputs that.setPythonEnabled(false); }; that.mutationToDom = () => { let mutation = document.createElement("mutation"); if (lines) { lines.forEach((line, index) => mutation.setAttribute("line" + index, line)); mutation.setAttribute("numlines", lines.length.toString()); } if (that.declaredVariables) { mutation.setAttribute("declaredvars", this.declaredVariables); } return mutation; }; // Consumed by the webapp that.setPythonEnabled = (enabled: boolean) => { if (pythonMode === enabled) return; // Remove all inputs while (that.inputList.length) { that.removeInput(that.inputList[0].name); } pythonMode = enabled; if (enabled) { // This field must be named LINE0 because otherwise Blockly will crash // when trying to make an insertion marker. 
All insertion marker blocks // need to have the same fields as the real block, and this field will // always be created by domToMutation regardless of TS or Python mode that.appendDummyInput().appendField(Util.lf("<python code>"), "LINE0") that.setTooltip(lf("A Python statement that could not be converted to blocks")); } else { lines.forEach((line, index) => { that.appendDummyInput().appendField(line, "LINE" + index); }); that.setTooltip(lf("A JavaScript statement that could not be converted to blocks")); } } // Consumed by BlocklyCompiler that.getLines = () => lines; that.setEditable(false); setHelpResources(this, pxtc.TS_STATEMENT_TYPE, lf("JavaScript statement"), lf("A JavaScript statement that could not be converted to blocks"), '/blocks/javascript-blocks', '#717171' ); } }; Blockly.Blocks[pxtc.TS_OUTPUT_TYPE] = { init: function () { let that: GrayBlock = this; that.setColour("#717171") that.setPreviousStatement(false); that.setNextStatement(false); that.setOutput(true); that.setEditable(false); that.appendDummyInput().appendField(new pxtblockly.FieldTsExpression(""), "EXPRESSION"); that.setPythonEnabled = (enabled: boolean) => { (that.getField("EXPRESSION") as pxtblockly.FieldTsExpression).setPythonEnabled(enabled); if (enabled) { that.setTooltip(lf("A Python expression that could not be converted to blocks")); } else { that.setTooltip(lf("A JavaScript expression that could not be converted to blocks")); } } setHelpResources(that, pxtc.TS_OUTPUT_TYPE, lf("JavaScript expression"), lf("A JavaScript expression that could not be converted to blocks"), '/blocks/javascript-blocks', "#717171" ); } }; if (pxt.appTarget.runtime && pxt.appTarget.runtime.pauseUntilBlock) { const blockOptions = pxt.appTarget.runtime.pauseUntilBlock; const blockDef = pxt.blocks.getBlockDefinition(ts.pxtc.PAUSE_UNTIL_TYPE); Blockly.Blocks[pxtc.PAUSE_UNTIL_TYPE] = { init: function () { const color = blockOptions.color || pxt.toolbox.getNamespaceColor('loops'); this.jsonInit({ "message0": 
blockDef.block["message0"], "args0": [ { "type": "input_value", "name": "PREDICATE", "check": "Boolean" } ], "inputsInline": true, "previousStatement": null, "nextStatement": null, "colour": color }); setHelpResources(this, ts.pxtc.PAUSE_UNTIL_TYPE, blockDef.name, blockDef.tooltip, blockDef.url, color, undefined/*colourSecondary*/, undefined/*colourTertiary*/, false/*undeletable*/ ); } } } // pxt_controls_for_of const pxtControlsForOfId = "pxt_controls_for_of"; const pxtControlsForOfDef = pxt.blocks.getBlockDefinition(pxtControlsForOfId); Blockly.Blocks[pxtControlsForOfId] = { init: function () { this.jsonInit({ "message0": pxtControlsForOfDef.block["message0"], "args0": [ { "type": "input_value", "name": "VAR", "variable": pxtControlsForOfDef.block["variable"], "check": "Variable" }, { "type": "input_value", "name": "LIST", "check": ["Array", "String"] } ], "previousStatement": null, "nextStatement": null, "colour": pxt.toolbox.blockColors['loops'], "inputsInline": true }); this.appendStatementInput('DO') .appendField(pxtControlsForOfDef.block["appendField"]); let thisBlock = this; setHelpResources(this, pxtControlsForOfId, pxtControlsForOfDef.name, function () { return U.rlf(<string>pxtControlsForOfDef.tooltip, thisBlock.getInputTargetBlock('VAR') ? thisBlock.getInputTargetBlock('VAR').getField('VAR').getText() : ''); }, pxtControlsForOfDef.url, String(pxt.toolbox.getNamespaceColor('loops')) ); } }; // controls_for_of const controlsForOfId = "controls_for_of"; const controlsForOfDef = pxt.blocks.getBlockDefinition(controlsForOfId); Blockly.Blocks[controlsForOfId] = { init: function () { this.jsonInit({ "message0": controlsForOfDef.block["message0"], "args0": [ { "type": "field_variable", "name": "VAR", "variable": controlsForOfDef.block["variable"] // Please note that most multilingual characters // cannot be used as variable name at this point. // Translate or decide the default variable name // with care. 
}, { "type": "input_value", "name": "LIST", "check": "Array" } ], "previousStatement": null, "nextStatement": null, "colour": pxt.toolbox.blockColors['loops'], "inputsInline": true }); this.appendStatementInput('DO') .appendField(controlsForOfDef.block["appendField"]); let thisBlock = this; setHelpResources(this, controlsForOfId, controlsForOfDef.name, function () { return U.rlf(<string>controlsForOfDef.tooltip, thisBlock.getField('VAR').getText()); }, controlsForOfDef.url, String(pxt.toolbox.getNamespaceColor('loops')) ); } }; // lists_index_get const listsIndexGetId = "lists_index_get"; const listsIndexGetDef = pxt.blocks.getBlockDefinition(listsIndexGetId); Blockly.Blocks["lists_index_get"] = { init: function () { this.jsonInit({ "message0": listsIndexGetDef.block["message0"], "args0": [ { "type": "input_value", "name": "LIST", "check": "Array" }, { "type": "input_value", "name": "INDEX", "check": "Number" } ], "colour": pxt.toolbox.blockColors['arrays'], "outputShape": Blockly.OUTPUT_SHAPE_ROUND, "inputsInline": true }); this.setPreviousStatement(false); this.setNextStatement(false); this.setOutput(true); setBuiltinHelpInfo(this, listsIndexGetId); } }; // lists_index_set const listsIndexSetId = "lists_index_set"; const listsIndexSetDef = pxt.blocks.getBlockDefinition(listsIndexSetId); Blockly.Blocks[listsIndexSetId] = { init: function () { this.jsonInit({ "message0": listsIndexSetDef.block["message0"], "args0": [ { "type": "input_value", "name": "LIST", "check": "Array" }, { "type": "input_value", "name": "INDEX", "check": "Number" }, { "type": "input_value", "name": "VALUE", "check": null } ], "previousStatement": null, "nextStatement": null, "colour": pxt.toolbox.blockColors['arrays'], "inputsInline": true }); setBuiltinHelpInfo(this, listsIndexSetId); } }; } function initMath() { // math_op2 const mathOp2Id = "math_op2"; const mathOp2Def = pxt.blocks.getBlockDefinition(mathOp2Id); const mathOp2Tooltips = <Map<string>>mathOp2Def.tooltip; 
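/*
 * Note on the math_op2 block defined below: the "op" dropdown chooses between
 * the two-argument Math helpers, so a "min of 3 and 5" block is expected to
 * compile to the equivalent of Math.min(3, 5). A hypothetical sketch of that
 * mapping (the function name here is illustrative, not part of this file):
 *
 *     function mathOp2(op: "min" | "max", x: number, y: number): number {
 *         return op === "min" ? Math.min(x, y) : Math.max(x, y);
 *     }
 */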
Blockly.Blocks[mathOp2Id] = { init: function () { this.jsonInit({ "message0": lf("%1 of %2 and %3"), "args0": [ { "type": "field_dropdown", "name": "op", "options": [ [lf("{id:op}min"), "min"], [lf("{id:op}max"), "max"] ] }, { "type": "input_value", "name": "x", "check": "Number" }, { "type": "input_value", "name": "y", "check": "Number" } ], "inputsInline": true, "output": "Number", "outputShape": Blockly.OUTPUT_SHAPE_ROUND, "colour": pxt.toolbox.getNamespaceColor('math') }); let thisBlock = this; setHelpResources(this, mathOp2Id, mathOp2Def.name, function (block: any) { return mathOp2Tooltips[block.getFieldValue('op')]; }, mathOp2Def.url, pxt.toolbox.getNamespaceColor(mathOp2Def.category) ); } }; // math_op3 const mathOp3Id = "math_op3"; const mathOp3Def = pxt.blocks.getBlockDefinition(mathOp3Id); Blockly.Blocks[mathOp3Id] = { init: function () { this.jsonInit({ "message0": mathOp3Def.block["message0"], "args0": [ { "type": "input_value", "name": "x", "check": "Number" } ], "inputsInline": true, "output": "Number", "outputShape": Blockly.OUTPUT_SHAPE_ROUND, "colour": pxt.toolbox.getNamespaceColor('math') }); setBuiltinHelpInfo(this, mathOp3Id); } }; // builtin math_number, math_integer, math_whole_number, math_number_minmax //XXX Integer validation needed. 
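/*
 * One possible shape for the integer validation flagged in the XXX note above
 * (a hypothetical sketch, not wired into any field): a Blockly field validator
 * that accepts only an optionally signed run of digits and rejects everything
 * else by returning null.
 *
 *     const integerValidator = (text: string): string | null =>
 *         /^-?\d+$/.test(text) ? text : null;
 */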
const numberBlocks = ['math_number', 'math_integer', 'math_whole_number', 'math_number_minmax'] numberBlocks.forEach(num_id => { const mInfo = pxt.blocks.getBlockDefinition(num_id); installHelpResources( num_id, mInfo.name, mInfo.tooltip, mInfo.url, (Blockly as any).Colours.textField, (Blockly as any).Colours.textField, (Blockly as any).Colours.textField ); }) // builtin math_arithmetic const msg = Blockly.Msg; const mathArithmeticId = "math_arithmetic"; const mathArithmeticDef = pxt.blocks.getBlockDefinition(mathArithmeticId); const mathArithmeticTooltips = <Map<string>>mathArithmeticDef.tooltip; msg.MATH_ADDITION_SYMBOL = mathArithmeticDef.block["MATH_ADDITION_SYMBOL"]; msg.MATH_SUBTRACTION_SYMBOL = mathArithmeticDef.block["MATH_SUBTRACTION_SYMBOL"]; msg.MATH_MULTIPLICATION_SYMBOL = mathArithmeticDef.block["MATH_MULTIPLICATION_SYMBOL"]; msg.MATH_DIVISION_SYMBOL = mathArithmeticDef.block["MATH_DIVISION_SYMBOL"]; msg.MATH_POWER_SYMBOL = mathArithmeticDef.block["MATH_POWER_SYMBOL"]; installHelpResources( mathArithmeticId, mathArithmeticDef.name, function (block: any) { return mathArithmeticTooltips[block.getFieldValue('OP')]; }, mathArithmeticDef.url, pxt.toolbox.getNamespaceColor(mathArithmeticDef.category) ); // builtin math_modulo const mathModuloId = "math_modulo"; const mathModuloDef = pxt.blocks.getBlockDefinition(mathModuloId); msg.MATH_MODULO_TITLE = mathModuloDef.block["MATH_MODULO_TITLE"]; installBuiltinHelpInfo(mathModuloId); initMathOpBlock(); initMathRoundBlock(); } function initVariables() { // We only give types to "special" variables like enum members and we don't // want those showing up in the variable dropdown so filter the variables // that show up to only ones that have an empty type (Blockly.FieldVariable.prototype as any).getVariableTypes_ = () => [""]; let varname = lf("{id:var}item"); Blockly.Variables.flyoutCategory = function (workspace: Blockly.WorkspaceSvg) { let xmlList: HTMLElement[] = []; if 
(!pxt.appTarget.appTheme.hideFlyoutHeadings) {
    // Add the Heading label
    let headingLabel = createFlyoutHeadingLabel(lf("Variables"), pxt.toolbox.getNamespaceColor('variables'), pxt.toolbox.getNamespaceIcon('variables'));
    xmlList.push(headingLabel);
}
let button = document.createElement('button') as HTMLElement;
button.setAttribute('text', lf("Make a Variable..."));
button.setAttribute('callbackKey', 'CREATE_VARIABLE');
workspace.registerButtonCallback('CREATE_VARIABLE', function (button) {
    Blockly.Variables.createVariable(button.getTargetWorkspace());
});
xmlList.push(button);
let blockList = Blockly.Variables.flyoutCategoryBlocks(workspace) as HTMLElement[];
xmlList = xmlList.concat(blockList);
return xmlList;
};
Blockly.Variables.flyoutCategoryBlocks = function (workspace) {
    let variableModelList = workspace.getVariablesOfType('');
    let xmlList: HTMLElement[] = [];
    if (variableModelList.length > 0) {
        let mostRecentVariable = variableModelList[variableModelList.length - 1];
        variableModelList.sort(Blockly.VariableModel.compareByName);
        // variables getters first
        for (let i = 0; i < variableModelList.length; i++) {
            const variable = variableModelList[i];
            if (Blockly.Blocks['variables_get']) {
                let blockText = '<xml>' +
                    '<block type="variables_get" gap="8">' +
                    Blockly.Variables.generateVariableFieldXmlString(variable) +
                    '</block>' +
                    '</xml>';
                let block = Blockly.Xml.textToDom(blockText).firstChild as HTMLElement;
                xmlList.push(block);
            }
        }
        xmlList[xmlList.length - 1].setAttribute('gap', '24');
        if (Blockly.Blocks['variables_change'] || Blockly.Blocks['variables_set']) {
            xmlList.unshift(createFlyoutGroupLabel(lf("Your Variables")));
        }
        if (Blockly.Blocks['variables_change']) {
            let gap = Blockly.Blocks['variables_get'] ?
20 : 8; let blockText = '<xml>' + '<block type="variables_change" gap="' + gap + '">' + Blockly.Variables.generateVariableFieldXmlString(mostRecentVariable) + '</block>' + '</xml>'; let block = Blockly.Xml.textToDom(blockText).firstChild as HTMLElement; { let value = goog.dom.createDom('value'); value.setAttribute('name', 'VALUE'); let shadow = goog.dom.createDom('shadow'); shadow.setAttribute("type", "math_number"); value.appendChild(shadow); let field = goog.dom.createDom('field'); field.setAttribute('name', 'NUM'); field.appendChild(document.createTextNode("1")); shadow.appendChild(field); block.appendChild(value); } xmlList.unshift(block); } if (Blockly.Blocks['variables_set']) { let gap = Blockly.Blocks['variables_change'] ? 8 : 24; let blockText = '<xml>' + '<block type="variables_set" gap="' + gap + '">' + Blockly.Variables.generateVariableFieldXmlString(mostRecentVariable) + '</block>' + '</xml>'; let block = Blockly.Xml.textToDom(blockText).firstChild as HTMLElement; { let value = goog.dom.createDom('value'); value.setAttribute('name', 'VALUE'); let shadow = goog.dom.createDom('shadow'); shadow.setAttribute("type", "math_number"); value.appendChild(shadow); let field = goog.dom.createDom('field'); field.setAttribute('name', 'NUM'); field.appendChild(document.createTextNode("0")); shadow.appendChild(field); block.appendChild(value); } xmlList.unshift(block); } } return xmlList; }; // builtin variables_get const msg = Blockly.Msg; const variablesGetId = "variables_get"; const variablesGetDef = pxt.blocks.getBlockDefinition(variablesGetId); msg.VARIABLES_GET_CREATE_SET = variablesGetDef.block["VARIABLES_GET_CREATE_SET"]; installBuiltinHelpInfo(variablesGetId); const variablesReporterGetId = "variables_get_reporter"; installBuiltinHelpInfo(variablesReporterGetId); // Dropdown menu of variables_get msg.RENAME_VARIABLE = lf("Rename variable..."); msg.DELETE_VARIABLE = lf("Delete the \"%1\" variable"); msg.DELETE_VARIABLE_CONFIRMATION = lf("Delete %1 uses of the 
\"%2\" variable?"); msg.NEW_VARIABLE_DROPDOWN = lf("New variable..."); // builtin variables_set const variablesSetId = "variables_set"; const variablesSetDef = pxt.blocks.getBlockDefinition(variablesSetId); msg.VARIABLES_SET = variablesSetDef.block["VARIABLES_SET"]; msg.VARIABLES_DEFAULT_NAME = varname; msg.VARIABLES_SET_CREATE_GET = lf("Create 'get %1'"); installBuiltinHelpInfo(variablesSetId); // pxt variables_change const variablesChangeId = "variables_change"; const variablesChangeDef = pxt.blocks.getBlockDefinition(variablesChangeId); Blockly.Blocks[variablesChangeId] = { init: function () { this.jsonInit({ "message0": variablesChangeDef.block["message0"], "args0": [ { "type": "field_variable", "name": "VAR", "variable": varname }, { "type": "input_value", "name": "VALUE", "check": "Number" } ], "inputsInline": true, "previousStatement": null, "nextStatement": null, "colour": pxt.toolbox.getNamespaceColor('variables') }); setBuiltinHelpInfo(this, variablesChangeId); }, /** * Add menu option to create getter block for this variable * @param {!Array} options List of menu options to add to. 
* @this Blockly.Block */ customContextMenu: function (options: any[]) { if (!(this.inDebugWorkspace())) { let option: any = { enabled: this.workspace.remainingCapacity() > 0 }; let name = this.getField("VAR").getText(); option.text = lf("Create 'get {0}'", name) let xmlField = goog.dom.createDom('field', null, name); xmlField.setAttribute('name', 'VAR'); let xmlBlock = goog.dom.createDom('block', null, xmlField); xmlBlock.setAttribute('type', "variables_get"); option.callback = Blockly.ContextMenu.callbackFactory(this, xmlBlock); options.push(option); } } }; // New variable dialog msg.NEW_VARIABLE_TITLE = lf("New variable name:"); // Rename variable dialog msg.RENAME_VARIABLE_TITLE = lf("Rename all '%1' variables to:"); } function initFunctions() { const msg = Blockly.Msg; // New functions implementation messages msg.FUNCTION_CREATE_NEW = lf("Make a Function..."); msg.FUNCTION_WARNING_DUPLICATE_ARG = lf("Functions cannot use the same argument name more than once."); msg.FUNCTION_WARNING_ARG_NAME_IS_FUNCTION_NAME = lf("Argument names must not be the same as the function name."); msg.FUNCTION_WARNING_EMPTY_NAME = lf("Function and argument names cannot be empty."); msg.FUNCTIONS_DEFAULT_FUNCTION_NAME = lf("doSomething"); msg.FUNCTIONS_DEFAULT_BOOLEAN_ARG_NAME = lf("bool"); msg.FUNCTIONS_DEFAULT_STRING_ARG_NAME = lf("text"); msg.FUNCTIONS_DEFAULT_NUMBER_ARG_NAME = lf("num"); msg.FUNCTIONS_DEFAULT_CUSTOM_ARG_NAME = lf("arg"); msg.PROCEDURES_HUE = pxt.toolbox.getNamespaceColor("functions"); msg.REPORTERS_HUE = pxt.toolbox.getNamespaceColor("variables"); // builtin procedures_defnoreturn const proceduresDefId = "procedures_defnoreturn"; const proceduresDef = pxt.blocks.getBlockDefinition(proceduresDefId); msg.PROCEDURES_DEFNORETURN_TITLE = proceduresDef.block["PROCEDURES_DEFNORETURN_TITLE"]; (msg as any).PROCEDURE_ALREADY_EXISTS = proceduresDef.block["PROCEDURE_ALREADY_EXISTS"]; (Blockly.Blocks['procedures_defnoreturn']).init = function () { let nameField = new 
Blockly.FieldTextInput('', (Blockly as any).Procedures.rename);
//nameField.setSpellcheck(false); //TODO
this.appendDummyInput()
    .appendField((Blockly as any).Msg.PROCEDURES_DEFNORETURN_TITLE)
    .appendField(nameField, 'NAME')
    .appendField('', 'PARAMS');
this.setColour(pxt.toolbox.getNamespaceColor('functions'));
this.arguments_ = [];
this.argumentVarModels_ = [];
this.setStartHat(true);
this.setStatements_(true);
this.statementConnection_ = null;
};
installBuiltinHelpInfo(proceduresDefId);

// builtin procedures_callnoreturn
const proceduresCallId = "procedures_callnoreturn";
const proceduresCallDef = pxt.blocks.getBlockDefinition(proceduresCallId);
msg.PROCEDURES_CALLRETURN_TOOLTIP = proceduresDef.tooltip.toString();
Blockly.Blocks['procedures_callnoreturn'] = {
    init: function () {
        let nameField = new pxtblockly.FieldProcedure('');
        this.appendDummyInput('TOPROW')
            .appendField(proceduresCallDef.block['PROCEDURES_CALLNORETURN_TITLE'])
            .appendField(nameField, 'NAME');
        this.setPreviousStatement(true);
        this.setNextStatement(true);
        this.setColour(pxt.toolbox.getNamespaceColor('functions'));
        this.arguments_ = [];
        this.quarkConnections_ = {};
        this.quarkIds_ = null;
    },
    /**
     * Returns the name of the procedure this block calls.
     * @return {string} Procedure name.
     * @this Blockly.Block
     */
    getProcedureCall: function () {
        // The NAME field is guaranteed to exist, null will never be returned.
        return /** @type {string} */ (this.getFieldValue('NAME'));
    },
    /**
     * Notification that a procedure is renaming.
     * If the name matches this block's procedure, rename it.
     * @param {string} oldName Previous name of procedure.
     * @param {string} newName Renamed procedure.
     * @this Blockly.Block
     */
    renameProcedure: function (oldName: string, newName: string) {
        if (Blockly.Names.equals(oldName, this.getProcedureCall())) {
            this.setFieldValue(newName, 'NAME');
        }
    },
    /**
     * Procedure calls cannot exist without the corresponding procedure
     * definition. Enforce this link whenever an event is fired.
* @param {!Blockly.Events.Abstract} event Change event. * @this Blockly.Block */ onchange: function (event: any) { if (!this.workspace || this.workspace.isFlyout || this.isInsertionMarker()) { // Block is deleted or is in a flyout or insertion marker. return; } if (event.type == Blockly.Events.CREATE && event.ids.indexOf(this.id) != -1) { // Look for the case where a procedure call was created (usually through // paste) and there is no matching definition. In this case, create // an empty definition block with the correct signature. let name = this.getProcedureCall(); let def = Blockly.Procedures.getDefinition(name, this.workspace); if (def && (def.type != this.defType_ || JSON.stringify((def as any).arguments_) != JSON.stringify(this.arguments_))) { // The signatures don't match. def = null; } if (!def) { Blockly.Events.setGroup(event.group); /** * Create matching definition block. * <xml> * <block type="procedures_defreturn" x="10" y="20"> * <field name="NAME">test</field> * </block> * </xml> */ let xml = Blockly.utils.xml.createElement('xml'); let block = Blockly.utils.xml.createElement('block'); block.setAttribute('type', this.defType_); let xy = this.getRelativeToSurfaceXY(); let x = xy.x + (Blockly as any).SNAP_RADIUS * (this.RTL ? -1 : 1); let y = xy.y + (Blockly as any).SNAP_RADIUS * 2; block.setAttribute('x', x); block.setAttribute('y', y); let field = Blockly.utils.xml.createElement('field'); field.setAttribute('name', 'NAME'); field.appendChild(document.createTextNode(this.getProcedureCall())); block.appendChild(field); xml.appendChild(block); pxt.blocks.domToWorkspaceNoEvents(xml, this.workspace); Blockly.Events.setGroup(false); } } else if (event.type == Blockly.Events.DELETE) { // Look for the case where a procedure definition has been deleted, // leaving this block (a procedure call) orphaned. In this case, delete // the orphan. 
let name = this.getProcedureCall(); let def = Blockly.Procedures.getDefinition(name, this.workspace); if (!def) { Blockly.Events.setGroup(event.group); this.dispose(true, false); Blockly.Events.setGroup(false); } } }, mutationToDom: function () { const mutationElement = document.createElement("mutation"); mutationElement.setAttribute("name", this.getProcedureCall()); return mutationElement; }, domToMutation: function (element: Element) { const name = element.getAttribute("name"); this.renameProcedure(this.getProcedureCall(), name); }, /** * Add menu option to find the definition block for this call. * @param {!Array} options List of menu options to add to. * @this Blockly.Block */ customContextMenu: function (options: any) { let option: any = { enabled: true }; option.text = (Blockly as any).Msg.PROCEDURES_HIGHLIGHT_DEF; let name = this.getProcedureCall(); let workspace = this.workspace; option.callback = function () { let def = Blockly.Procedures.getDefinition(name, workspace) as Blockly.BlockSvg; if (def) def.select(); }; options.push(option); }, defType_: 'procedures_defnoreturn' } installBuiltinHelpInfo(proceduresCallId); // New functions implementation function_definition const functionDefinitionId = "function_definition"; const functionDefinition = pxt.blocks.getBlockDefinition(functionDefinitionId); msg.FUNCTIONS_EDIT_OPTION = functionDefinition.block["FUNCTIONS_EDIT_OPTION"]; installBuiltinHelpInfo(functionDefinitionId); // New functions implementation function_call const functionCallId = "function_call"; const functionCall = pxt.blocks.getBlockDefinition(functionCallId); msg.FUNCTIONS_CALL_TITLE = functionCall.block["FUNCTIONS_CALL_TITLE"]; msg.FUNCTIONS_GO_TO_DEFINITION_OPTION = functionCall.block["FUNCTIONS_GO_TO_DEFINITION_OPTION"]; installBuiltinHelpInfo(functionCallId); installBuiltinHelpInfo("function_call_output"); const functionReturnId = "function_return"; Blockly.Blocks[functionReturnId] = { init: function () { initReturnStatement(this); }, 
onchange: function (event) { const block = this as Blockly.Block; if (!block.workspace || (block.workspace as Blockly.WorkspaceSvg).isFlyout) { // Block is deleted or is in a flyout. return; } const thisWasCreated = event.type === Blockly.Events.BLOCK_CREATE && event.ids.indexOf(block.id) != -1; const thisWasDragged = event.type === Blockly.Events.END_DRAG && event.allNestedIds.indexOf(block.id) != -1; if (thisWasCreated || thisWasDragged) { const rootBlock = block.getRootBlock(); const isTopBlock = rootBlock.type === functionReturnId; if (isTopBlock || rootBlock.previousConnection != null) { // Statement is by itself on the workspace, or it is slotted into a // stack of statements that is not attached to a function or event. Let // it exist until it is connected to a function return; } if (rootBlock.type !== functionDefinitionId) { // Not a function block, so disconnect Blockly.Events.setGroup(event.group); block.previousConnection.disconnect(); Blockly.Events.setGroup(false); } } } }; installBuiltinHelpInfo(functionReturnId); Blockly.Procedures.flyoutCategory = function (workspace: Blockly.WorkspaceSvg) { let xmlList: HTMLElement[] = []; if (!pxt.appTarget.appTheme.hideFlyoutHeadings) { // Add the Heading label let headingLabel = createFlyoutHeadingLabel(lf("Functions"), pxt.toolbox.getNamespaceColor('functions'), pxt.toolbox.getNamespaceIcon('functions'), 'blocklyFlyoutIconfunctions'); xmlList.push(headingLabel); } const newFunction = lf("Make a Function..."); const newFunctionTitle = lf("New function name:"); // Add the "Make a function" button let button = Blockly.utils.xml.createElement('button'); button.setAttribute('text', newFunction); button.setAttribute('callbackKey', 'CREATE_FUNCTION'); let createFunction = (name: string) => { /** * Create matching definition block. 
* <xml>
         *   <block type="procedures_defnoreturn" x="10" y="20">
         *     <field name="NAME">test</field>
         *   </block>
         * </xml>
         */
        let topBlock = workspace.getTopBlocks(true)[0];
        let x = 10, y = 10;
        if (topBlock) {
            let xy = topBlock.getRelativeToSurfaceXY();
            x = xy.x + (Blockly as any).SNAP_RADIUS * (topBlock.RTL ? -1 : 1);
            y = xy.y + (Blockly as any).SNAP_RADIUS * 2;
        }
        let xml = Blockly.utils.xml.createElement('xml');
        let block = Blockly.utils.xml.createElement('block');
        block.setAttribute('type', 'procedures_defnoreturn');
        block.setAttribute('x', String(x));
        block.setAttribute('y', String(y));
        let field = Blockly.utils.xml.createElement('field');
        field.setAttribute('name', 'NAME');
        field.appendChild(document.createTextNode(name));
        block.appendChild(field);
        xml.appendChild(block);
        let newBlockIds = pxt.blocks.domToWorkspaceNoEvents(xml, workspace);
        // Close flyout and highlight block
        Blockly.hideChaff();
        let newBlock = workspace.getBlockById(newBlockIds[0]) as Blockly.BlockSvg;
        newBlock.select();
        // Center on the new block so we know where it is
        workspace.centerOnBlock(newBlock.id);
    }
    workspace.registerButtonCallback('CREATE_FUNCTION', function (button) {
        let promptAndCheckWithAlert = (defaultName: string) => {
            Blockly.prompt(newFunctionTitle, defaultName, function (newFunc) {
                pxt.tickEvent('blocks.makeafunction');
                // Merge runs of whitespace. Strip leading and trailing whitespace.
                // Beyond this, all names are legal.
                if (newFunc) {
                    newFunc = newFunc.replace(/[\s\xa0]+/g, ' ').replace(/^ | $/g, '');
                    if (newFunc == newFunction) {
                        // Ok, not ALL names are legal...
newFunc = null; } } if (newFunc) { if (workspace.getVariable(newFunc)) { Blockly.alert((Blockly as any).Msg.VARIABLE_ALREADY_EXISTS.replace('%1', newFunc.toLowerCase()), function () { promptAndCheckWithAlert(newFunc); // Recurse }); } else if (!Blockly.Procedures.isLegalName_(newFunc, workspace)) { Blockly.alert((Blockly.Msg as any).PROCEDURE_ALREADY_EXISTS.replace('%1', newFunc.toLowerCase()), function () { promptAndCheckWithAlert(newFunc); // Recurse }); } else { createFunction(newFunc); } } }); }; promptAndCheckWithAlert('doSomething'); }); xmlList.push(button as HTMLElement); function populateProcedures(procedureList: any, templateName: any) { for (let i = 0; i < procedureList.length; i++) { let name = procedureList[i][0]; let args = procedureList[i][1]; // <block type="procedures_callnoreturn" gap="16"> // <field name="NAME">name</field> // </block> let block = Blockly.utils.xml.createElement('block'); block.setAttribute('type', templateName); block.setAttribute('gap', '16'); block.setAttribute('colour', pxt.toolbox.getNamespaceColor('functions')); let field = goog.dom.createDom('field', null, name); field.setAttribute('name', 'NAME'); block.appendChild(field); xmlList.push(block as HTMLElement); } } let tuple = Blockly.Procedures.allProcedures(workspace); populateProcedures(tuple[0], 'procedures_callnoreturn'); return xmlList; } // Patch new functions flyout to add the heading const oldFlyout = Blockly.Functions.flyoutCategory; Blockly.Functions.flyoutCategory = (workspace) => { const elems = oldFlyout(workspace); if (elems.length > 1) { let returnBlock = mkReturnStatementBlock(); // Add divider elems.splice(1, 0, createFlyoutGroupLabel("Your Functions")); // Insert after the "make a function" button elems.splice(1, 0, returnBlock as HTMLElement); } const functionsWithReturn = Blockly.Functions.getAllFunctionDefinitionBlocks(workspace) .filter(def => def.getDescendants(false).some(child => child.type === "function_return" && 
child.getInputTargetBlock("RETURN_VALUE"))) .map(def => def.getField("function_name").getText()) const headingLabel = createFlyoutHeadingLabel(lf("Functions"), pxt.toolbox.getNamespaceColor('functions'), pxt.toolbox.getNamespaceIcon('functions'), 'blocklyFlyoutIconfunctions'); elems.unshift(headingLabel); const res: Element[] = []; for (const e of elems) { res.push(e); if (e.getAttribute("type") === "function_call") { const mutation = e.children.item(0); if (mutation) { const name = mutation.getAttribute("name"); if (functionsWithReturn.some(n => n === name)) { const clone = e.cloneNode(true) as HTMLElement; clone.setAttribute("type", "function_call_output"); res.push(clone); } } } } return res; }; // Configure function editor argument icons const iconsMap: pxt.Map<string> = { number: pxt.blocks.defaultIconForArgType("number"), boolean: pxt.blocks.defaultIconForArgType("boolean"), string: pxt.blocks.defaultIconForArgType("string"), Array: pxt.blocks.defaultIconForArgType("Array") }; const customNames: pxsim.Map<string> = {}; const functionOptions = pxt.appTarget.runtime && pxt.appTarget.runtime.functionsOptions; if (functionOptions && functionOptions.extraFunctionEditorTypes) { functionOptions.extraFunctionEditorTypes.forEach(t => { iconsMap[t.typeName] = t.icon || pxt.blocks.defaultIconForArgType(); if (t.defaultName) { customNames[t.typeName] = t.defaultName; } }); } Blockly.PXTBlockly.FunctionUtils.argumentIcons = iconsMap; Blockly.PXTBlockly.FunctionUtils.argumentDefaultNames = customNames; if (Blockly.Blocks["argument_reporter_custom"]) { // The logic for setting the output check relies on the internals of PXT // too much to be refactored into pxt-blockly, so we need to monkey patch // it here (Blockly.Blocks["argument_reporter_custom"]).domToMutation = function (xmlElement: Element) { const typeName = xmlElement.getAttribute('typename'); this.typeName_ = typeName; setOutputCheck(this, typeName, cachedBlockInfo); }; } /** * Make a context menu option for 
creating a function call block. * This appears in the context menu for function definitions. * @param {!Blockly.BlockSvg} block The block where the right-click originated. * @return {!Object} A menu option, containing text, enabled, and a callback. * @package */ const makeCreateCallOptionOriginal = (Blockly as any).Functions.makeCreateCallOption; // needs to exist or makeCreateCallOptionOriginal will throw an exception Blockly.Msg.FUNCTIONS_CREATE_CALL_OPTION = ""; (Blockly as any).Functions.makeCreateCallOption = function (block: Blockly.Block) { let option = makeCreateCallOptionOriginal(block); let functionName = block.getField("function_name").getText(); option.text = Util.lf("Create 'call {0}'", functionName); return option; } } function initLogic() { const msg = Blockly.Msg; // builtin controls_if const controlsIfId = "controls_if"; const controlsIfDef = pxt.blocks.getBlockDefinition(controlsIfId); const controlsIfTooltips = <Map<string>>controlsIfDef.tooltip; msg.CONTROLS_IF_MSG_IF = controlsIfDef.block["CONTROLS_IF_MSG_IF"]; msg.CONTROLS_IF_MSG_THEN = controlsIfDef.block["CONTROLS_IF_MSG_THEN"]; msg.CONTROLS_IF_MSG_ELSE = controlsIfDef.block["CONTROLS_IF_MSG_ELSE"]; msg.CONTROLS_IF_MSG_ELSEIF = controlsIfDef.block["CONTROLS_IF_MSG_ELSEIF"]; msg.CONTROLS_IF_TOOLTIP_1 = controlsIfTooltips["CONTROLS_IF_TOOLTIP_1"]; msg.CONTROLS_IF_TOOLTIP_2 = controlsIfTooltips["CONTROLS_IF_TOOLTIP_2"]; msg.CONTROLS_IF_TOOLTIP_3 = controlsIfTooltips["CONTROLS_IF_TOOLTIP_3"]; msg.CONTROLS_IF_TOOLTIP_4 = controlsIfTooltips["CONTROLS_IF_TOOLTIP_4"]; installBuiltinHelpInfo(controlsIfId); // builtin logic_compare const logicCompareId = "logic_compare"; const logicCompareDef = pxt.blocks.getBlockDefinition(logicCompareId); const logicCompareTooltips = <Map<string>>logicCompareDef.tooltip; msg.LOGIC_COMPARE_TOOLTIP_EQ = logicCompareTooltips["LOGIC_COMPARE_TOOLTIP_EQ"]; msg.LOGIC_COMPARE_TOOLTIP_NEQ = logicCompareTooltips["LOGIC_COMPARE_TOOLTIP_NEQ"]; msg.LOGIC_COMPARE_TOOLTIP_LT = 
logicCompareTooltips["LOGIC_COMPARE_TOOLTIP_LT"]; msg.LOGIC_COMPARE_TOOLTIP_LTE = logicCompareTooltips["LOGIC_COMPARE_TOOLTIP_LTE"]; msg.LOGIC_COMPARE_TOOLTIP_GT = logicCompareTooltips["LOGIC_COMPARE_TOOLTIP_GT"]; msg.LOGIC_COMPARE_TOOLTIP_GTE = logicCompareTooltips["LOGIC_COMPARE_TOOLTIP_GTE"]; installBuiltinHelpInfo(logicCompareId); // builtin logic_operation const logicOperationId = "logic_operation"; const logicOperationDef = pxt.blocks.getBlockDefinition(logicOperationId); const logicOperationTooltips = <Map<string>>logicOperationDef.tooltip; msg.LOGIC_OPERATION_AND = logicOperationDef.block["LOGIC_OPERATION_AND"]; msg.LOGIC_OPERATION_OR = logicOperationDef.block["LOGIC_OPERATION_OR"]; msg.LOGIC_OPERATION_TOOLTIP_AND = logicOperationTooltips["LOGIC_OPERATION_TOOLTIP_AND"]; msg.LOGIC_OPERATION_TOOLTIP_OR = logicOperationTooltips["LOGIC_OPERATION_TOOLTIP_OR"]; installBuiltinHelpInfo(logicOperationId); // builtin logic_negate const logicNegateId = "logic_negate"; const logicNegateDef = pxt.blocks.getBlockDefinition(logicNegateId); msg.LOGIC_NEGATE_TITLE = logicNegateDef.block["LOGIC_NEGATE_TITLE"]; installBuiltinHelpInfo(logicNegateId); // builtin logic_boolean const logicBooleanId = "logic_boolean"; const logicBooleanDef = pxt.blocks.getBlockDefinition(logicBooleanId); msg.LOGIC_BOOLEAN_TRUE = logicBooleanDef.block["LOGIC_BOOLEAN_TRUE"]; msg.LOGIC_BOOLEAN_FALSE = logicBooleanDef.block["LOGIC_BOOLEAN_FALSE"]; installBuiltinHelpInfo(logicBooleanId); } function initText() { // builtin text const textInfo = pxt.blocks.getBlockDefinition('text'); installHelpResources('text', textInfo.name, textInfo.tooltip, textInfo.url, (Blockly as any).Colours.textField, (Blockly as any).Colours.textField, (Blockly as any).Colours.textField); // builtin text_length const msg = Blockly.Msg; const textLengthId = "text_length"; const textLengthDef = pxt.blocks.getBlockDefinition(textLengthId); msg.TEXT_LENGTH_TITLE = textLengthDef.block["TEXT_LENGTH_TITLE"]; // We have to override 
this block definition because the builtin block // allows both Strings and Arrays in its input check and that confuses // our Blockly compiler let block = Blockly.Blocks[textLengthId]; block.init = function () { this.jsonInit({ "message0": msg.TEXT_LENGTH_TITLE, "args0": [ { "type": "input_value", "name": "VALUE", "check": ['String'] } ], "output": 'Number', "outputShape": Blockly.OUTPUT_SHAPE_ROUND }); } installBuiltinHelpInfo(textLengthId); // builtin text_join const textJoinId = "text_join"; const textJoinDef = pxt.blocks.getBlockDefinition(textJoinId); msg.TEXT_JOIN_TITLE_CREATEWITH = textJoinDef.block["TEXT_JOIN_TITLE_CREATEWITH"]; installBuiltinHelpInfo(textJoinId); } function initDebugger() { Blockly.Blocks[pxtc.TS_DEBUGGER_TYPE] = { init: function () { let that: Blockly.Block = this; that.setColour(pxt.toolbox.getNamespaceColor('debug')) that.setPreviousStatement(true); that.setNextStatement(true); that.setInputsInline(false); that.appendDummyInput('ON_OFF') .appendField(new Blockly.FieldLabel(lf("breakpoint"), undefined), "DEBUGGER") .appendField(new pxtblockly.FieldBreakpoint("1", { 'type': 'number' } as any), "ON_OFF"); setHelpResources(this, pxtc.TS_DEBUGGER_TYPE, lf("Debugger statement"), lf("A debugger statement invokes any available debugging functionality"), '/javascript/debugger', pxt.toolbox.getNamespaceColor('debug') ); } }; } function initComments() { Blockly.Msg.WORKSPACE_COMMENT_DEFAULT_TEXT = ''; } function initTooltip() { const renderTip = (el: any) => { if (el.disabled) return lf("This block is disabled and will not run. Attach this block to an event to enable it.") let tip = el.tooltip; while (goog.isFunction(tip)) { tip = tip(el); } return tip; } /** * Override Blockly tooltip rendering with our own. 
* TODO shakao check if tooltip can be modified in a cleaner way * @private */ (Blockly.Tooltip as any).show_ = function () { const BlocklyTooltip = Blockly.Tooltip as any; BlocklyTooltip.poisonedElement_ = BlocklyTooltip.element_; if (!Blockly.Tooltip.DIV) { return; } // Erase all existing text. goog.dom.removeChildren(/** @type {!Element} */(Blockly.Tooltip.DIV)); // Get the new text. const card = BlocklyTooltip.element_.codeCard as pxt.CodeCard; function render() { let rtl = BlocklyTooltip.element_.RTL; let windowSize = goog.dom.getViewportSize(); // Display the tooltip. let tooltip = Blockly.Tooltip.DIV as HTMLElement; tooltip.style.direction = rtl ? 'rtl' : 'ltr'; tooltip.style.display = 'block'; Blockly.Tooltip.visible = true; // Move the tooltip to just below the cursor. let anchorX = BlocklyTooltip.lastX_; if (rtl) { anchorX -= Blockly.Tooltip.OFFSET_X + tooltip.offsetWidth; } else { anchorX += Blockly.Tooltip.OFFSET_X; } let anchorY = BlocklyTooltip.lastY_ + Blockly.Tooltip.OFFSET_Y; if (anchorY + tooltip.offsetHeight > windowSize.height + window.scrollY) { // Falling off the bottom of the screen; shift the tooltip up. anchorY -= tooltip.offsetHeight + 2 * Blockly.Tooltip.OFFSET_Y; } if (rtl) { // Prevent falling off left edge in RTL mode. anchorX = Math.max(Blockly.Tooltip.MARGINS - window.scrollX, anchorX); } else { if (anchorX + tooltip.offsetWidth > windowSize.width + window.scrollX - 2 * Blockly.Tooltip.MARGINS) { // Falling off the right edge of the screen; // clamp the tooltip on the edge. anchorX = windowSize.width - tooltip.offsetWidth - 2 * Blockly.Tooltip.MARGINS; } } tooltip.style.top = anchorY + 'px'; tooltip.style.left = anchorX + 'px'; } if (card) { const cardEl = pxt.docs.codeCard.render({ header: renderTip(BlocklyTooltip.element_) }) Blockly.Tooltip.DIV.appendChild(cardEl); render(); } else { let tip = renderTip(BlocklyTooltip.element_); tip = Blockly.utils._string.wrap(tip, Blockly.Tooltip.LIMIT); // Create new text, line by line. 
let lines = tip.split('\n'); for (let i = 0; i < lines.length; i++) { let div = document.createElement('div'); div.appendChild(document.createTextNode(lines[i])); Blockly.Tooltip.DIV.appendChild(div); } render(); } } } function removeBlock(fn: pxtc.SymbolInfo) { delete Blockly.Blocks[fn.attributes.blockId]; delete cachedBlocks[fn.attributes.blockId]; } /** * <block type="pxt_wait_until"> * <value name="PREDICATE"> * <shadow type="logic_boolean"> * <field name="BOOL">TRUE</field> * </shadow> * </value> * </block> */ export function mkPredicateBlock(type: string) { const block = document.createElement("block"); block.setAttribute("type", type); const value = document.createElement("value"); value.setAttribute("name", "PREDICATE"); block.appendChild(value); const shadow = mkFieldBlock("logic_boolean", "BOOL", "TRUE", true); value.appendChild(shadow); return block; } export function mkFieldBlock(type: string, fieldName: string, fieldValue: string, isShadow: boolean) { const fieldBlock = document.createElement(isShadow ? 
"shadow" : "block"); fieldBlock.setAttribute("type", Util.htmlEscape(type)); const field = document.createElement("field"); field.setAttribute("name", Util.htmlEscape(fieldName)); field.textContent = Util.htmlEscape(fieldValue); fieldBlock.appendChild(field); return fieldBlock; } export function mkReturnStatementBlock() { const block = document.createElement("block"); block.setAttribute("type", "function_return"); const value = document.createElement("value"); value.setAttribute("name", "RETURN_VALUE"); block.appendChild(value); const shadow = mkFieldBlock("math_number", "NUM", "0", true); value.appendChild(shadow); return block; } let jresIconCache: Map<string> = {}; function iconToFieldImage(id: string): Blockly.FieldImage { let url = jresIconCache[id]; if (!url) { pxt.log(`missing jres icon ${id}`) return undefined; } return new Blockly.FieldImage(url, 40, 40, '', null, Util.isUserLanguageRtl()); } function initJresIcons(blockInfo: pxtc.BlocksInfo) { jresIconCache = {}; // clear previous cache const jres = blockInfo.apis.jres; if (!jres) return; Object.keys(jres).forEach((jresId) => { const jresObject = jres[jresId]; if (jresObject && jresObject.icon) jresIconCache[jresId] = jresObject.icon; }) } function splitInputs(def: pxtc.ParsedBlockDef): pxtc.BlockContentPart[][] { const res: pxtc.BlockContentPart[][] = []; let current: pxtc.BlockContentPart[] = []; def.parts.forEach(part => { switch (part.kind) { case "break": newInput(); break; case "param": current.push(part); newInput(); break; case "image": case "label": current.push(part); break; } }); newInput(); return res; function newInput() { if (current.length) { res.push(current); current = []; } } } function namedField(field: Blockly.Field, name: string): NamedField { return { field, name }; } function getEnumDropdownValues(apis: pxtc.ApisInfo, enumName: string) { return pxt.Util.values(apis.byQName).filter(sym => sym.namespace === enumName && !sym.attributes.blockHidden); } export function 
getFixedInstanceDropdownValues(apis: pxtc.ApisInfo, qName: string) { const symbols = pxt.Util.values(apis.byQName).filter(sym => sym.kind === pxtc.SymbolKind.Variable && sym.attributes.fixedInstance && isSubtype(apis, sym.retType, qName)) .sort((l,r) => (r.attributes.weight || 50) - (l.attributes.weight || 50)) return symbols } export function generateIcons(instanceSymbols: pxtc.SymbolInfo[]) { const imgConv = new ImageConverter(); instanceSymbols.forEach(v => { if (v.attributes.jresURL && !v.attributes.iconURL && U.startsWith(v.attributes.jresURL, "data:image/x-mkcd-f")) { v.attributes.iconURL = imgConv.convert(v.attributes.jresURL) } }); } function getConstantDropdownValues(apis: pxtc.ApisInfo, qName: string) { return pxt.Util.values(apis.byQName).filter(sym => sym.attributes.blockIdentity === qName); } // Trims off a single space from beginning and end (if present) function removeOuterSpace(str: string) { if (str === " ") { return ""; } else if (str.length > 1) { const startSpace = str.charAt(0) == " "; const endSpace = str.charAt(str.length - 1) == " "; if (startSpace || endSpace) { return str.substring(startSpace ? 1 : 0, endSpace ? 
str.length - 1 : str.length); } } return str; } /** * Blockly variable fields can't be set directly; you either have to use the * variable ID or set the value of the model and not the field */ export function setVarFieldValue(block: Blockly.Block, fieldName: string, newName: string) { const varField = block.getField(fieldName); // Check for an existing model with this name; otherwise we'll create // a second variable with the same name and it will show up twice in the UI const vars = block.workspace.getAllVariables(); let foundIt = false; if (vars && vars.length) { for (let v = 0; v < vars.length; v++) { const model = vars[v]; if (model.name === newName) { varField.setValue(model.getId()); foundIt = true; } } } if (!foundIt) { (varField as any).initModel(); const model = (varField as any).getVariable(); model.name = newName; varField.setValue(model.getId()); } } export function getBlockData(block: Blockly.Block): PXTBlockData { if (!block.data) { return { commentRefs: [], fieldData: {} }; } if (/^(?:\d+;?)+$/.test(block.data)) { return { commentRefs: block.data.split(";"), fieldData: {} } } return JSON.parse(block.data); } export function setBlockData(block: Blockly.Block, data: PXTBlockData) { block.data = JSON.stringify(data); } export function setBlockDataForField(block: Blockly.Block, field: string, data: string) { const blockData = getBlockData(block); blockData.fieldData[field] = data; setBlockData(block, blockData); } export function getBlockDataForField(block: Blockly.Block, field: string) { return getBlockData(block).fieldData[field]; } export class PxtWorkspaceSearch extends WorkspaceSearch { protected createDom_() { super.createDom_(); this.addEvent_(this.workspace_.getInjectionDiv(), "click", this, (e: any) => { if (this.htmlDiv_.style.display == "flex" && !this.htmlDiv_.contains(e.target)) { this.close() } }); } protected highlightSearchGroup_(blocks: Blockly.BlockSvg[]) { blocks.forEach((block) => { const blockPath = block.pathObject.svgPath; 
Blockly.utils.dom.addClass(blockPath, 'blockly-ws-search-highlight-pxt'); }); } protected unhighlightSearchGroup_(blocks: Blockly.BlockSvg[]) { blocks.forEach((block) => { const blockPath = block.pathObject.svgPath; Blockly.utils.dom.removeClass(blockPath, 'blockly-ws-search-highlight-pxt'); }); } /** * https://github.com/google/blockly-samples/blob/master/plugins/workspace-search/src/WorkspaceSearch.js#L633 * * Modified to center offscreen blocks. */ protected scrollToVisible_(block: Blockly.BlockSvg) { if (!this.workspace_.isMovable()) { // Cannot scroll to block in a non-movable workspace. return; } // XY is in workspace coordinates. const xy = block.getRelativeToSurfaceXY(); const scale = this.workspace_.scale; // Block bounds in pixels relative to the workspace origin (0,0 is centre). const width = block.width * scale; const height = block.height * scale; const top = xy.y * scale; const bottom = (xy.y + block.height) * scale; // In RTL the block's position is the top right of the block, not top left. const left = this.workspace_.RTL ? xy.x * scale - width : xy.x * scale; const right = this.workspace_.RTL ? 
xy.x * scale : xy.x * scale + width; const metrics = this.workspace_.getMetrics(); let targetLeft = metrics.viewLeft; const overflowLeft = left < metrics.viewLeft; const overflowRight = right > metrics.viewLeft + metrics.viewWidth; const wideBlock = width > metrics.viewWidth; if ((!wideBlock && overflowLeft) || (wideBlock && !this.workspace_.RTL)) { // Scroll to show left side of block targetLeft = left; } else if ((!wideBlock && overflowRight) || (wideBlock && this.workspace_.RTL)) { // Scroll to show right side of block targetLeft = right - metrics.viewWidth; } let targetTop = metrics.viewTop; const overflowTop = top < metrics.viewTop; const overflowBottom = bottom > metrics.viewTop + metrics.viewHeight; const tallBlock = height > metrics.viewHeight; if (overflowTop || (tallBlock && overflowBottom)) { // Scroll to show top of block targetTop = top; } else if (overflowBottom) { // Scroll to show bottom of block targetTop = bottom - metrics.viewHeight; } if (targetLeft !== metrics.viewLeft || targetTop !== metrics.viewTop) { const activeEl = document.activeElement as HTMLElement; if (wideBlock || tallBlock) { this.workspace_.scroll(-targetLeft, -targetTop); } else { this.workspace_.centerOnBlock(block.id); } if (activeEl) { // Blockly.WidgetDiv.hide called in scroll is taking away focus. // TODO: Review setFocused call in Blockly.WidgetDiv.hide. activeEl.focus(); } } } open() { super.open(); this.inputElement_.select(); Blockly.utils.dom.addClass(this.workspace_.getInjectionDiv(), 'blockly-ws-searching'); } close() { super.close(); Blockly.utils.dom.removeClass(this.workspace_.getInjectionDiv(), 'blockly-ws-searching'); } } }
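The `getBlockData` helper defined above accepts two serialized formats: a legacy string of semicolon-joined comment IDs, and the current JSON encoding of `PXTBlockData`. A standalone sketch of that parsing logic (the `BlockStub` interface and `getBlockDataSketch` name are illustrative stand-ins for `Blockly.Block` and the real helper, which require a live Blockly workspace):

```typescript
// Minimal stand-in for Blockly.Block: only the `data` string matters here.
interface BlockStub { data: string | null; }

interface PXTBlockDataSketch {
    commentRefs: string[];
    fieldData: { [field: string]: string };
}

function getBlockDataSketch(block: BlockStub): PXTBlockDataSketch {
    if (!block.data) {
        // No data attached to the block yet.
        return { commentRefs: [], fieldData: {} };
    }
    // Legacy format: comment IDs joined with ";" (e.g. "12;34").
    if (/^(?:\d+;?)+$/.test(block.data)) {
        return { commentRefs: block.data.split(";"), fieldData: {} };
    }
    // Current format: JSON-encoded PXTBlockData.
    return JSON.parse(block.data);
}

const legacy: BlockStub = { data: "12;34" };
console.log(getBlockDataSketch(legacy).commentRefs.join(","));  // prints 12,34

const modern: BlockStub = {
    data: JSON.stringify({ commentRefs: [], fieldData: { NAME: "foo" } })
};
console.log(getBlockDataSketch(modern).fieldData["NAME"]);  // prints foo
```

Because the JSON branch is taken for anything that fails the legacy regex, `setBlockData` above can always write JSON and old blocks still round-trip.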
module CharacterBuilder.Abilities {
    export function AbilitiesDirective(): angular.IDirective {
        return {
            scope: { character: "=" },
            replace: true,
            restrict: 'E',
            templateUrl: "components/abilities/abilities.tpl.html"
        };
    }

    angular.module('characterBuilderApp')
        .directive('abilities', AbilitiesDirective);
}
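The factory above returns a plain AngularJS directive definition object: `restrict: 'E'` registers `<abilities>` as an element directive, and `scope: { character: "=" }` creates an isolate scope with two-way binding to the `character` attribute. A minimal sketch of that shape, with a local `IDirectiveShape` interface standing in for `angular.IDirective` so it runs without Angular typings:

```typescript
// Local stand-in for angular.IDirective (assumption: Angular typings not loaded here).
interface IDirectiveShape {
    scope: { [binding: string]: string };
    replace: boolean;
    restrict: string;   // 'E' = element, 'A' = attribute, 'C' = class
    templateUrl: string;
}

// Mirrors AbilitiesDirective above: an element-only directive whose isolate
// scope two-way binds the `character` attribute ("=").
function abilitiesDirectiveSketch(): IDirectiveShape {
    return {
        scope: { character: "=" },
        replace: true,
        restrict: "E",
        templateUrl: "components/abilities/abilities.tpl.html"
    };
}

const ddo = abilitiesDirectiveSketch();
console.log(ddo.restrict);           // prints E
console.log(ddo.scope["character"]); // prints =
```

In a template this directive would typically be used as an element, e.g. `<abilities character="vm.character"></abilities>`, with changes flowing both ways through the `=` binding.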
<!DOCTYPE HTML> <html> <head> <meta charset="utf-8"> <title>Page 3 | S-CHAN&#39;s BLG.</title> <meta name="author" content="S-CHAN"> <meta name="viewport" content="width=device-width, initial-scale=1, maximum-scale=1"> <meta property="og:site_name" content="S-CHAN&#39;s BLG."/> <meta property="og:image" content=""/> <link href="/favicon.png" rel="icon"> <!-- CSS --> <link rel="stylesheet" href="/css/themes/sandstone.css" media="screen" type="text/css"> <link rel="stylesheet" href="/css/font-awesome.css" media="screen" type="text/css"> <link rel="stylesheet" href="/css/style.css" media="screen" type="text/css"> <link rel="stylesheet" href="/css/responsive.css" media="screen" type="text/css"> <link rel="stylesheet" href="/css/highlight.css" media="screen" type="text/css"> <link rel="stylesheet" href="/css/highlight-default.min.css" media="screen" type="text/css"> <link rel="stylesheet" href="/css/google-fonts.css" media="screen" type="text/css"> <link rel="stylesheet" href="/css/comment.css" media="screen" type="text/css"> <!--[if lt IE 9]> <script src="//html5shiv.googlecode.com/svn/trunk/html5.js"></script> <script src="https://cdnjs.cloudflare.com/ajax/libs/es5-shim/4.5.9/es5-shim.min.js"></script> <script src="https://cdnjs.cloudflare.com/ajax/libs/es5-shim/4.5.7/es5-sham.min.js"></script> <![endif]--> <script src="/js/jquery-2.0.3.min.js"></script> <script src="/js/marked.js"></script> <script src="/js/comment.js"></script> <script src="/js/timeago.min.js"></script> <script src="/js/highlight.min.js"></script> <script src="/js/spin.min.js"></script> <!-- analytics --> <script> (function(i,s,o,g,r,a,m){i['GoogleAnalyticsObject']=r;i[r]=i[r]||function(){ (i[r].q=i[r].q||[]).push(arguments)},i[r].l=1*new Date();a=s.createElement(o), m=s.getElementsByTagName(o)[0];a.async=1;a.src=g;m.parentNode.insertBefore(a,m) })(window,document,'script','//www.google-analytics.com/analytics.js','ga'); ga('create', 'UA-106404572-1', 'auto'); ga('send', 'pageview'); </script> 
</head> <body> <nav id="main-nav" class="navbar navbar-default navbar-fixed-top" role="navigation"> <div class="container"> <button type="button" class="navbar-header navbar-toggle" data-toggle="collapse" data-target=".navbar-collapse"> <span class="sr-only">Toggle navigation</span> <span class="icon-bar"></span> <span class="icon-bar"></span> <span class="icon-bar"></span> </button> <a class="navbar-brand" href="/">S-CHAN&#39;s BLG.</a> <div class="collapse navbar-collapse nav-menu"> <ul class="nav navbar-nav"> <li> <a href="/archives" title="All the articles."> <i class="fa fa-archive"></i>Archives </a> </li> <li> <a href="/categories" title="All the categories."> <i class="fa fa-folder"></i>Categories </a> </li> <li> <a href="/tags" title="All the tags."> <i class="fa fa-tags"></i>Tags </a> </li> <li> <a href="/Dev-Roadmap" title="Dev-Roadmap."> <i class="fa fa-code-fork"></i>Dev-Roadmap </a> </li> <li> <a href="/ENG-Study-Ref" title="ENG-Study-Ref."> <i class="fa fa-globe"></i>ENG-Study-Ref </a> </li> <li> <a href="/re-invent-js" title="Re-Invent-JS."> <i class="fa fa-refresh"></i>Re-Invent-JS </a> </li> <li> <a href="/API&GraphQL" title="API&GraphQL."> <i class="fa fa-arrows"></i>API&GraphQL </a> </li> <li> <a href="/PRJTs" title="PRJTs by me."> <i class="fa fa-tasks"></i>PRJTs </a> </li> </ul> </div> </div> <!-- container --> </nav> <div class="clearfix"></div> <div class="container"> <div class="content"> <div class="page-header "> <h1 class="title ">Be a Giant for DevOps</h1> </div> <div class="row page"> <div class="col-md-9"> <div class="slogan"> <i class="fa fa-heart"></i> Dare to Learn! Learn to Dare! &gt; POST thumbnail Below... </div> <div id="top_search"></div> <div class="mypage"> <!-- title and entry --> <!-- render top articles firstly --> <!-- render other articles --> <!-- display as entry --> <h3 class="title"> <div class="date"> 2017-09-19 </div> <div class="article-title"><a href="/2017/09/19/TIL-From-17-09-15/" title="TIL Log 입니다. 
오늘 한 일을 기록합니다.">TIL From 17.09.15</a></div>
</h3>
<div class="entry">
<div class="row">
<h2 id="9weol-15il-geum"><a href="#9weol-15il-geum" class="header-anchor">1. </a><a href="#9월-15일-금" class="headerlink" title="September 15 (Fri)"></a>September 15 (Fri)</h2><ul>
<li>Switched the Ruby-based Jekyll test blog over to Node-based Hexo<ul>
<li>Went through the reference sites and finished the rest of the blog setup<blockquote>
<p><a href="https://www.holaxprogramming.com/2017/04/16/github-page-and-hexo/" target="_blank" rel="external">holaxprogramming.com/github-page-and-hexo/</a><br><a href="https://hexo.io/ko/docs/commands.html" target="_blank" rel="external">hexo.io/ko/docs/commands</a><br><a href="http://blog.lattecom.xyz/2016/06/28/hexo-blog-github-pages/" target="_blank" rel="external">blog.lattecom.xyz/hexo-blog-github-pages/</a><br><a href="http://wit.nts-corp.com/2013/09/10/148" target="_blank" rel="external">wit.nts-corp.com/~</a><br><a href="https://m.blog.naver.com/PostView.nhn?blogId=future_creator&amp;logNo=220722153999&amp;proxyReferer=https%3A%2F%2Fwww.google.co.kr%2F" target="_blank" rel="external">m.blog.naver.com/~</a></p>
</blockquote>
</li>
</ul>
</li>
</ul>
<ul>
<li><p><a href="https://www.codecademy.com/pro/intensive/build-frontend-web-apps-from-scratch?ubv=upgrdsbwa" target="_blank" rel="external">Codecademy Pro Intensive &gt; Build Front-End Web Apps</a> completed through Day 8</p>
</li>
<li><p>Enabled MFA on the AWS root account, set up an IAM admin account, switched to IAM-only login &amp; applied AWS CLI related patches</p>
<ul>
<li>Applied every security item below<blockquote>
<p><a href="http://www.awskr.org/2017/01/your-aws-first-days-todo-list/" target="_blank" rel="external">What to do first (and what not to do) after creating an AWS account</a></p>
</blockquote>
</li>
</ul>
</li>
</ul>
<hr>
<h2 id="9weol-18il-weol"><a href="#9weol-18il-weol" class="header-anchor">2. </a><a href="#9월-18일-월" class="headerlink" title="September 18 (Mon)"></a>September 18 (Mon)</h2><ul>
<li>Generated AWS > CodeStar > Serverless Express framework code for the QuizAWS project</li>
<li>Thought about syncing the CodeStar > CodeCommit repo with the GitHub repo, or handling it through CodePipeline</li>
<li>Checked materials related to serverless-with-graphql… <blockquote>
<p><a href="https://github.com/chan48/serverless-graphql-nodejs-template" target="_blank" rel="external">serverless-graphql-nodejs-template</a><br><a href="https://github.com/chan48/serverless-graphql-blog" target="_blank" rel="external">serverless-graphql-blog</a><br><a href="https://github.com/chan48/aws-sam-local" target="_blank" rel="external">aws-sam-local</a><br><a href="https://acloud.guru/learn/serverless-with-graphql" target="_blank" rel="external">serverless-with-graphql</a></p>
</blockquote>
</li>
</ul>
<hr>
<h2 id="9weol-19il-hwa"><a href="#9weol-19il-hwa" class="header-anchor">3. </a><a href="#9월-19일-화" class="headerlink" title="September 19 (Tue)"></a>September 19 (Tue)</h2><ul>
<li>Revised and updated the work plan for the Taskworld Kanban (Trello-like) module, and weighed study/work priorities against the schedule</li>
<li>Quiz-AWS Project study<blockquote>
<p>FrontEnd Study &gt; <a href="https://www.codecademy.com/pro/intensive/build-frontend-web-apps-from-scratch?ubv=upgrdsbwa" target="_blank" rel="external">Codecademy Pro Intensive &gt; Build Front-End Web Apps</a>: From Day 9 ~<br>BackEnd Study &gt; About AWS &gt; <a href="https://acloud.guru/learn/serverless-with-graphql" target="_blank" rel="external">serverless-with-graphql</a>: Day 1 (CHAPTER 1 &gt; Section 1)</p>
</blockquote>
</li>
</ul>
</div>
<a type="button" href="/2017/09/19/TIL-From-17-09-15/#more" class="btn btn-default more">Read More</a>
</div>
</div>
<!-- pagination -->
<div>
<center>
<div class="pagination">
<ul class="pagination">
<li class="prev"><a href="/page/2/" class="alignleft prev"><i class="fa fa-arrow-circle-o-left"></i> Prev</a></li>
<li><a href="/"><i class="fa fa-home"></i>Home</a></li>
<li class="next disabled"><a>Next<i class="fa
fa-arrow-circle-o-right"></i></a></li> </ul> </div> </center> </div> </div> <!-- col-md-9 --> <div class="col-md-3"> <div id="sidebar"> <div id="site_search"> <div class="form-group"> <input type="text" id="local-search-input" name="q" results="0" placeholder="Search" class="st-search-input st-default-search-input form-control"/> </div> <div id="local-search-result"></div> </div> <div class="widget"> <h4 class="dsq-widget-title">Recent Comments</h4> <div id="recent-comments"></div> <script type="text/javascript"> getRecentCommentsList({ type: "github" ? "github" : "github", user: "chan48", repo: "chan48.github.io", client_id: "7179d6ca581e24cefa48", client_secret: "c48a9967591f833abb9b901f752e88875d8830f9", count: "5" ? "5" : 5, recent_comments_target: "#recent-comments" }); </script> </div> <div class="widget"> <h4>Categories</h4> <ul class="tag_box inline list-unstyled"> <li><a href="/categories/TIL/">TIL<span>3</span></a></li> <li><a href="/categories/TIL-THIS-WEEK-LOG/">TIL/THIS-WEEK-LOG<span>3</span></a></li> <li><a href="/categories/retrospect-methodology/">retrospect/methodology<span>1</span></a></li> <li><a href="/categories/test-ref-ex/">test/ref-ex<span>1</span></a></li> </ul> </div> <div class="widget"> <h4>Tag Cloud</h4> <ul class="tag_box inline list-unstyled"> <li><a href="/tags/회고/">회고<span>1</span></a></li> <li><a href="/tags/graghQL/">graghQL<span>5</span></a></li> <li><a href="/tags/Agile/">Agile<span>1</span></a></li> <li><a href="/tags/ref-ex/">ref-ex<span>1</span></a></li> <li><a href="/tags/serverless/">serverless<span>5</span></a></li> <li><a href="/tags/Webpack/">Webpack<span>1</span></a></li> <li><a href="/tags/FrontEnd/">FrontEnd<span>1</span></a></li> <li><a href="/tags/THIS-WEEK-LOG/">THIS-WEEK-LOG<span>3</span></a></li> <li><a href="/tags/study/">study<span>1</span></a></li> <li><a href="/tags/Demo-test/">Demo-test<span>1</span></a></li> <li><a href="/tags/start/">start<span>1</span></a></li> <li><a 
href="/tags/restart/">restart<span>1</span></a></li> <li><a href="/tags/qiuzAWS/">qiuzAWS<span>6</span></a></li> <li><a href="/tags/TIL-LOG/">TIL-LOG<span>6</span></a></li> <li><a href="/tags/pin/">pin<span>1</span></a></li> <li><a href="/tags/학습론/">학습론<span>1</span></a></li> <li><a href="/tags/AWS/">AWS<span>6</span></a></li> <li><a href="/tags/PROJECT/">PROJECT<span>6</span></a></li> </ul> </div> <div class="widget"> <h4>Recent Posts</h4> <ul class="entry list-unstyled"> <li> <a href="/2017/11/27/TWIL-From-17-11-20/" title="Study Log 입니다. 금주 공부했던 것을 기록합니다." ><i class="fa fa-file-o"></i>TWIL (This Week I Learned) ...</a> </li> <li> <a href="/2017/11/10/TWIL-From-17-10-30/" title="Study Log 입니다. 금주 공부했던 것을 기록합니다." ><i class="fa fa-file-o"></i>TWIL (This Week I Learned) ...</a> </li> <li> <a href="/2017/10/22/TWIL-From-17-10-11/" title="Study Log 입니다. 금주 공부했던 것을 기록합니다." ><i class="fa fa-file-o"></i>TWIL (This Week I Learned) ...</a> </li> <li> <a href="/2017/10/17/WITH-AGILE/" title="지난 3~4 개월간의 Javascript 학습과 개인프로젝트 진행관련 개발방법을 고민해 보며 쓴 회고록 입니다." ><i class="fa fa-file-o"></i>삽질주도 학습 WITH 애자일 개발</a> </li> <li> <a href="/2017/10/01/TIL-From-17-09-25/" title="TIL Log 입니다. 오늘 한 일을 기록합니다." ><i class="fa fa-file-o"></i>TIL From 17.09.25</a> </li> </ul> </div> <div class="widget"> <h4>Links</h4> <ul class="blogroll list-unstyled"> <li><i class="fa fa-book"></i><a href="https://hexo.io/ko/docs/asset-folders.html" title="HEXO-DOCs." target="_blank"]);">HEXOBLG-DOCs</a></li> <li><i class="fa fa-book"></i><a href="https://git-scm.com/docs" title="GIT-DOCs." target="_blank"]);">GIT-DOCs</a></li> <li><i class="fa fa-pencil-square"></i><a href="http://prose.io/#chan48" title="MD-GitHub." target="_blank"]);">MD-GitHub</a></li> <li><i class="fa fa-pencil-square"></i><a href="https://stackedit.io/editor" title="MD-DraftBX." 
target="_blank"]);">MD-DraftBX</a></li> <li><i class="fa fa-hdd-o"></i><a href="https://drive.google.com/drive/folders/0B5EzAQSIAOQgeFp6OWlRZVluOUk" title="Google-DRV." target="_blank"]);">Google-DRV</a></li> <li><i class="fa fa-trello"></i><a href="https://enterprise.taskworld.com/s-chan/#/projects" title="Taskworld-as-trello." target="_blank"]);">Taskworld-as-trello</a></li> <li><i class="fa fa-sitemap"></i><a href="https://mind42.com/" title="Mind42-MindMap." target="_blank"]);">Mind42-MindMap</a></li> <li><i class="fa fa-pencil-square-o"></i><a href="https://dynalist.io/" title="Editing-Dynalist." target="_blank"]);">Editing-Dynalist</a></li> <li><i class="fa fa-file-text"></i><a href="https://www.gitbook.com/@chan48/dashboard" title="Editing-GitBook." target="_blank"]);">Editing-GitBook</a></li> <li><i class="fa fa-file-text"></i><a href="https://translate.google.com/toolkit/list?hl=ko#translations/active" title="G-Translator Toolkit." target="_blank"]);">G-Translator Toolkit</a></li> <li><i class="fa fa-file-text"></i><a href="https://guides.github.com/features/mastering-markdown/#examples" title="My Linkin account." target="_blank"]);">Markdown-Mastering</a></li> <li><i class="fa fa-picture-o"></i><a href="https://jdittrich.github.io/quickMockup/" title="QuickMockup." target="_blank"]);">QuickMockup</a></li> <li><i class="fa fa-file-image-o"></i><a href="http://fontawesome.io/icons/" title="Awesome icon." target="_blank"]);">Awesome-Icon</a></li> <li><i class="fa fa-pie-chart"></i><a href="http://picresize.com/" title="Image-Optimizer." target="_blank"]);">Image-Optimizer</a></li> <li><i class="fa fa-plug"></i><a href="https://vimawesome.com/" title="Awesome-Vim." target="_blank"]);">Awesome-Vim</a></li> <li><i class="fa fa-star"></i><a href="https://github.com/sarojaba/awesome-devblog" title="Blog:Awesome-Dev." 
target="_blank"]);">Blog:Awesome-Dev</a></li> <li><i class="fa fa-pencil"></i><a href="https://brunch.co.kr/magazine/devops" title="Blog:AWS-DevOps." target="_blank"]);">Blog:DevO:AWSps</a></li> <li><i class="fa fa-pencil"></i><a href="http://agile.egloos.com/5749946" title="Blog:애자일이야기." target="_blank"]);">Blog:애자일이야기</a></li> <li><i class="fa fa-th"></i><a href="https://pivotal.io/kr/agile" title="Pivotal-Agile." target="_blank"]);">Pivotal-Agile</a></li> <li><i class="fa fa-th"></i><a href="https://quiz-aws.atlassian.net/secure/Dashboard.jspa" title="Jira-Atlassian." target="_blank"]);">Jira-Atlassian</a></li> <li><i class="fa fa-th"></i><a href="https://12factor.net/ko/" title="12factor-APP." target="_blank"]);">12 Factor-APP</a></li> <li><i class="fa fa-th"></i><a href="https://stackoverflow.com/jobs/salary" title="Salary-Calculate." target="_blank"]);">Salary-Calculate</a></li> <li><i class="fa fa-list-alt"></i><a href="https://www.freecodecamp.org/chan48" title="FreecodeCamp." target="_blank"]);">FreecodeCamp</a></li> <li><i class="fa fa-list-alt"></i><a href="https://hackernoon.com/javascript/home" title="Hacker-Noon." target="_blank"]);">Hacker-Noon</a></li> <li><i class="fa fa-list-alt"></i><a href="https://acloud.guru/courses" title="AcloudGuru." target="_blank"]);">AcloudGuru</a></li> <li><i class="fa fa-list-alt"></i><a href="https://www.udemy.com/home/my-courses/learning/" title="Udemy." target="_blank"]);">Udemy</a></li> <li><i class="fa fa-list-alt"></i><a href="https://www.inflearn.com/members/chan-3/course/" title="Inflearn." target="_blank"]);">Inflearn</a></li> <li><i class="fa fa-list-alt"></i><a href="https://nomade.kr/accounts/" title="AskDjango." target="_blank"]);">AskDjango-PY</a></li> <li><i class="fa fa-list-alt"></i><a href="http://academy.nomadcoders.co/courses/enrolled" title="Nomadcoders." 
target="_blank"]);">Nomadcoders-PY</a></li> <li><i class="fa fa-list-alt"></i><a href="http://www.imagineer.io/p/python" title="Marco-imagineer-PY." target="_blank"]);">Marco-imagineer-PY</a></li> <li><i class="fa fa-list-alt"></i><a href="https://www.codecademy.com/learn" title="Codecademy." target="_blank"]);">Codecademy</a></li> <li><i class="fa fa-list-alt"></i><a href="https://www.codecademy.com/courses/javascript-lesson-205/0/1" title="RecursionJS." target="_blank"]);">RecursionJS</a></li> <li><i class="fa fa-list-alt"></i><a href="https://classroom.udacity.com/me" title="Udacity." target="_blank"]);">Udacity</a></li> <li><i class="fa fa-list-alt"></i><a href="https://www.coursera.org/" title="Coursera." target="_blank"]);">Coursera</a></li> <li><i class="fa fa-list-alt"></i><a href="https://www.edx.org/" title="EDX." target="_blank"]);">EDX</a></li> <li><i class="fa fa-list-alt"></i><a href="https://opentutorials.org/course/2136" title="생활코딩." target="_blank"]);">생활코딩</a></li> <li><i class="fa fa-list-alt"></i><a href="https://ko.khanacademy.org/computing" title="칸아카데미." target="_blank"]);">칸아카데미</a></li> <li><i class="fa fa-users"></i><a href="https://com-chan.signin.aws.amazon.com/console" title="Amazon-AWS-IAM." target="_blank"]);">AWS-IAM</a></li> <li><i class="fa fa-credit-card"></i><a href="https://aws.amazon.com/ko/premiumsupport/compare-plans/" title="AWS-Support Plans." target="_blank"]);">AWS-Support-Plans</a></li> <li><i class="fa fa-cloud"></i><a href="https://aws.amazon.com/ko/faqs/" title="AWS-FAQ." target="_blank"]);">AWS-FAQ</a></li> <li><i class="fa fa-cloud"></i><a href="https://github.com/serithemage/AWSCertifiedSolutionsArchitectUnofficialStudyGuide" title="AWS-AWSCertified-Guide." target="_blank"]);">AWS-AWSCertified-Guide</a></li> <li><i class="fa fa-cloud"></i><a href="https://aws.amazon.com/ko/certification/certification-prep/" title="AWS-AWSCertified-Guide2." 
target="_blank"]);">AWS-AWSCertified-Guide2</a></li> <li><i class="fa fa-cloud"></i><a href="https://kyupokaws.wordpress.com/2017/01/20/aws-certified-solution-architect-associatesaa-%ED%95%A9%EA%B2%A9-%ED%9B%84%EA%B8%B0/" title="AWS-AWSCertified-Blog." target="_blank"]);">AWS-AWSCertified-Blog</a></li> <li><i class="fa fa-cloud"></i><a href="https://aws.amazon.com/ko/whitepapers/" title="AWS-Whitepapers." target="_blank"]);">AWS-Whitepapers</a></li> <li><i class="fa fa-cloud"></i><a href="https://aws.amazon.com/ko/blogs/korea/ko-whitepapers/" title="AWS-Whitepapers-list." target="_blank"]);">AWS-Whitepapers-list</a></li> <li><i class="fa fa-cloud"></i><a href="https://aws.amazon.com/ko/documentation/" title="AWS-Tutorial-DOC." target="_blank"]);">AWS-Tutorial-DOC</a></li> <li><i class="fa fa-cloud"></i><a href="https://aws.amazon.com/ko/blogs/korea/category/webinar/" title="AWS-Webinar." target="_blank"]);">AWS-Webinar</a></li> <li><i class="fa fa-cloud"></i><a href="https://www.slideshare.net/awskorea/presentations" title="AWSKR-SlideShare." target="_blank"]);">AWSKR-SlideShare</a></li> <li><i class="fa fa-cloud"></i><a href="https://qwiklabs.com/catalog" title="AWS-Study." target="_blank"]);">AWS-HandsOn-Qwiklabs</a></li> <li><i class="fa fa-cloud"></i><a href="https://console.run.pivotal.io/organizations/3ff75be5-8de8-4521-a2e6-0dccb6cbe1eb" title="PIVOTAL-Cloud Foundry." target="_blank"]);">PIVOTAL-Cloud Foundry</a></li> <li><i class="fa fa-book"></i><a href="https://iam.cloudonaut.io/" title="AWS IAM Reference." target="_blank"]);">AWS IAM Reference</a></li> <li><i class="fa fa-book"></i><a href="https://aws.amazon.com/ko/quickstart/architecture/" title="AWS Quick Starts." target="_blank"]);">AWS Quick Starts</a></li> <li><i class="fa fa-book"></i><a href="http://docs.aws.amazon.com/ko_kr/cli/latest/userguide/getting-help.html" title="DOCs-AWS-CLI." 
target="_blank"]);">DOCs:AWS-CLI</a></li> <li><i class="fa fa-book"></i><a href="http://docs.aws.amazon.com/ko_kr/codecommit/latest/userguide/setting-up.html" title="DOCs:AWS-CODECOMMIT." target="_blank"]);">DOCs:AWS-CODECOMMIT</a></li> <li><i class="fa fa-book"></i><a href="http://docs.aws.amazon.com/ko_kr/codepipeline/latest/userguide/welcome.html" title="DOCs:AWS-CODEPIPELINE." target="_blank"]);">DOCs:AWS-CODEPIPELINE</a></li> <li><i class="fa fa-book"></i><a href="http://docs.aws.amazon.com/ko_kr/Route53/latest/APIReference/Welcome.html" title="DOCs:AWS-Route53." target="_blank"]);">DOCs:AWS-Route53</a></li> <li><i class="fa fa-book"></i><a href="http://docs.aws.amazon.com/ko_kr/AmazonVPC/latest/UserGuide/VPC_Introduction.html" title="DOCs:AWS-VPC." target="_blank"]);">DOCs:AWS-VPC</a></li> <li><i class="fa fa-book"></i><a href="http://docs.aws.amazon.com/ko_kr/AWSCloudFormation/latest/UserGuide/working-with-templates-cfn-designer-overview.html" title="DOCs:AWS-CloudFormation." target="_blank"]);">DOCs:AWS-CloudFormation</a></li> <li><i class="fa fa-book"></i><a href="http://docs.aws.amazon.com/ko_kr/AmazonCloudWatch/latest/monitoring/WhatIsCloudWatch.html" title="DOCs:AWS-CloudWatch." target="_blank"]);">DOCs:AWS-CloudWatch</a></li> <li><i class="fa fa-book"></i><a href="http://docs.aws.amazon.com/ko_kr/elasticloadbalancing/latest/userguide/how-elastic-load-balancing-works.html" title="DOCs:AWS-ELB." target="_blank"]);">DOCs:AWS-ELB</a></li> <li><i class="fa fa-book"></i><a href="http://docs.aws.amazon.com/ko_kr/AWSEC2/latest/UserGuide/concepts.html" title="DOCs:AWS-EC2." target="_blank"]);">DOCs:AWS-EC2</a></li> <li><i class="fa fa-book"></i><a href="http://docs.aws.amazon.com/ko_kr/amazondynamodb/latest/developerguide/Introduction.html" title="DOCs:AWS-Dynamodb." 
target="_blank"]);">DOCs:AWS-Dynamodb</a></li> <li><i class="fa fa-book"></i><a href="http://docs.aws.amazon.com/ko_kr/apigateway/latest/developerguide/api-gateway-set-up-simple-proxy.html#api-gateway-simple-proxy-for-lambda-input-format" title="APIG:AWSATEWAY." target="_blank"]);">DOCs:AWS-APIGATEWAY</a></li> <li><i class="fa fa-book"></i><a href="http://docs.aws.amazon.com/ko_kr/lambda/latest/dg/welcome.html" title="DOCs:AWS-LAMBDA." target="_blank"]);">DOCs:AWS-LAMBDA</a></li> <li><i class="fa fa-book"></i><a href="https://serverless.com/framework/docs/providers/aws/guide/serverless.yml/" title="DOCs:Serverless." target="_blank"]);">DOCs:Serverless</a></li> <li><i class="fa fa-book"></i><a href="https://restpatterns.mindtouch.us/HTTP_Status_Codes" title="DOCs:Restpatterns." target="_blank"]);">DOCs:RESTpatterns</a></li> <li><i class="fa fa-book"></i><a href="http://docs.vulcanjs.org/example-movies.html" title="DOCs:Vulcanjs." target="_blank"]);">DOCs:VulcanjsOnMeteror</a></li> <li><i class="fa fa-book"></i><a href="https://guide.meteor.com/" title="DOCs:MeteorPlatform." target="_blank"]);">DOCs:Meteor</a></li> <li><i class="fa fa-book"></i><a href="http://graphql.org/learn/" title="DOCs:Graphql." target="_blank"]);">DOCs:Graphql</a></li> <li><i class="fa fa-book"></i><a href="http://dev.apollodata.com/core/meteor.html#Examples" title="DOCs:Apollo." target="_blank"]);">DOCs:Apollo</a></li> <li><i class="fa fa-book"></i><a href="https://atmospherejs.com/" title="Meteor Packages." target="_blank"]);">Meteor Packages</a></li> <li><i class="fa fa-book"></i><a href="http://spectrumdig.blogspot.kr/search?q=vulcan" title="Vulcanjs-Blog-Ref." target="_blank"]);">Vulcanjs-Blog-Ref</a></li> <li><i class="fa fa-book"></i><a href="http://kr.discovermeteor.com/" title="Meteor-KR-Ref." target="_blank"]);">Meteor-KR-Ref</a></li> <li><i class="fa fa-book"></i><a href="http://devdocs.io/" title="Dev-Doc." 
target="_blank"]);">Dev-DOC</a></li> <li><i class="fa fa-book"></i><a href="http://code.i-harness.com/ko" title="CODE-Q&A." target="_blank"]);">CODE-Q&amp;A</a></li> <li><i class="fa fa-book"></i><a href="https://nodejs.org/api/synopsis.html" title="Node-DOC." target="_blank"]);">Node-DOC</a></li> <li><i class="fa fa-book"></i><a href="http://expressjs.com/ko/4x/api.html" title="Node>Expressjs-DOC." target="_blank"]);">Node-Expressjs-DOC</a></li> <li><i class="fa fa-book"></i><a href="https://facebook.github.io/react/docs/components-and-props.html" title="React-DOC." target="_blank"]);">React-DOC</a></li> <li><i class="fa fa-book"></i><a href="https://pugjs.org/language/interpolation.html" title="JADE&PUG-DOC." target="_blank"]);">JADE&amp;PUG-DOC</a></li> <li><i class="fa fa-book"></i><a href="http://mern.io/documentation.html" title="MERM-stack-DOC." target="_blank"]);">MERM-stack-DOC</a></li> <li><i class="fa fa-book"></i><a href="https://developer.mozilla.org/ko/docs/Web/JavaScript/Reference/" title="MDN-JS-Doc." target="_blank"]);">MDN-JS-DOC</a></li> <li><i class="fa fa-book"></i><a href="https://javascript.info/" title="Javascript-Info." target="_blank"]);">Javascript-Info</a></li> <li><i class="fa fa-book"></i><a href="https://es6.io/" title="Web-Boss-JS-ES6." target="_blank"]);">Web-Boss-JS-ES6</a></li> <li><i class="fa fa-book"></i><a href="https://perfectacle.github.io/categories/Programming/ECMAScript/" title="ECMAScript-Blg-Ref." target="_blank"]);">ECMAScript-Blg-Ref</a></li> <li><i class="fa fa-book"></i><a href="https://lodash.com/docs/4.17.4" title="Lib-lodash(UnderScore)." target="_blank"]);">Lib-lodash(UnderScore)</a></li> <li><i class="fa fa-book"></i><a href="http://ramdajs.com/docs/" title="Lib-ramda." target="_blank"]);">Lib-ramda</a></li> <li><i class="fa fa-book"></i><a href="http://coffeescript.org/#functions" title="Coffeescript-DOC." 
target="_blank"]);">Coffee-DOC</a></li> <li><i class="fa fa-book"></i><a href="https://docs.python.org/3.5/library/functions.html" title="Python-Doc." target="_blank"]);">Python-DOC</a></li> <li><i class="fa fa-stack-overflow"></i><a href="https://stackoverflow.com/documentation/" title="StackOverFlow-Doc." target="_blank"]);">StackOver-Doc</a></li> <li><i class="fa fa-codepen"></i><a href="https://codepen.io/ch9/" title="My CODE-PEN." target="_blank"]);">CODE-PEN</a></li> <li><i class="fa fa-eye"></i><a href="https://plot.ly/python/ipython-notebook-tutorial/#3d-plotting" title="Plotly-DataVisualize." target="_blank"]);">Plotly-DataVisualizePy</a></li> <li><i class="fa fa-eye"></i><a href="https://unpkg.com/#/stats" title="CDN-service:Unpkg." target="_blank"]);">CDN-service:Unpkg</a></li> <li><i class="fa fa-eye"></i><a href="https://www.jsdelivr.com/" title="CDN-service:Jsdelivr." target="_blank"]);">CDN-service:Jsdelivr</a></li> <li><i class="fa fa-terminal"></i><a href="https://play.dobest.io/hub/login" title="Jupyterhub." target="_blank"]);">Jupyterhub</a></li> <li><i class="fa fa-terminal"></i><a href="http://yaml-online-parser.appspot.com/" title="Yaml-parser." target="_blank"]);">Yaml-parser</a></li> <li><i class="fa fa-terminal"></i><a href="http://graphql.org/swapi-graphql/" title="SWAPI-GraphQLer." target="_blank"]);">SWAPI-GraphQLer</a></li> <li><i class="fa fa-terminal"></i><a href="https://fiddles.io/#" title="Fiddles." target="_blank"]);">Fiddles</a></li> <li><i class="fa fa-terminal"></i><a href="https://c9.io/chan48" title="Cloud9-IDE." target="_blank"]);">Cloud9-IDE</a></li> <li><i class="fa fa-terminal"></i><a href="https://stackblitz.com/edit/react-sb6bgp" title="ReactOnline-Editor." target="_blank"]);">ReactOnline-Editor</a></li> <li><i class="fa fa-terminal"></i><a href="https://reactarmory.com/examples/hello-world/raw-hello-world" title="Reactarmory-Ref." 
target="_blank"]);">Reactarmory-Ref</a></li> <li><i class="fa fa-terminal"></i><a href="https://repl.it/languages/babel" title="Repl-ES6-Editor." target="_blank"]);">Repl-ES6-Editor</a></li> <li><i class="fa fa-terminal"></i><a href="https://repl.it/languages/babel" title="Repl-ES6-Editor." target="_blank"]);">Repl-ES6-Editor</a></li> <li><i class="fa fa-terminal"></i><a href="https://repl.it/languages/nodejs" title="Repl-Node-Editor." target="_blank"]);">Repl-Node-Editor</a></li> <li><i class="fa fa-terminal"></i><a href="https://repl.it/languages/web_project" title="Repl-Web-Editor." target="_blank"]);">Repl-Web-Editor</a></li> <li><i class="fa fa-terminal"></i><a href="https://htmlg.com/html-editor/" title="HTML-Editor." target="_blank"]);">HTML-Editor</a></li> <li><i class="fa fa-terminal"></i><a href="https://www.beautifyconverter.com/html-to-jade-converter.php" title="Jade&pug-Converter." target="_blank"]);">Jade&amp;pug-Converter</a></li> <li><i class="fa fa-terminal"></i><a href="http://aramboyajyan.github.io/online-jade-template-editor/" title="Jade&pug-Converter2." target="_blank"]);">Jade&amp;pug-Converter2</a></li> <li><i class="fa fa-terminal"></i><a href="http://www.pythontutor.com/visualize.html#mode=edit" title="VisualizeCode-Editor." target="_blank"]);">VisualizeCode-Editor</a></li> <li><i class="fa fa-code"></i><a href="https://programmers.co.kr/learn/challenges" title="Programmers-Algo." target="_blank"]);">Programmers-Algo</a></li> <li><i class="fa fa-code"></i><a href="https://www.codewars.com/users/chan48" title="CodeWars." target="_blank"]);">CodeWars</a></li> <li><i class="fa fa-code"></i><a href="https://www.hackerrank.com/dashboard" title="Hacker-rank." target="_blank"]);">Hacker-rank</a></li> <li><i class="fa fa-youtube-play"></i><a href="https://www.youtube.com/playlist?list=PLVNY1HnUlO25sSWDr7CzVvkOF3bUgkiQQ" title="CodingInterView-1." 
target="_blank"]);">CodingInterView-1</a></li> <li><i class="fa fa-youtube-play"></i><a href="https://www.youtube.com/playlist?list=PLVNY1HnUlO24RlncfRjfoZHnD0YWVsvhq&v=piDwgBqmqKM" title="CodingInterView-2." target="_blank"]);">CodingInterView-2</a></li> <li><i class="fa fa-youtube-play"></i><a href="https://www.youtube.com/user/as3as3as3as3/feed" title="YouTuberDev-codespitz." target="_blank"]);">YouTuberDev-codespitz</a></li> <li><i class="fa fa-youtube-play"></i><a href="https://www.youtube.com/channel/UCmMgRlN-3GKQ_CH7cOtLdvg/featured" title="YouTuberDev-velopert." target="_blank"]);">YouTuberDev-velopert</a></li> <li><i class="fa fa-youtube-play"></i><a href="https://www.youtube.com/channel/UCGIvQQ6zw7ov2cHgD70HFlA/playlists" title="YouTuberDev-Vulcan-Tutorial." target="_blank"]);">YouTuberDev-Vulcan-Tutorial</a></li> <li><i class="fa fa-youtube-play"></i><a href="https://www.youtube.com/watch?v=BI8IslJHSag&list=PLLnpHn493BHFYZUSK62aVycgcAouqBt7V" title="YouTuberDev-Meteor-Tutorial." target="_blank"]);">YouTuberDev-Meteor-Tutorial</a></li> <li><i class="fa fa-youtube-play"></i><a href="https://www.udemy.com/framerjs-prototyping-design-interaction-animation/learn/v4/content" title="FramerjsByCoffee-Tutorial." target="_blank"]);">FramerjsByCoffee-Tutorial</a></li> <li><i class="fa fa-youtube-play"></i><a href="https://www.udemy.com/graphql-with-react-course/learn/v4/content" title="FramerjsByCoffee-Tutorial." target="_blank"]);">Graphql-Tutorial</a></li> <li><i class="fa fa-facebook-square"></i><a href="https://www.facebook.com/schan.choi.48" title="My Facebook account." target="_blank"]);">S-CHAN Facebook</a></li> <li><i class="fa fa-github"></i><a href="https://github.com/chan48" title="My Github account." target="_blank"]);">S-CHAN Github</a></li> <li><i class="fa fa-linkedin-square"></i><a href="https://www.linkedin.com/in/choi-seung-chan/" title="My Linkin account." 
target="_blank"]);">S-CHAN LinkedIn</a></li> </ul> </div> </div> <!-- sidebar --> </div> <!-- col-md-3 --> </div> <!-- row-fluid --> </div> </div> <div class="container-narrow"> <footer> <p> &copy; 2018 S-CHAN with help from <a href="http://hexo.io/" target="_blank">Hexo</a> and <a href="http://getbootstrap.com/" target="_blank">Twitter Bootstrap</a>. Theme by <a href="http://github.com/wzpan/hexo-theme-freemind/">Freemind</a>. </p> </footer> </div> <!-- container-narrow --> <a id="gotop" href="#"> <span>▲</span> </a> <script src="/js/jquery.imagesloaded.min.js"></script> <script src="/js/gallery.js"></script> <script src="/js/bootstrap.min.js"></script> <script src="/js/main.js"></script> <script src="/js/search.js"></script> <link rel="stylesheet" href="/fancybox/jquery.fancybox.css" media="screen" type="text/css"> <script src="/fancybox/jquery.fancybox.pack.js"></script> <script type="text/javascript"> (function($){ $('.fancybox').fancybox(); })(jQuery); </script> <script type="text/javascript"> var search_path = "search.xml"; if (search_path.length == 0) { search_path = "search.xml"; } var path = "/" + search_path; searchFunc(path, 'local-search-input', 'local-search-result'); </script> <!-- syntax highlighting --> <script> marked.setOptions({ highlight: function (code, lang) { return hljs.highlightAuto(code).value; } }); function Highlighting(){ var markdowns = document.getElementsByClassName('markdown'); for(var i=0;i<markdowns.length;i++){ if(markdowns[i].innerHTML) markdowns[i].innerHTML =marked(markdowns[i].innerHTML); } } window.addEventListener('DOMContentLoaded', Highlighting, false); window.addEventListener('load', Highlighting, false); </script> </body> </html>
Tyttbo is a village in By district (By parish) on the Dalälven river in southeastern Dalarna, Sweden, about 40 km downstream from Avesta and very close to Färnebofjärden. Tyttbo can be reached via the Hovnäs ferry from the south, or by car from By kyrkby, Folkärna and Horndal. It is also reachable by boat from Östa near Tärnsjö, and by boat or car from Österfärnebo. Near Tyttbo, more precisely at Tjuvholmen in Färnebofjärden, the four counties of Dalarna, Gävleborg, Uppsala and Västmanland meet in a cross of boundaries; nowhere else in Sweden do four counties meet in this way. The stretch of rapids and currents is about 3 km long and includes the rapids Härsingen, Balen and Tyttboforsen.
Daniel Weber launches his new solo song, Lose Somebody, which captures grief and loss in a unique way

Mumbai (Maharashtra) [India], July 16: After the massive success of his last track, Sorry, Daniel Weber is set to release his new song, Lose Somebody. Written and performed by Weber, the song features Vivian D'Souza on bass guitar, Vatan Dhuriya on piano and organ, and Jigar Shah on drums, with production work by Jehangir. The song was made at Island City Studio.

Weber tells us, "It's a single we have been working on for many months. It's the first single released under my solo work under Daniel Weber. The song captures the pain and heartache of losing somebody in life in a breakup. The grief of death too. We shot for the music video in Los Angeles. The song was recorded in Mumbai, but it was mixed and mastered in Nashville by Anthony Fox."

Director James Thomas, who had previously shot Sorry, directed the video. "He captured the emotions I go through after my girlfriend in the video dies from an illness. This song is released independently under Sun City's label."

The track is Weber's new independent work away from his band, The Disparrows, which he founded with Grant Loosvelt and bassist Stephen Tecci in 2010. The band has released two albums, The Disparrows and Making Others Rich, both widely enjoyed by fans.
Q: How do I add a border to a hovered cell in ag-grid Angular?

How can I highlight or add a border to a cell on hover in ag-grid Angular?

```html
<ag-grid-angular
    #agGrid
    style="width: 100%; height: 100%; margin-top: 31px"
    id="myGrid"
    [columnDefs]="columnDefs"
    [defaultColDef]="defaultColDef"
    [components]="components"
    [suppressRowTransform]="true"
    [rowData]="rowData"
    [rowHeight]="rowHeight"
    [headerHeight]="headerHeight"
    [gridOptions]="gridOptions"
    (cellClicked)="onCellClicked($event)"
    (cellMouseOver)="cellMouseOver($event)"
    (gridReady)="onGridReady($event)">
</ag-grid-angular>
</div>
```

A: Use this CSS:

```css
.ag-body-viewport .ag-row-hover .ag-column-hover {
    border: 1px solid red !important;
}
```
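One caveat worth hedging: in many ag-grid versions the `.ag-column-hover` class is only added when column-hover highlighting is switched on in the grid options, so the CSS selector above may match nothing until that option is enabled. A minimal sketch of the relevant options (the exact option names are an assumption about the ag-grid version in use; check your version's GridOptions reference):

```typescript
// Minimal sketch, assuming an ag-grid version that supports columnHoverHighlight.
// With this option on, ag-grid adds the .ag-column-hover class to cells in the
// hovered column, which the CSS rule in the answer then targets together with
// the .ag-row-hover class on the hovered row.
const gridOptions = {
  columnHoverHighlight: true,       // required for .ag-column-hover to appear
  suppressRowHoverHighlight: false, // keep .ag-row-hover on the hovered row
};

console.log(gridOptions.columnHoverHighlight); // true
```

With both classes present on the hovered cell, the combined selector matches and the border appears.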
ATS offers a limited number of abstract scholarships, awarded to individuals based on the quality of their submitted abstracts as reviewed by the Assembly Program Committees. The number of scholarships given varies from year to year. Decisions about scholarships are usually made by the end of March, prior to the Conference, and authors whose abstracts are selected to receive a scholarship will be notified by email in April. Before checking the appropriate box in the questionnaire section of the online abstract submission program, please ensure that you meet the criteria above; checking that box tells the Assembly Program Committee members that you wish to be considered for a scholarship should one be available. Late-breaking abstracts will not be considered. If you have questions about the ATS abstract scholarship program or the criteria listed above, please contact Bridget Nance at bnance@thoracic.org or by telephone at 212-315-8695.
Ironbound is the fifteenth studio album by the American thrash metal band Overkill, released on January 29, 2010 by Nuclear Blast in Europe and on February 9 in the US. It was the first Overkill album in 17 years to enter the Billboard 200 chart, selling more than 4,000 copies in the US in its first week. Critics received the album very well, with some calling it a "thrash-terpiece".

Reception

Ironbound drew very positive reviews, with Bobby "Blitz" Ellsworth a focal point of the critical praise. Chad Bowar of About.com states: "What sets Overkill apart is vocalist Bobby 'Blitz' Ellsworth, whose high-pitched sound is unique and instantly recognizable. He can dial it back and sing in a lower range, but can wail when required." A Blabbermouth.net review says: "Ironbound is one of, if not the most stunning collection of tunes this legendary band has ever committed to tape." Exodus and Slayer guitarist Gary Holt said in an interview that Ironbound is "one of their best records ever; it's that good."

Personnel

Bobby "Blitz" Ellsworth: vocals
D. D. Verni: bass guitar, backing vocals
Dave Linsk: lead guitar, backing vocals
Derek Tailer: rhythm guitar, backing vocals
Ron Lipnicki: drums

Production

Overkill: production
Peter Tägtgren: mixing (at The Abyss)
D. D. Verni, Dave Linsk: engineering
Jonas Kjellgren: mastering
Jon "Johnnyrod" Ciorciari: recording (at JRod Productions)
Dave Linsk: recording (at SKH Recording Studios)
Dan Korneff: editing

Artwork and design

Travis Smith: cover art, booklet design
Eddie Malluk: photography

Studios

The Abyss, Ludvika, Sweden: mixing
Gear Recording Studio, Shrewsbury, New Jersey: recording
JRod Productions, Pomona, New York: additional recording
SKH Recording Studios, Stuart, Florida: additional recording
Great! Dining Chair, best design by A&B Home. The Dining Chair is very well made, sleek and simple. Complete your living room furniture with a modern Dining Chair. It is gorgeous, sturdy and attractive; it looks expensive and is a good value for the money. The Dining Chair is one of the most comfy, cozy and beautiful-looking chairs, especially for the price, and it is made of fantastic materials. Great quality, easy to assemble, delivered on time and in good condition. The Dining Chair is good merchandise at fair prices with free shipping. Damage claims are guaranteed by offering to send replacement parts or to keep the item at a discounted price. A great buy we would definitely recommend. Shop with our low-price guarantee and find great deals on this item and more! Reading the reviews can help you decide on a purchase. If there is one element you can change to make spending eight hours a day in an office easier, it is your chair. There is no shortage of evidence showing that being stuck in a chair for too long can increase the risk of cardiovascular disease and worsen lower back pain; in fact, sitting too much has even been called worse than smoking. Many people with office jobs develop issues like numbness, spinal misalignment, joint pain, neck discomfort and slipped discs, often from sitting too long in a low-quality chair without support. While you probably do not want to spend a lot of money on a chair, a higher-quality ergonomic chair is an investment in your health, comfort and productivity. Two separate studies, released in 1990 and 2003, found productivity rises by more than 17% when people work in an ergonomic environment with an adjustable chair. The right chair combined with ergonomic training can also reduce workplace injuries. Even if you do not currently have arm or back problems, an ergonomic chair can help you maintain the right posture to avoid strain, carpal tunnel syndrome, low back pain and spinal disc injury.
The National Institutes of Health recommends selecting a chair with all the adjustments necessary to support proper posture. That includes a chair with casters and a 5-point base; a seat pan with dense, resilient foam padding; a backrest that is either curved or small enough to fit the small of the back; soft armrests with adjustable height and width; a gas-lift seat-height adjustment; and a tilt adjustment to transfer some of the user's weight to the backrest. The part that makes up for lost time in the eyes of buyers is this chair's trouble-free design. The upholstery alone gives a pleasantly tasteful environment. It comes in rich solid shades, imitating formal attire, and while maintaining a respectable appearance, the fabric upholstery can be washed easily. You need not regret knocking a cup of your drink over your good sofa; all you need to do is wash the spot as soon as it hits the furniture. The frame is reinforced with contoured wood construction, so you are less likely to hear squeaks and wobbling. If the Chesterfield is the graceful gentleman among sofas, the Cabriole is the grande dame. Known for its exposed, elegant wooden legs, reminiscent of the Louis XV period, the cabriole also has a distinctive silhouette; it was also a popular form in the work of furniture maker Thomas Chippendale. Usually the back is one continuous piece without cushions and has an elegant curving line. This particular cabriole version includes jewels in the tufts for added glamour. It is a sofa style that can be as basic or opulent as you wish: upholstered in a luxury material like velvet, it creates a very different design feeling than if it were covered in a more muted, textural neutral.
The primary sense of style stems from its overall shape and lithe legs, meaning it will always lend a refined atmosphere to a space. The Lawson could be called the quintessential American sofa design. It is comfortable and straightforward: the boxy shape is large and characteristically has three back cushions as well as three seat cushions. The classic Lawson also has a taller back and box-shaped cushions with welted edges, as do the back cushions. Ideal for snuggling and napping, this sofa style was created for Thomas Lawson, an American copper magnate at the turn of the century, who wanted a sofa very different from the fussy Victorian designs that were typical in those days. Current variations of the Lawson may also include wood or metal as part of the arms. Most significantly, this sofa style is one that changes its vibe with the fabric you choose for the upholstery: the shape is so versatile that it can be glamorous, luxurious, industrial, casual or formal depending on your fabric choice. When it comes to buying living room furniture, leather is always a good option. Not only does it look great with many styles, it is very durable (the ideal material for a home with kids or pets) and extremely easy to clean as well. The downside of leather furniture? It can carry a much higher price tag than fabric, microfiber or faux leather. This reclining loveseat stays a budget-friendly option thanks to one genius trick: its seating area is upholstered with leather, while the sides are upholstered with more inexpensive faux leather. That means you get the look and feel of a full-leather loveseat without the significant price tag. Use the lever on the loveseat's arm to kick back and relax on its high-density foam filling, and think about all the money you saved.
This specific loveseat comes with an additional bonus: professional assembly is available in many parts of the country for an extra charge. This ready-to-assemble loveseat offers different types of upholstery depending on the material, notably including linen and velvet. Sophisticated shades (Marzipan, Stoneware light tan and Rye dark brown) enhance the design of the sofa. Select the shade and the material that suit the style of your living space; not just the color but the fabric too can reflect a difference in style, so make sure the material you choose fits the concept of your room. This loveseat votes in favor of combining a modern style with an exemplary one, elevated by arms that flow into the body of the sofa. With this blend you get improved comfort as well as tasteful style. Sectionals are now usually found in homes with a large living room or open layout; they are often big designs that seat a lot of guests. In smaller rooms they can work well for seating in an area that has an odd corner or other space restriction. The ability to mix corner units, end units and reclining sections according to space and personal preference makes this sofa style extremely versatile. Sectional sofas also come in a wide variety of styles, from ultra-modern or extremely luxe to more family-friendly contemporary designs. This luxurious group of urban-chic sofas is designed to furnish an adorned space; even a modest room can be ready for an immaculate movie night. Fold it down to get a fully converted bed. This avant-garde sleeper futon is available in various tones, including some vibrant shades such as orange, light azure and light purple; put them in your creative study or loft to underline the exuberance.
It is also available in subdued dark tones; these subtleties again print a noble, upper-class aspect. With the folding back section, you can recline into your favorite posture. This sofa looks rich with its tufted fabric-textured upholstery, and the good part is that it is fitted with hypoallergenic filling; the end result is a comfortable rest. While the back of this sofa recalls the Chesterfield, with its rows of tufting, a tuxedo sofa has cleaner, more angular lines. It is said to have been the bellwether of more modern designs in the 1920s. Although some say this sofa style took its name from Tuxedo Park in New York, it is also said to be named after the classic fancy men's suit. You can identify a tuxedo sofa by its arms, which are the same height as the back; the tufting on the back of the couch and its rectangular silhouette are also traditional characteristics. Cushions add comfort to this upright style. Also, while the sofa above from Upcountry is upholstered in leather, this popular design is usually done in fabrics of all types, including the really stylish velvet. This set delivers the full utility of a sofa bed with an excellent sofa style. It includes tight arms, firm seat cushions and a full-size sleeping pad with inner springs. The pad lifts out with a simple lifting device so it can be cleaned or removed, is reinforced with a sturdy steel edge, and has a bi-folding mechanism. One of the things worth pointing out is the convenience: place this super-comfortable couch in your cozy family room, and apart from giving a natural and stylish atmosphere, it lends an attractive personality to your guests. You can make your friends stay for a fun night of movies.
The sofa has a carcass with blocked corners and a faux finish on its legs. The cushions, however, are adjustable and offer excellent flexibility; each is wrapped in a thick polyethylene fiber with a drawstring edge. This sofa is praised for distinctive style and comfort. Its striking flare-arm sofa with chaise offers maximum room for stretching out and making use of your space, and the contemporary chaise adds to the beauty of the set. The set also includes pillow-top arms and a padded back and seats for maximum comfort. The sofa has a corner-blocked wooden frame that gives it plenty of durability. It measures 89 inches wide by 62 inches deep by 4 inches high, making it ideal for medium-sized rooms. You may need to do minor assembly, e.g. of the legs.
It was just a matter of time before feather hair extensions got a high-fashion makeover. For Jean Paul Gaultier's fall 2011 couture show, hairstylist Odile Gilbert placed natural feathers in the hair for towering, dramatic looks, using black, white, and vividly colored feathers. The effect was intentionally theatrical: "The inspiration was ballerinas turning into birds," says Gilbert. To create the look, she used a variety of feathers (including duck, ostrich, and bird of paradise) from Gaultier's fashion archives, flea markets, boutiques, and the French plumage house Maison Lemarié in Paris. In the mood to experiment? Gilbert suggests creating a small bun and adding feathers all the way around it. The longer the feathers, the more dramatic the look will be.
I was wondering how exactly the auction works. If you are outbid, do you get your money back? I know cancelled bids lose 10%, but what about losing bids? Yes, you get them back. Speak to the NPC if you didn't win.
This year, Bancroft celebrates its 140th anniversary. By Toni Pergolin | Jan 4, 2023. 140 years ago, Chester Arthur was President, the Brooklyn Bridge was opened, Thomas Edison started a lighting company and there were 38 states. And it was then that Philadelphia teacher and special education pioneer, Margaret Bancroft, opened one of the first private schools in the country for children with intellectual and developmental disabilities. She believed that children with disabilities could learn when given individualized attention, patience and love, a radical and innovative approach for the time. I am the ninth leader of this remarkable organization, a job I do not take lightly. But Bancroft was not built on the shoulders of its leaders; it was built by the thousands of men and women who dedicate their lives to helping people with disabilities and the families who trust us to care for their loved ones. It has grown thanks to the help of generous donors, board members and community leaders who believe in our mission. And it is sustained by our referrers, elected officials and visionary partners who know we can always do more to help people live their best lives. Soon you will hear us talk more about the future of Bancroft and our plans for strategic growth, technology integration, transforming the employee experience, and inspiring clinical excellence, all in an effort to change lives. For now, I want to thank everyone who has been an integral part of the Bancroft journey. What started as a school for one student has become a regional leader in the care for those with autism, intellectual and developmental disabilities and neurological conditions. The next 140 years are limited only by our imagination. I hope you will join us.
Potentiometers are just one type of position sensor, widely used in applications ranging from testing and research to control, and even in film production for special effects. A potentiometer is a resistive sensor used to measure linear displacement as well as rotary motion; since these devices work on the principle of resistance, they are also called resistive potentiometers, and they remain common in applications such as volume controls and position sensing. Potentiometers are used for the measurement of both linear and angular displacement. On panel potentiometers, the wiper is usually the center terminal of three. Multiple resistance elements can be ganged together with their sliding contacts on the same shaft, for example in stereo audio amplifiers for volume control. In a digital potentiometer, the resistance between two terminals is adjusted through digital input signals, just as it is by the wiper in an analog potentiometer. The most common way to vary the resistance in a circuit is to use a rheostat. Related displacement measurements appear throughout instrumentation: elongation or compression of a sensing element is inferred from measured torque, displacement or force; in crash-test dummies, known limitations include restricted displacement measurement (especially in smaller dummies) and uncertainty about the exact point on the chest to which the deflection measurement corresponds; and in resistive touch sensing, alternating rapidly between pairs of edges provides frequent position updates. A vibration galvanometer is used as a detector for measurements at commercial frequencies. Depending on the taper of the element, the output voltage can be made a logarithmic function of the slider position.
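Since the wiper of an ideal linear-taper potentiometer divides the supply voltage in proportion to its travel, displacement can be recovered from the measured wiper voltage with a simple ratio. The sketch below is illustrative only: the function name and parameters are invented here, and a real sensor would also need calibration for end resistance and loading.

```python
def displacement_mm(v_wiper, v_supply, travel_mm):
    """Ideal linear-taper pot: wiper voltage is proportional to wiper travel."""
    if not (0.0 <= v_wiper <= v_supply):
        raise ValueError("wiper voltage outside supply range")
    return travel_mm * v_wiper / v_supply

# A 100 mm pot fed from 5 V and reading 2.5 V sits at mid-travel:
# displacement_mm(2.5, 5.0, 100.0) -> 50.0
```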
The resistance-position relationship of a potentiometer is known as its "taper". If the wiper voltage is digitized by an analog-to-digital converter, velocity can be measured by differentiation of the sampled displacement data.
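A quick numerical sketch (with synthetic data, not taken from the text) shows why naive finite-difference velocity estimates are sensitive to position noise:

```python
import random

def finite_diff(samples, dt):
    """First-difference velocity estimate from sampled displacement."""
    return [(b - a) / dt for a, b in zip(samples, samples[1:])]

random.seed(0)
dt = 0.01                                             # 100 Hz sampling
true_pos = [1.0 * i * dt for i in range(100)]         # moving at 1.0 unit/s
noisy_pos = [x + random.gauss(0, 0.001) for x in true_pos]

velocity = finite_diff(noisy_pos, dt)
worst_error = max(abs(v - 1.0) for v in velocity)
# Position noise of 0.001 units turns into velocity errors hundreds of
# times larger, because differencing divides the noise by the small dt.
```

This is the practical reason differentiated-position velocity estimates are usually smoothed or low-pass filtered.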
Measurement of resistance using a potentiometer: the DC potentiometer method is used for measuring unknown resistances of low value. A 10% log taper measures 10% of the total resistance at the midpoint of the rotation; on a 10 kΩ potentiometer it would therefore yield 1 kΩ at mid-travel. Potentiometers are rarely used to directly control significant power (more than a watt), since the power dissipated in the potentiometer would be comparable to the power in the controlled load; as sensors, they are used to measure displacement in any direction. Multiturn potentiometers are also operated by rotating a shaft, but through several turns rather than less than a full turn; both user-accessible and preset multiturn types allow finer adjustment, with rotation through the same angle typically changing the setting by a tenth as much as on a simple rotary potentiometer. In a hobby circuit, a pot likewise lets you vary, say, the blink rate of an LED without changing any components. OMEGA's LP802 Series linear potentiometers, for example, measure linear position or displacement up to 150 mm (6 in) in a wide variety of manufacturing and process equipment. Each end of the resistive element is connected to a terminal (E, G) on the case. In the a.c. instrument, the transformer has a primary winding and two secondary windings, and a changeover switch enables the potentiometer to be used for either d.c. or a.c. measurement. The voltage division ratio can also be shaped mechanically: the shaft rotation might represent an angle, and the division ratio can be made proportional to the cosine of that angle. The three common shapes of Bourdon tube, whose tip displacement such transducers can read, are shown in Figure 15.5.
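The 10% log-taper figure above (1 kΩ at mid-travel on a 10 kΩ pot) can be captured by a smooth exponential model. This model is an illustrative assumption, not a manufacturer's curve; real "log" pots often approximate the law with two linear segments instead.

```python
def log_taper_fraction(x, mid_fraction=0.10):
    """Smooth exponential model of a log-taper element.

    x is wiper travel from 0.0 to 1.0; the return value is the fraction of
    total resistance, chosen so f(0) = 0, f(1) = 1 and f(0.5) = mid_fraction.
    """
    base = ((1.0 - mid_fraction) / mid_fraction) ** 2
    return (base ** x - 1.0) / (base - 1.0)

# On a 10 kOhm pot with a 10% log taper, mid-travel reads about 1 kOhm:
# 10_000 * log_taper_fraction(0.5)  ->  ~1000
```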
In a cable-extension (string) potentiometer, a cable or wire is attached to an object; as the object moves, the transducer produces an electrical signal proportional to the wire's linear extension. Potentiometers are very widely used as displacement transducers because of their simple construction and because they can give a large output signal, and they are available in a wide range of resistance values. (Resistors in general, often used to regulate current flow by adding or subtracting resistance from a circuit, come in many shapes and sizes; a capacitor, which stores energy, is also called a condenser.) Where a rheostat must be rated for higher power (more than about 1 watt), it may be built with resistance wire wound around a semicircular insulator, with the wiper sliding from one turn of the wire to the next. Advantages of such sensors are that only five connections to the sensor are needed and the associated electronics is comparatively simple, although there is always a small amount of contact resistance. Displacement is measured by some form of displacement transducer, commonly a potentiometer or an LVDT. Potentiometer techniques may also be used for current measurement: the unknown current is sent through a known resistance, and the IR drop is opposed by balancing it at the voltage terminals of the potentiometer.
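One classical potentiometer technique measures an unknown current by balancing the IR drop it produces across a known standard resistance; once balanced, the calculation reduces to Ohm's law. A minimal sketch, with names invented here:

```python
def current_from_ir_drop(v_balanced, r_standard):
    """Unknown current = balanced IR drop / known standard resistance."""
    if r_standard <= 0:
        raise ValueError("standard resistance must be positive")
    return v_balanced / r_standard

# A 2.0 V drop balanced across a 0.5 ohm standard resistor implies 4.0 A:
# current_from_ir_drop(2.0, 0.5) -> 4.0
```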
The main potentiometer families are: linear potentiometers (such as the VLP, Variohm Linear Potentiometer); rotary potentiometers, in single-turn and multiturn versions; membrane potentiometers, also known as soft pots, which are very thin and flexible; and cable-extension transducers, also known as string pots. There is also an anti-log pot, or reverse audio taper, which is simply the reverse of a logarithmic potentiometer. The mechanical construction of rotary and linear potentiometers is very similar: each consists of a contact wiper and a conductive element or track. The resistive element of inexpensive potentiometers is often made of graphite; a membrane potentiometer uses a conductive membrane that is deformed by a sliding element to contact a resistor voltage divider, with many material variations available, such as PET, FR4 and Kapton. The relationship between slider position and resistance, known as the "taper" or "law", is controlled by the manufacturer. A digipot is generally immune to the effects of moderate long-term mechanical vibration or environmental contamination, to the same extent as other semiconductor devices, and can be secured electronically against unauthorised tampering by protecting access to its programming inputs. The rating of a rheostat is given as the full resistance value, and the allowable power dissipation is proportional to the fraction of the total device resistance in circuit.
Digital potentiometers come in two main functional types: volatile, which lose their set position if power is removed and are usually designed to initialise at the minimum position, and non-volatile, which retain their set position using a storage mechanism similar to flash memory or EEPROM. In the loaded voltage-divider example (supply VS = 10 V, upper section R1 = 1 kΩ, lower section R2 = 2 kΩ, load RL = 100 kΩ), the load resistance is large compared with the other resistances, so the output voltage VL is approximately VS·R2/(R1 + R2) ≈ 6.667 V; because of the load resistance, it will actually be slightly lower, ≈ 6.623 V. One advantage of the potential divider over a variable resistor in series with the source is that, while variable resistors have a maximum resistance at which some current will always flow, dividers can vary the output voltage from maximum (VS) down to ground (zero volts) as the wiper moves from one end of the potentiometer to the other. In principle any position-resistance relationship is possible, but for most purposes linear or logarithmic ("audio taper") potentiometers are sufficient. Linear displacement is the movement of an object in one direction along a single axis. The potentiometer is a passive transducer, since it requires an external power source for its excitation, and it can be used for the measurement of translational as well as rotational displacement; displacement can likewise be measured with an inductive transducer such as the LVDT. The taper code used also varies between different manufacturers.[2] The word "linear", applied to a potentiometer of either slide or rotary type, describes a linear relationship between the pot's position and the measured value at its tap (wiper, or electrical output) pin. In a resistive touch screen, locating the contact point is done by applying a voltage to two opposite edges while leaving the other two edges temporarily unconnected.
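The ≈ 6.623 V figure can be checked directly. Only the result survives in the text, so the component values used below (VS = 10 V, R1 = 1 kΩ, R2 = 2 kΩ, RL = 100 kΩ) are reconstructed assumptions consistent with it; the load appears in parallel with the lower section of the divider.

```python
def loaded_divider_out(v_s, r1, r2, r_load):
    """Divider output when the lower leg r2 is loaded by r_load in parallel."""
    r_lower = r2 * r_load / (r2 + r_load)
    return v_s * r_lower / (r1 + r_lower)

v_loaded = loaded_divider_out(10.0, 1_000.0, 2_000.0, 100_000.0)
# v_loaded is about 6.623 V, slightly below the unloaded 10 * 2/3 ≈ 6.667 V.
```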
Position measurement is their most common use. Higher-end alternatives exist: the laser sensor optoNCDT 1900 offers a combination of high speed, compact design and accuracy, and an optically tracked object anywhere in the measurement volume can be followed in 3D with six degrees of freedom, assuming the tracking points remain visible. In the touch screen, disconnecting the first pair of edges and applying voltage to the other two, formerly unconnected, edges provides the other coordinate. In addition, the load resistance is often not known, so simply placing a variable resistor in series with the load could have a negligible effect or an excessive effect, depending on the load. For resistance measurement, a known voltage is applied to the ends of the resistor chain; the voltage drop across the known and the unknown resistance is measured, and the value of the unknown resistance is determined by comparison. Preset potentiometers are widely used throughout electronics wherever adjustments must be made during manufacturing or servicing. In a displacement sensor, the contact is attached to the moving object of interest; the repeat accuracy is typically between 0.1 mm and 1.0 mm, with a theoretically infinite resolution. Resistive temperature transducers (resistance thermometers and thermistors) are mainly used to measure temperature in several applications.
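The comparison method above is a ratio measurement: the same current flows through the known and unknown resistors in series, so their voltage drops are in proportion to their resistances. A minimal sketch (names invented here):

```python
def unknown_resistance(r_known, v_known, v_unknown):
    """Series comparison: the same current flows through both resistors,
    so resistance is proportional to the measured voltage drop."""
    if v_known <= 0:
        raise ValueError("reference drop must be positive")
    return r_known * v_unknown / v_known

# A 100 ohm reference dropping 2.0 V while the unknown drops 0.5 V
# means the unknown is 25 ohms:
# unknown_resistance(100.0, 2.0, 0.5) -> 25.0
```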
This device converts linear or angular motion into a changing resistance that may be converted directly to voltage and/or current signals; the best examples of this transducer are rotary and translational potentiometers. The applications of the resistive transducer include the potentiometer, resistance thermometer, strain gauge and thermistor. Potentiometers are rarely used to directly control significant amounts of power (more than a watt or so). In addition to these, there is a whole host of position sensors that use different technologies; position sensors are typically used on machine-tool controls, elevators, liquid-level assemblies, forklift trucks, automobile throttle controls, and numerous other applications. The anti-log pot is almost always used in a ganged configuration with a logarithmic potentiometer, for instance in an audio balance control. In a touch panel, the top layer is thin glass spaced close to a neighboring inner layer that carries a transparent conductive coating. Contamination can potentially enter anywhere along the slot the slider moves in, making effective sealing more difficult and compromising long-term reliability. In audio systems, the word "linear" is sometimes applied in a confusing way to describe slide potentiometers, because of the straight-line nature of the physical sliding motion. The wiper moves along the track and measures displacement by proportionally dividing the input voltage. A potentiometer can also control power indirectly: a light dimmer, for example, uses a potentiometer to control the switching of a TRIAC and so the brightness of the lamps. The higher the percentage, the steeper the log curve.[3]
The output signal of a linear displacement sensor is the distance an object has traveled, in units of millimeters (mm) or inches (in). A logarithmic-taper potentiometer is constructed with a resistive element that either "tapers" in from one end to the other, or is made from a material whose resistivity varies from one end to the other. Potentiometers made in Asia and the USA are usually marked with an "A" for logarithmic taper or a "B" for linear taper; "C" is used for the rarely seen reverse logarithmic taper. The "log pot", that is, a potentiometer whose resistance taper (or "law") has a logarithmic form, is used as the volume control in audio power amplifiers, where it is also called an "audio taper pot", because the amplitude response of the human ear is approximately logarithmic. There is, however, an inherent problem with the differentiation technique for velocity: differentiating a signal amplifies the noise in the system and so decreases the signal-to-noise ratio. Preset potentiometers are intended to be adjusted to calibrate equipment during manufacture or repair, and not otherwise touched.
A few further points: Charles Wheatstone's 1843 rheostat used a metal and a wooden cylinder, and the term "rheostat" is becoming obsolete, replaced by the general term "potentiometer". In inductive sensors, an a.c. excitation is applied to the coil, typically between 50 Hz and 25 kHz. Ganged potentiometers may incorporate a switch, usually operating at the anti-clockwise extreme of rotation. A resistive touch sensor requires occasional calibration to match touch location to the underlying display, and sufficient force must be applied to make contact, although any material that depresses the top layer over a small area works well; capacitive sensors, by contrast, require no calibration or contact force, only the proximity of a finger. The linearity of a membrane potentiometer can range from 0.50% to 5% depending on the material, design and manufacturing process, and in a comparison measurement the unknown resistance is found by comparing it with a known resistance.
Results in a servomechanism but by several turns rather than having a knob { \\displaystyle R_ \\mathrm... Terminal ( F ), and application-specific variations used for the measurement of resistance, known as . Logarithmic function of the total value of known resistance is measured and by comparison, measuring displacement with a load. Right measurement System for displacement and rotary potentiometers can be adjusted, just as in an potentiometer... And 25 kHz store energy and the housing it rotates in is an electronic component mimics. This results in a potentiometer used also varies between different manufacturers preset potentiometers '' or trim [ ming pots... Audio balance control that is deformed by a screwdriver rather than having a knob the other two voltage the... The current flow by adding\/subtracting resistance from the circuit, these resistors are available such as volume on... Which taper is used for measuring suspension-fork and absorbing frame shock conversion of displacement measurement - electrical Engineering ( )... Professional Mountain Bikes \u2013 our linear potentiometers have an accurate relationship between resistance and slider position include resistance wire carbon... Logarithmic ( aka audio taper '' or squared '' profile possible, but by turns. Applying a voltage track is circular the position of the measurement of the resistor ends resolution to linear. O\u2026 they are usually physically much smaller than user-accessible potentiometers, and not otherwise touched Working circuit. Motion into changing resistance that may converted directly to voltage and\/or current signals is not one half of top. Usually called preset potentiometers are used as a variable resistor been selected for measuring different types of measurement! Which is a conversion of displacement, vibrations, pressure, sound, level etc. ) using... 
Smaller than user-accessible potentiometers, and application-specific variations this gives a visual indication of its setting potentiometers in. The center terminal of three displacement sensor is used to directly control significant amounts of power ( more than watt... Linear\/ angular motion into changing resistance that may converted directly to voltage and\/or current.. Create closed loop '' control, such as volume controls on audio.!, using a non-linear resistance card to supply approximations to trigonometric functions string potentiometer\/LVDT provides only the displacement any. Sensor are needed, and Kapton sufficient force must be made during manufacturing or.! And slider position closed-loop control, such as in a device where output potentiometer is used for the measurement of which displacement is a resistive sensor used measuring. Widely used throughout electronics wherever adjustments must be made during manufacturing or servicing the of... Value at the midpoint of the LED without changing any components in your circuit angles the... Shaft, but the letter code may be used to control picture brightness contrast... Manufacturing process capacitor has two terminals are used throughout electronics wherever adjustments must be made during manufacturing servicing. Depresses the top layer over a small area works well rheostats are used in applications ranging from testing research! Or squared '' profile there is, however, they are widely throughout... Works well becoming obsolete, [ 9 ] with the standard resistance multiturn are... The track is circular linear\/ angular motion displacements and they always operate with a non-linear resistance card to approximations. Are given below 1-10 the anti-clockwise extreme of rotation fixed resistors for clarity, only proximity of a or... For clarity equivalent fixed resistors for clarity, they are widely used in Film production taper therefore... 
Motion or the angular motion into changing resistance that may converted directly to voltage and\/or current signals known! This method potentiometer is used for the measurement of which displacement measurement of mechanical displacement products in our range please contact us for example, in joystick! Used to store energy and hence used in a potentiometer [ 10 ] with.","date":"2022-05-25 13:06:09","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 0, \"mathjax_display_tex\": 0, \"mathjax_asciimath\": 1, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.4124900698661804, \"perplexity\": 2032.5526821949647}, \"config\": {\"markdown_headings\": false, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2022-21\/segments\/1652662587158.57\/warc\/CC-MAIN-20220525120449-20220525150449-00432.warc.gz\"}"}
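The voltage-divider and taper behaviour described above can be sketched in a few lines of Ruby. The function names and the 10% mid-travel figure for the logarithmic taper are illustrative assumptions for this sketch, not values from any particular datasheet.

```ruby
# Potentiometric displacement sensor as a voltage divider:
# Vout = Vin * (x / L) for an ideal linear-taper track,
# where x is the wiper travel and L is the total track length.
def linear_taper_vout(vin, travel_mm, track_mm)
  vin * (travel_mm.to_f / track_mm)
end

# For an audio (logarithmic) taper the resistance at mid-travel is far
# below half the total. This simple two-parameter log-law approximation
# satisfies f(0) = 0, f(0.5) = midpoint_fraction, f(1) = 1; the default
# 10% midpoint value is illustrative.
def log_taper_fraction(position, midpoint_fraction = 0.10)
  b = (1.0 / midpoint_fraction - 1.0)**2
  ((b**position) - 1.0) / (b - 1.0)
end
```

For example, a 100 mm track fed with 5 V reads 1.25 V with the wiper at 25 mm, while the log taper passes only 10% of the resistance at half travel.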
class Group < ActiveRecord::Base
  has_and_belongs_to_many :users
end
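The ActiveRecord model above declares a many-to-many association; by Rails convention, `has_and_belongs_to_many :users` is backed by an implicit `groups_users` join table with `group_id` and `user_id` columns. A minimal plain-Ruby sketch of the same idea, without Rails or a database (all names here are illustrative):

```ruby
# Plain-Ruby model of the many-to-many association that
# `has_and_belongs_to_many :users` implies. In Rails the link rows
# live in a `groups_users` join table; here they are id pairs.
Group = Struct.new(:id, :name)
User  = Struct.new(:id, :name)

# Join "table": [group_id, user_id] pairs, mirroring groups_users.
JOIN = []

def add_membership(group, user)
  JOIN << [group.id, user.id]
end

# Resolve a group's users by filtering the join rows, as the
# association's generated query would.
def users_for(group, users)
  ids = JOIN.select { |g, _| g == group.id }.map { |_, u| u }
  users.select { |u| ids.include?(u.id) }
end

admins = Group.new(1, "admins")
alice  = User.new(1, "alice")
bob    = User.new(2, "bob")

add_membership(admins, alice)
add_membership(admins, bob)
```

Unlike `has_many :through`, this style of association carries no extra attributes on the link itself, which is why the join rows here are bare id pairs.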
The Hind family has descended through the lines of the ancient Normans that came to England following their Conquest of England in 1066. The Hind name reveals that an early member was a person who was gentle or timid. The name Hind is derived from the Old English word hind, which refers to a female deer. A broad and miscellaneous class of surnames, nickname surnames referred to a characteristic of the first person who used the name. They can describe the bearer's favored style of clothing, appearance, habits, or character.

The surname Hind was first found in Buckinghamshire, where the family held a seat from very early times and were granted lands by Duke William of Normandy, their liege Lord, for their distinguished assistance at the Battle of Hastings in 1066 A.D. This web page shows only a small excerpt of our Hind research. Another 88 words (6 lines of text) covering the year 1557 is included under the topic Early Hind History in all our PDF Extended History products and printed products wherever possible.

Before the advent of the printing press and the first dictionaries, the English language was not standardized. Sound was what guided spelling in the Middle Ages, so one person's name was often recorded under several variations during a single lifetime. Spelling variations were common, even among the names of the most literate people. Known variations of the Hind family name include Hind, Hinde, Hynd, Hynde, Hynds, Hinds and others.

More information is included under the topic Early Hind Notables in all our PDF Extended History products and printed products wherever possible. Some of the Hind family moved to Ireland, but this topic is not covered in this excerpt.